
MIT News | Massachusetts Institute of Technology



Extracting hydrogen from rocks

Iwnetim Abate aims to stimulate natural hydrogen production underground, potentially unearthing a new path to a cheap, carbon-free energy source.

April 8, 2024



MIT engineers design flexible “skeletons” for soft, muscle-powered robots

New modular, spring-like devices maximize the work of live muscle fibers so they can be harnessed to power biohybrid bots.


Engineering household robots to have a little common sense

With help from a large language model, MIT engineers enabled robots to self-correct after missteps and carry on with their chores.

March 25, 2024


Researchers help robots navigate efficiently in uncertain environments

A new algorithm reduces travel time by identifying shortcuts a robot could take on the way to its destination.

March 14, 2024


“Imagine it, build it” at MIT

In class 2.679 (Electronics for Mechanical Systems II), a hands-on approach provides the skills engineers use to create and solve problems.


Is this the future of fashion?

Developed by the Self-Assembly Lab, the 4D Knit Dress uses several technologies to create a custom design and a custom fit, while addressing sustainability concerns.

March 7, 2024


Method rapidly verifies that a robot will avoid collisions

Faster and more accurate than some alternatives, this approach could be useful for robots that interact with humans or work in tight spaces.


Study determines the original orientations of rocks drilled on Mars

The “oriented” samples, the first of their kind from any planet, could shed light on Mars’ ancient magnetic field.

March 4, 2024


“We offer another place for knowledge”

After acquiring data science and AI skills from MIT, Jospin Hassan shared them with his community in the Dzaleka Refugee Camp in Malawi and built pathways for talented learners.

February 26, 2024


Smart glove teaches new physical skills

Adaptive smart glove from MIT CSAIL researchers can send tactile feedback to teach users new skills, guide robots with more precise manipulation, and help train surgeons and pilots.

February 20, 2024


Six MIT students selected as spring 2024 MIT-Pillar AI Collective Fellows

The graduate students will aim to commercialize innovations in AI, machine learning, and data science.

February 6, 2024


Professor Emeritus Igor Paul, an expert in product design and safety, dies at 87

Longtime professor helped develop the Department of Mechanical Engineering’s design and manufacturing curriculum and contributed to artificial joints as well as NASA inertial guidance systems.

January 31, 2024


Baran Mensah: Savoring college life in a new country

From robotics to dance, the MIT senior has made it his mission to explore as many new experiences as possible at the Institute.

January 19, 2024


Reasoning and reliability in AI

PhD students interning with the MIT-IBM Watson AI Lab look to improve natural language usage.

January 18, 2024


Richard Wiesman, professor of the practice in mechanical engineering, dies at age 69

A highly respected educator and mentor with a distinguished industry career, Wiesman inspired generations of mechanical engineering students.

January 10, 2024


Shaping the future of advanced robotics

The Google DeepMind Robotics Team



Introducing AutoRT, SARA-RT and RT-Trajectory to improve real-world robot data collection, speed, and generalization

Picture a future in which a simple request to your personal helper robot - “tidy the house” or “cook us a delicious, healthy meal” - is all it takes to get those jobs done. These tasks, straightforward for humans, require a high-level understanding of the world for robots.

Today we’re announcing a suite of advances in robotics research that bring us a step closer to this future. AutoRT, SARA-RT, and RT-Trajectory build on our historic Robotics Transformers work to help robots make decisions faster, and better understand and navigate their environments.

AutoRT: Harnessing large models to better train robots

We introduce AutoRT, a system that harnesses the potential of large foundation models, which is critical to creating robots that can understand practical human goals. By collecting more experiential training data – and more diverse data – AutoRT can help scale robotic learning to better train robots for the real world.

AutoRT combines large foundation models, such as a Large Language Model (LLM) or a Visual Language Model (VLM), with a robot control model (RT-1 or RT-2) to create a system that can deploy robots to gather training data in novel environments. AutoRT can simultaneously direct multiple robots, each equipped with a video camera and an end effector, to carry out diverse tasks in a range of settings. For each robot, the system uses a VLM to understand its environment and the objects within sight. Next, an LLM suggests a list of creative tasks that the robot could carry out, such as “Place the snack onto the countertop”, and plays the role of decision-maker, selecting an appropriate task for the robot to carry out.

In extensive real-world evaluations over seven months, the system safely orchestrated as many as 20 robots simultaneously, and up to 52 unique robots in total, in a variety of office buildings, gathering a diverse dataset comprising 77,000 robotic trials across 6,650 unique tasks.


(1) An autonomous wheeled robot finds a location with multiple objects. (2) A VLM describes the scene and objects to an LLM. (3) An LLM suggests diverse manipulation tasks for the robot and decides which tasks the robot could do unassisted, which would require remote control by a human, and which are impossible, before making a choice. (4) The chosen task is attempted, the experiential data collected, and the data scored for its diversity/novelty. Repeat.
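The loop in steps (1)–(4) can be condensed into a short sketch. This is a hedged illustration of the published description, not DeepMind's actual implementation; every interface here (describe, propose_tasks, allows, and so on) is an assumed name:

```python
# Illustrative sketch of one AutoRT data-collection episode, following
# steps (1)-(4) above. All interfaces are assumptions made for clarity.

def autort_episode(robot, vlm, llm, constitution, dataset):
    image = robot.capture_image()           # (1) robot arrives at a scene
    scene = vlm.describe(image)             # (2) VLM describes scene and objects
    tasks = llm.propose_tasks(scene)        # (3) LLM suggests diverse tasks
    # Keep tasks the LLM judges doable unassisted that also pass the
    # Robot Constitution's safety rules
    feasible = [t for t in tasks
                if llm.classify(t) == "autonomous" and constitution.allows(t)]
    if not feasible:
        return
    task = llm.choose(feasible)             # LLM acts as decision-maker
    episode = robot.execute(task)           # (4) attempt the chosen task
    episode.novelty = dataset.diversity_score(episode)
    dataset.add(episode)                    # scored data joins the pool
```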

Layered safety protocols are critical

Before robots can be integrated into our everyday lives, they need to be developed responsibly with robust research demonstrating their real-world safety.

While AutoRT is a data-gathering system, it is also an early demonstration of autonomous robots for real-world use. It features safety guardrails, one of which is providing its LLM-based decision-maker with a Robot Constitution - a set of safety-focused prompts to abide by when selecting tasks for the robots. These rules are in part inspired by Isaac Asimov’s Three Laws of Robotics – first and foremost that a robot “may not injure a human being”. Further safety rules require that no robot attempts tasks involving humans, animals, sharp objects or electrical appliances.

But even if large models are prompted correctly with self-critiquing, this alone cannot guarantee safety. So the AutoRT system also comprises layers of practical safety measures from classical robotics. For example, the collaborative robots are programmed to stop automatically if the force on their joints exceeds a given threshold, and all active robots were kept in line-of-sight of a human supervisor with a physical deactivation switch.
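As a concrete illustration of the lowest of these layers, a force-threshold guard can wrap every control command. The sketch below is a minimal, assumed version of such a guardrail (the arm interface and the limit are hypothetical), not Google's control stack:

```python
FORCE_LIMIT_N = 20.0  # assumed per-joint force threshold, in newtons

def guarded_step(arm, command):
    """Apply one control command, halting the arm if any joint force spikes."""
    arm.apply(command)
    if max(arm.joint_forces()) > FORCE_LIMIT_N:
        arm.stop()  # automatic stop, as described for the collaborative robots
        raise RuntimeError("Force limit exceeded; robot halted for safety")
```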

SARA-RT: Making Robotics Transformers leaner and faster

Our new system, Self-Adaptive Robust Attention for Robotics Transformers (SARA-RT), converts Robotics Transformer (RT) models into more efficient versions.

The RT neural network architecture developed by our team is used in the latest robotic control systems, including our state-of-the-art RT-2 model. While transformers are powerful, they can be limited by computational demands that slow their decision-making. Transformers critically rely on attention modules of quadratic complexity: if an RT model’s input doubles – by giving a robot additional or higher-resolution sensors, for example – the computational resources required to process that input rise by a factor of four, which can slow decision-making. The best SARA-RT-2 models were 10.6% more accurate and 14% faster than RT-2 models after being provided with a short history of images. We believe this is the first scalable attention mechanism to provide computational improvements with no quality loss.

SARA-RT makes models more efficient using a novel method of model fine-tuning that we call “up-training”. Up-training converts the quadratic complexity to mere linear complexity, sharply reducing the computational requirements. This conversion not only increases the original model’s speed, but also preserves its quality.
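The quadratic-versus-linear contrast can be seen in a short sketch. The kernelized variant below is in the spirit of the open-sourced linear attention families SARA-RT can plug in; it is illustrative and is not SARA-RT's specific mechanism or up-training recipe:

```python
import numpy as np

def softmax_attention(Q, K, V):
    # O(n^2) in sequence length n: materializes the full n x n matrix
    scores = Q @ K.T / np.sqrt(Q.shape[-1])
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ V

def linear_attention(Q, K, V, phi=lambda x: np.maximum(x, 0.0) + 1e-6):
    # O(n): a feature map phi plus reassociation of the matrix product
    # means no n x n matrix is ever formed
    Qp, Kp = phi(Q), phi(K)
    KV = Kp.T @ V                     # (d x d) summary, independent of n
    normalizer = Qp @ Kp.sum(axis=0)  # per-query normalization
    return (Qp @ KV) / normalizer[:, None]

n, d = 512, 64
Q, K, V = (np.random.randn(n, d) for _ in range(3))
out = linear_attention(Q, K, V)  # approximates softmax_attention(Q, K, V)
```

Because the summary matrix is independent of sequence length, doubling the input no longer quadruples the attention cost.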

We designed our system for usability and hope many researchers and practitioners will apply it, in robotics and beyond. Because SARA provides a universal recipe for speeding up Transformers, without need for computationally expensive pre-training, this approach has the potential to massively scale up use of Transformers technology. SARA-RT does not require any additional code as various open-sourced linear variants can be used.

When we applied SARA-RT to a state-of-the-art RT-2 model with billions of parameters, it resulted in faster decision-making and better performance on a wide range of robotic tasks.

SARA-RT-2 model for manipulation tasks. Robot’s actions are conditioned on images and text commands.

And with its robust theoretical grounding, SARA-RT can be applied to a wide variety of Transformer models. For example, applying SARA-RT to Point Cloud Transformers - used to process spatial data from robot depth cameras - more than doubled their speed.

RT-Trajectory: Helping robots generalize

It may be intuitive for humans to understand how to wipe a table, but there are many possible ways a robot could translate an instruction into actual physical motions.

We developed a model called RT-Trajectory, which automatically adds visual outlines that describe robot motions in training videos. RT-Trajectory takes each video in a training dataset and overlays it with a 2D trajectory sketch of the robot arm’s gripper as it performs the task. These trajectories, in the form of RGB images, provide low-level, practical visual hints to the model as it learns its robot-control policies.
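The core preprocessing step – drawing the gripper's 2D path onto each RGB frame – can be sketched in a few lines. The polyline rendering below is an assumption for illustration, not RT-Trajectory's exact overlay format:

```python
import numpy as np

def overlay_trajectory(frame: np.ndarray, path_xy: list[tuple[int, int]],
                       color=(0, 255, 0)) -> np.ndarray:
    """Return a copy of `frame` with the gripper path drawn over it."""
    out = frame.copy()
    for (x0, y0), (x1, y1) in zip(path_xy, path_xy[1:]):
        # naive line rasterization between consecutive waypoints
        steps = max(abs(x1 - x0), abs(y1 - y0), 1)
        for t in np.linspace(0.0, 1.0, steps + 1):
            x = int(round(x0 + t * (x1 - x0)))
            y = int(round(y0 + t * (y1 - y0)))
            if 0 <= y < out.shape[0] and 0 <= x < out.shape[1]:
                out[y, x] = color
    return out

frame = np.zeros((224, 224, 3), dtype=np.uint8)       # stand-in camera frame
hint = overlay_trajectory(frame, [(20, 200), (120, 120), (200, 40)])
```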

When tested on 41 tasks unseen in the training data, an arm controlled by RT-Trajectory more than doubled the performance of existing state-of-the-art RT models: it achieved a task success rate of 63%, compared with 29% for RT-2.

Traditionally, training a robotic arm relies on mapping abstract natural language (“wipe the table”) to specific movements (close gripper, move left, move right), making it hard for models to generalize to novel tasks. In contrast, an RT-Trajectory model enables RT models to understand "how to do" tasks by interpreting specific robot motions like those contained in videos or sketches.

The system is versatile: RT-Trajectory can also create trajectories by watching human demonstrations of desired tasks, and even accept hand-drawn sketches. And it can be readily adapted to different robot platforms.

Left: A robot controlled by an RT model trained with a natural-language-only dataset is stymied when given the novel task “clean the table”. A robot controlled by RT-Trajectory, trained on the same dataset augmented by 2D trajectories, successfully plans and executes a wiping trajectory.

Right: A trained RT-Trajectory model given a novel task (“clean the table”) can create 2D trajectories in a variety of ways, assisted by humans or on its own using a vision-language model.

RT-Trajectory makes use of the rich robotic-motion information that is present in all robot datasets but currently under-utilized. RT-Trajectory not only represents another step along the road to building robots able to move with efficient accuracy in novel situations, but also unlocks knowledge from existing datasets.

Building the foundations for next-generation robots

By building on the foundation of our state-of-the-art RT-1 and RT-2 models, each of these pieces helps create ever more capable and helpful robots. We envision a future in which these models and systems can be integrated to create robots with the motion generalization of RT-Trajectory, the efficiency of SARA-RT, and the large-scale data collection of systems like AutoRT. We will continue to tackle the challenges in robotics today and to adapt to the new capabilities and technologies of more advanced robotics.

Three Trends in Stanford Robotics Research

In the latest industry brief, learn about how scholars are advancing more adaptive, assistive robotics and pushing forward autonomous technology.


Stanford students experiment with a robotic arm in a Stanford robotics lab on campus. | Winny Lucas

The COVID-19 pandemic has drastically changed how we operate within our daily lives and across industries. More companies moved toward automating processes and implementing robotics. While that move might inoculate some industries from the fragility of systems built solely on human labor, it would be a mistake to move to full automation, which would worsen workers’ economic power and create a wider power imbalance between those who control technology and those who don’t. The best approach lies somewhere in the middle, where we can combine human intelligence with robotic capabilities to create more durable and adaptable systems. 

Researchers at Stanford are focused on applying innovative machine learning techniques in robotics to enable higher levels of human-interaction ability. A new Stanford HAI industry brief details some of the cutting-edge research with applications in our personal lives and across industries, including manufacturing, healthcare, and autonomous vehicles. Following are three trends in robotics research at Stanford.


More adaptive robots: New robotic learning techniques – some involving learning from human demonstration, adaptive learning, optimization, and more – are leading to more useful robotics. Robot capabilities have grown to become more adaptable to dynamically changing environments while solving highly complex problems, making robotics more suitable for a wider array of industrial applications, including manufacturing insertion and manipulation tasks. 

Robotic helpers: Another research focus is human-robot interaction, including assistive robotics, medical robotics, and human augmentation. Scholars in these areas focus on the ability to interpret, adapt, and enhance human behavior. By creating robotics that are responsive to human input, they are amplifying human skills, such as with teleoperated surgery, and improving quality of life, such as assisting patients in dressing and bathing.

Better autonomous technologies: Mobility represents another large and fundamental problem space within robotics – it brings together the need for capabilities across human interaction, adaptation in dynamic environments, perception, and complex decision making. Autonomous vehicles have enormous potential within transportation and the future of supply chain logistics. New Stanford research is addressing essential problems within those applications, including multiple object detection, safe route planning during sensor failure, navigating around humans, and more – all of which require innovative usage of artificial intelligence. 

Recent breakthroughs in machine learning have led to a huge growth in robotic intelligence and thus widened the applications of robotics within our personal and professional lives. Dig into this recent industry brief to learn more about research in these sectors and see the latest advances from Stanford faculty. 

Stanford HAI's mission is to advance AI research, education, policy, and practice to improve the human condition.


Having a machine learning agent interact with its environment requires true unsupervised learning, skill acquisition, active learning, exploration and reinforcement, all ingredients of human learning that are still not well understood or exploited through the supervised approaches that dominate deep learning today. Our goal is to improve robotics via machine learning, and improve machine learning via robotics. We foster close collaborations between machine learning researchers and roboticists to enable learning at scale on real and simulated robotic systems.


Position paper | Open access | Published: 28 January 2022

Human-centered AI and robotics

Stephane Doncieux (ORCID: orcid.org/0000-0003-1541-054X) 1, Raja Chatila 1, Sirko Straube 2 and Frank Kirchner 2,3

AI Perspectives, volume 4, article number 1 (2022)


Abstract

Robotics has a special place in AI, as robots are connected to the real world and increasingly appear in humans' everyday environments, from home to industry. Apart from cases where robots are expected to completely replace them, humans will largely benefit from real interactions with such robots. This is true not only for complex interaction scenarios like robots serving as guides, companions or team members, but also for more predefined functions like the autonomous transport of people or goods. More and more, robots need suitable interfaces to interact with humans in a way that humans feel comfortable with and that takes into account the need for a certain transparency about the actions taken. This paper describes the requirements and the state of the art for human-centered robotics research and development, including verbal and non-verbal interaction, understanding and learning from each other, as well as the ethical questions that have to be dealt with if robots are to be included in our everyday environment, influencing human life and societies.

Introduction

Thirty years ago, people learned in school that the automation of facilities was replacing human workers, but over time it became clear that work profiles were changing and that new types of work were created by this development, so the effect was a transformation of industry rather than a mere replacement of work. Now we see AI systems becoming increasingly powerful in many domains that were initially solvable only by human intelligence and cognition, which has started this debate anew. Examples of AI beating human experts at chess [1] or Go [2], for instance, cause significant enthusiasm and, at the same time, concern about where societies are going as robotics and AI come into wide use. A closer look shows, however, that although the performance of AI in such selected domains may outrun that of humans, the mechanisms and algorithms applied do not necessarily resemble human intelligence and methodology, and may not involve any kind of cognition at all. In addition, AI algorithms are application-specific, and their transfer to other domains is not straightforward [3].

Robots using AI mark an advancement from pure automation systems to intelligent agents in the environment that can work not only in isolated factory areas, but also in unstructured or natural environments and in direct interaction with humans. The application areas of robots are highly diverse, such that robots might influence our everyday life in the future in many ways. Even without direct contact with a human being, robots are sought to support human ambitions, e.g. for surface exploration or for the installation, inspection and maintenance of infrastructure in our oceans [4, 5] or in space [6–8]. Everywhere, the field of robotics is an integrator for AI technology, since complex robots need to be capable in many ways: they have the ability to act and thus have a physical impact on their environment. Robots therefore create opportunities for collaboration and empowerment that are more diverse than what a computer-only AI system can offer. A robot can speak or show pictures through an embedded screen, but it can also make gestures or physically interact with humans [9], opening many possible interactions for a wide variety of applications. Interactions that benefit children with autism [10, 11] or the elderly [12] have been demonstrated with robots that are called social robots [13, 14], as they put a strong emphasis on the robot's social skills. Mechanical skills are also important for empowering humans, for instance through collaborative work in teams involving both robots and humans [15, 16]. Such robots are called cobots: collaborative robots that share the physical space of a human operator and can help to achieve a task by handling tools or parts to assemble. Cobots can thus help the operator achieve a task with greater precision while limiting the trauma associated with repetitive motions, excessive loads or awkward postures [17]. Similar robots can be used in other contexts, for instance in rehabilitation [18, 19].

If humans and robots are to work together in such a close way, humans must have a certain trust in the technology and also an impression of understanding what the robot is doing and why. Providing robots with the ability to communicate and interact naturally with humans would minimize the adaptation required on the human side. Making this a requirement, so that humans can actually work and interact with robots in the same environment, complements the view of human-centered AI as a technology designed for the collaboration with and empowerment of humans [20].

After examining the specificity of robotics from an AI point of view in the next section, we discuss the requirements of human-centered robotics and, in light of current research on these topics, examine the following questions: How can a robot interact with humans? How can it understand and learn from a human? How can the human understand the robot? And, finally, what ethical issues does this raise?

AI and robotics

A robot is a physical agent that is connected to the real world through its sensors and effectors [ 21 ]. It perceives the environment and uses this information to decide what action to apply at a particular moment (Fig.  1 ). These interactions of an autonomous robot with its environment are not mediated by humans: sensor data flows shape perceptions which are directed to the decision or planning system after some processing, but without any human intervention. Likewise, when an autonomous robot selects an action to apply, it sends the corresponding orders directly to its motors without going through any human mediated process. Its actions have an impact on the environment and influence future perceptions. This direct relation of the robot with the real world thus raises many challenges for AI and takes robotics away from the fields in which AI has known its major recent successes.

Figure 1. A typical AI system interacts with a human user (search engine, recommendation tool, translation engine, etc.). The human user launches the request, the result is intended to be perceived by him or her, and there is in general no other connection to the real world. The system is thus not active in the real world; only the human is. A robotic system is active: it directly interacts with its environment through its perceptions and actions. Humans may be part of the environment, but otherwise are not involved in the robot control loop, at least for autonomous robots.

When it was first coined in 1956 at the Dartmouth College workshop, AI was defined as the problem of “making a machine behave in ways that would be called intelligent if a human were so behaving” [22]. This definition has evolved over time; a traditional definition now states that “AI refers to machines or agents that are capable of observing their environment, learning, and based on the knowledge and experience gained, taking intelligent action or proposing decisions” [23]. This view of AI includes many of the impressive applications that have appeared since Watson's victory at the Jeopardy! quiz show in 2011, from recommendation tools and image recognition to machine translation software. These major successes of AI actually rely on learning algorithms, and in particular on deep learning algorithms. Their results heavily depend on the data they are fed with. That the design of the dataset is critical for the returned results was clearly demonstrated by Tay, the learning chatbot launched in 2016 by Microsoft, which tweeted racist, sexist and anti-Semitic messages after less than 24 hours of interactions with users [24]. Likewise, despite impressive results in natural language processing, as demonstrated by Watson's success on the Jeopardy! show, this system has had trouble being useful for applications in oncology, where medical records are frequently ambiguous and contain subtle indications that are clear to a doctor but not straightforward for Watson's algorithms to extract [25]. The “intelligence” of these algorithms thus depends heavily on the datasets used for learning, which should be complete, unambiguous and fair. These datasets are external to the system and need to be carefully prepared.

Typically, AI systems receive data in the form of images or texts generated or selected by humans and send their results directly to the human user. Contrary to robots, such AI systems are not directly connected to the real world and critically depend on humans at different levels. Building autonomous robots is thus part of a more restrictive definition of AI based on the whole intelligent-agent design problem: “an intelligent agent is a system that acts intelligently: What it does is appropriate for its circumstances and its goal, it is flexible to changing environments and changing goals, it learns from experience, and it makes appropriate choices given perceptual limitations and finite computation” [26].

The need to face the whole-agent problem makes robotics challenging for AI, but robotics also raises other challenges. A robot is in a closed-loop interaction with its environment: any error at some point may be amplified over time or create oscillations, calling for methods that ensure stability, at least asymptotically. A robot moves in a continuous environment, most of the time with either fewer degrees of freedom than required – underactuated systems, like cars – or more degrees of freedom than required – redundant systems, like humanoid robots. Both conditions require special strategies to make the system act in an appropriate way. Likewise, the robot relies on its own sensors to make decisions, potentially leading to partial observability. Sensors and actuators may also be a source of errors because of noise or failures. These issues can be abstracted away so that AI can focus on high-level decisions, but doing so limits the capabilities that are reachable for the robot, as building the low-level control part of the robot requires making decisions in advance about what the robot can do and how it can achieve it: does it need position control, velocity control, force control or impedance control (controlling both force and position)? Does it need slow but accurate control, or fast and rough control? For a multi-purpose robot like a humanoid, deciding this a priori limits what the robot can achieve, and considering control together with planning or decision-making in a unified framework opens the possibility to better coordinate the tasks the robot has to achieve [27, 28].
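To make the control-mode question concrete, the toy single-joint controllers below contrast stiff position control with compliant, impedance-style control. The gains are arbitrary assumptions, not values from the paper:

```python
# q: joint position, dq: joint velocity, q_target: desired position
KP, KD = 80.0, 8.0              # stiff gains for precise position tracking
STIFFNESS, DAMPING = 12.0, 2.0  # soft "virtual spring-damper" for contact

def position_control(q, dq, q_target):
    # High-gain PD control: accurate and repeatable, but rigid on impact
    return KP * (q_target - q) - KD * dq

def impedance_control(q, dq, q_target):
    # Low-gain spring-damper: tracks loosely but yields safely under
    # external forces, which matters when humans share the workspace
    return STIFFNESS * (q_target - q) - DAMPING * dq
```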

In the meantime, robotics also creates unique opportunities for AI. A robot has a body, and this embodiment offers alternative possibilities for solving the problems it faces. Morphological computation is the ability of materials to take over some of the processes normally attributed to control and computation [29]. It may drastically simplify complex tasks. Grasping with rigid grippers requires, for instance, determining where to put the fingers and what effort to exert on the object. The same task with granular-jamming grippers, or any other gripper made of soft and compliant materials, is much simpler, as one basically just activates grasping without any particular computation [30]. Embodiment may also help to deal with one of the most important problems in AI: symbol grounding [31]. Approaches like Watson rely on a huge text dataset in which the relevant relations between symbols are expected to be explicitly described. An alternative is to let the robot experience such relations through interactions with the environment and the observation of their consequences. Pushing an object and observing what has moved clearly shows the object's boundaries without the need for a large database of similar objects; this is called interactive perception [32]. Many concepts are easier to understand when interaction can be taken into account: a chair can be characterised by the ability to sit on it, so if the system can experience what sitting means, it can guess whether an object is a chair without needing a dataset of labelled images containing similar chairs. This is the notion of affordance, which associates perception, action and effect [33]: a chair is sittable, a button pushable, an object graspable, etc.
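A toy version of interactive perception fits in a few lines: push an object, then diff the camera images taken before and after the push. The changed pixels outline the object, with no labelled dataset at all. The threshold is an arbitrary assumption:

```python
import numpy as np

def object_mask(before: np.ndarray, after: np.ndarray, thresh: int = 25) -> np.ndarray:
    """Boolean mask of pixels that changed when the robot pushed.

    `before` and `after` are HxWx3 uint8 camera images captured around the
    push; pixels whose summed channel difference exceeds `thresh` likely
    belong to the moved object (or to the space it uncovered).
    """
    diff = np.abs(after.astype(int) - before.astype(int)).sum(axis=-1)
    return diff > thresh
```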

Robots are a challenge for AI, but also an opportunity to build an artificial intelligence that is embodied in the real world and thus close to the conditions that allowed the emergence of human intelligence. Robots have another specificity: humans are explicitly out of the interaction loop between the robot and its environment. The gap between robots and humans is thus larger than for other AI systems. Current robots on the market are designed for simple tasks with limited or even no interaction (e.g. vacuum cleaning). This situation can be overcome only if the goal of a human-centered robotic assistant is properly addressed, because the robot has to reach a certain level of universality to be perceived as an interaction partner. One component alone, such as speech recognition, is not enough to satisfy the needs of proper interaction.

Requirements of human-centered AI and robotics

All humans are different. While they share some common behaviours, each human has specificities that may further change over time. A human-centered robot should deal with this in order to properly collaborate with humans and empower them. It should therefore be robust and adaptive to unknown and changing conditions. Every robot is engaged in an interaction with its environment that can be perturbed in different ways: a walking robot may slip on the ground, a flying one may experience wind gusts. Adaptation has thus been a core objective of robotics since its advent, in all of its fields, from control to mechanics and planning. All fields of robotics aim at the goal of a robot that can ultimately deal with the changes it is confronted with, but these changes are, in general, known to the robot designer, who has anticipated the strategies to deal with them. With these strategies, one tries to build methods that can, to some extent, deal with perturbations and changes.

Crafting the robot's environment and simplifying its task is a straightforward way to control the variability the robot can be subject to. The application of this principle to industry has led to the large-scale deployment of robots integrated into production lines built explicitly to make their work as simple as possible. One new application of robotics has developed rapidly since the 2000s: autonomous vacuum cleaners. These robots are not locked up in cages, as they move around in uncontrolled environments, but despite the efforts deployed by engineers they may still run into trouble in certain situations [34]. When a problem occurs, the user has to discover where it comes from and change their own home, or the way the robot is used, so that the situation does not occur again. Adaptation is thus on the human user's side. Human-centered robotics aims at building robots that can collaborate with humans and empower them. Such robots should first of all not be a burden for their human collaborators, and should exhibit a high level of autonomy [35].

The more variable the tasks and the environments in which they are carried out, the more difficult it is to anticipate all the situations that may occur. Human-centered robots are supposed to be in contact with humans and thus to experience their everyday environment, which is extremely diverse. Current robots clearly have trouble reacting appropriately to situations that have not been taken into account by their designer. When an unexpected situation occurs and results in a robot failure, a human-centered robot is expected, at the very least, to avoid repeating this failure indefinitely. This implies an ability to exploit its experience to improve its behaviour: a human-centered robot needs to possess a learning ability. Learning is the ability to exploit experience to improve the behaviour of a machine [36]. Robotics represents a challenge for all learning algorithms, including deep learning [37]. Reinforcement learning algorithms aim at discovering the behaviour of an agent from a reward that tells it whether it behaves well or not. From an indication of what to do, it searches for how to do it. It is thus a powerful tool for making robots more versatile and less dependent on their initial skills, but reinforcement learning is notoriously difficult in robotics [38]. One of the main reasons is that a robot is in a continuous environment, with continuous actions, in a context that is in general partially observable and subject to noise and uncertainty. A robot that successfully learns to achieve a task owes a significant part of its success to the appropriate design of the state and action spaces that learning relies on. Different kinds of algorithms exist to explore the possible behaviours and keep the ones that maximise the reward [39], but for all of them, the larger the state and action spaces, the more difficult the discovery of appropriate behaviours. In the meantime, a small state and action space limits the robot's abilities. A human-centered robot is expected to be versatile, so it is important to avoid limiting its capabilities too strongly. A solution is to build robots with an open-ended learning ability [40, 41], that is, with the ability to build their own state and action spaces on the fly [42]. The perception of their environment can be structured by their interaction capability (Fig. 2). The skills they need can be built on the basis of an exploration of possible behaviours. In a process inspired by child development [43], this search process can be guided by intrinsic motivations, which can replace the task-oriented reward used in reinforcement learning, allowing the robot to bootstrap the acquisition of world models and motor skills [44]. This adaptation capability is important to make robots able to deal with the variability of human behaviours and environments, and to put adaptation on the robot's side instead of the human's, but it is not enough to make robots human-centered.
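A minimal tabular sketch of the intrinsic-motivation idea: replace the task reward with a count-based novelty bonus so that exploration bootstraps itself. This is an illustrative toy, not a method from the paper; all constants are arbitrary assumptions:

```python
import random
from collections import defaultdict

visits = defaultdict(int)          # state visit counts
q = defaultdict(float)             # Q-values over (state, action) pairs
ALPHA, GAMMA, EPS = 0.1, 0.9, 0.2  # learning rate, discount, exploration

def intrinsic_reward(state):
    # Count-based novelty bonus: rarely visited states pay more, standing
    # in for the task-oriented reward of ordinary reinforcement learning
    visits[state] += 1
    return 1.0 / visits[state] ** 0.5

def choose_action(state, actions):
    # Epsilon-greedy over the learned values
    if random.random() < EPS:
        return random.choice(actions)
    return max(actions, key=lambda a: q[(state, a)])

def update(state, action, next_state, actions):
    # Standard Q-learning backup, driven by the intrinsic reward
    r = intrinsic_reward(next_state)
    best_next = max(q[(next_state, a)] for a in actions)
    q[(state, action)] += ALPHA * (r + GAMMA * best_next - q[(state, action)])
```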

Figure 2. A PR2 robot engaged in an interactive perception experiment to learn a segmentation of its visual scene [93, 94]. The interaction of the robot with its surrounding environment provides data to learn to discriminate objects that can be moved by the robot from the background. (Copyright: Sorbonne Université)

The main reason is that humans play a marginal role, if any, in this process. A human-centered robot needs to have or to develop human-specific skills. To do so, it first needs to be able to interact with humans. This can be done in different ways, which are introduced, together with the challenges they raise, in the “Humans in the loop” section. It also needs to understand humans; the “Understanding humans and human intentions” section discusses this topic. Based on this understanding, robots may have to adapt their behaviour. Humans are used to transmitting their knowledge and skills to other humans: they can teach, explain or show the knowledge they want to convey. Providing a robot with a particular piece of knowledge is done through programming, a process that requires strong expertise. A human-centered robot needs to provide other means of knowledge transmission. It needs to be able to learn from humans; see the “Learning from humans” section for a discussion of this topic. Last but not least, humans need to understand what robots know, and what they can and cannot do. This is not straightforward, in particular in the context of the current trend of AI that mostly relies on black-box machine learning algorithms [45]. The section on making robots understandable examines this topic in a robotics context.

Humans in the loop

The body of literature about the interaction of humans with computers and robots is huge and contains metrics [46, 47], taxonomies [48] and other kinds of descriptions and classifications trying to establish criteria for the possible scenarios. Often a certain aspect, such as safety [49], is the focus. Still, a structured and coherent view has not been established, so it remains difficult to compare approaches directly within a universal framework [50]. Setting this ongoing discussion aside, we take a more fundamental view in what follows and describe what is actually possible. A human has three ways to interact with robots: physical interaction, verbal interaction and non-verbal interaction. Each of these interaction modalities has its own features and complexities, and creates its own requirements.

Physical interaction

As a robot has a physical body, any of its movements is likely to create a physical interaction with a human. This may be involuntary, for instance if the robot hits a human it has not perceived, but physical interaction is also used on purpose, as when gestures are the main target. Physical interaction between humans and robots has gained much attention over the past years thanks to significant advancements in two main areas of robotics. On the one hand, new mechanical designs of robotic systems integrate compliant materials as well as compliant elements such as springs. On the other hand, on the control side, it has become possible to effectively control compliant structures because of the increased computational power of embedded micro-controllers. Another reason is the availability of new, smaller and yet very powerful sensor elements to measure the forces applied to mechanical structures. This has led to the implementation of control algorithms that can react extremely rapidly to external forces applied to the mechanical structure. A good overview of the full range of applications and of the several advancements made in recent years can be found in [51].

These advancements were mandatory for the safe use of robotic systems in direct contact with human beings in highly integrated interaction scenarios like rehabilitation. Rehabilitation opens up enormous possibilities for the immediate restoration of mobility, and thus quality of life (see, e.g., the scene with an exoskeleton and a wheelchair depicted in Fig. 3), while at the same time promoting the human neuronal structures through sensory influx. Furthermore, the above-mentioned machine learning methods, especially in their deep learning form, are suitable for observing and even predicting accompanying neural processes in the human brain [52]. By observing the human electro-encephalogram, it becomes possible to predict the so-called lateral readiness potential (LRP) – which reflects the process by which certain brain regions prepare deliberate extremity movements – up to 200 ms before the actual movement occurs. This potential still occurs in people even after lesions or strokes and can be predicted by AI methods. In experimental studies, the prediction of an LRP was used to actually perform the intended human movement via an exoskeleton. By predicting the intended movement at an early stage and controlling the exoskeleton mechanics in time, the human being experiences the intended movement as being consciously performed by him or herself.

Figure 3. An upper-body exoskeleton integrated into a wheelchair can support patients in doing everyday tasks as well as the overall rehabilitation process. (Copyright: DFKI GmbH)

As appealing and promising as such scenarios sound, it is necessary to consider the implications of having an “intelligent” robot acting in direct contact with humans. Several aspects need to be considered, and they pose challenges in several ways [53]. To start with, we need to consider the mechanical design and the kinematic structure much more deeply than we would have to in other domains. First of all, there is the issue of human safety: in no way can the robot be allowed to harm its human interaction partner. Safety is therefore usually considered on three different levels:

On the level of mechanical design, we must ensure that compliant mechanisms are used that absorb the energy of potential impacts with an object or a human. This can be done in several ways, for example by integrating spring-like elements into the actuators, working in series with a motor/gear assembly. This usually allows the spring to absorb any impact energy, but on the other hand it decreases the stiffness of the system, which is a problem when it comes to very precise control with repeatable motions, even under load.

On the level of control, the control loops can be used to implement what is basically an electronic spring. This is done by measuring the forces and torques at the motor and controlling the actuators based on these values instead of on the position signal only. Control based on position ensures very stiff, extremely precise and repeatable system performance, while torque control is somewhat less precise. It further requires a nested control approach that combines position and torque control in order to reach the desired joint position while respecting the torque limits set by the extra control loop. Overall, the effect is similar to that of a mechanical spring, as the robot will immediately retract (or stop advancing) as soon as external forces are measured and torque limits are violated. This may sound like a pure control problem for which AI technologies are not required, but the problem quickly becomes NP-hard if the robot consists of many degrees of freedom, as a humanoid robot does. In such cases, deep neural network strategies are used to find approximations to the optimal control scheme [54]. Yet there are cases where even higher levels of cognitive AI are required: when the torque limits on the joints contradict the stability of the robot's standing or walking behavior, for instance, or when the torque limits must deliberately be surpassed, e.g. if the robot needs to drill a hole in a wall. In this case some joints need to be extremely stiff in order to provide enough resistance to penetrate the wall with the drill. These cases require higher levels of spatio-temporal planning and reasoning to correctly predict the context and to adjust the low-level control parameters accordingly and temporarily.
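A toy version of this nested scheme for a single joint might look as follows; the outer position loop proposes a torque, the inner loop clamps it, and the joint yields when external torque is sensed. All gains, limits and interfaces are illustrative assumptions:

```python
KP, KD = 80.0, 4.0   # outer-loop position/velocity gains
TORQUE_LIMIT = 15.0  # assumed joint torque limit, in newton-metres

def nested_control_step(q, dq, q_target, tau_external):
    """One control tick: position control wrapped in a torque limit."""
    tau = KP * (q_target - q) - KD * dq               # outer loop: position
    tau = max(-TORQUE_LIMIT, min(TORQUE_LIMIT, tau))  # inner loop: clamp
    if abs(tau_external) > TORQUE_LIMIT:
        return 0.0  # external force detected: yield like a mechanical spring
    return tau
```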

On the level of environmental observation, several techniques use external sensors such as cameras, laser range finders and other kinds of sensors to monitor the robot's surroundings and to intervene in the robot's control scheme as soon as a person enters the work cell of the robotic system. Several AI technologies are used to predict the intentions of the person entering the robot's environment and to modify the robot's behavior in an adequate way: instead of a full stop whenever anything enters the area, a progressive approach decreases the robot's movement speed as the person comes closer. In most well-defined scenarios these approaches can be implemented with static rule-based reasoning. Imagine, however, a scenario where a robot and a human being work together to build cars. In this situation there will always be close encounters between the robot and the human, and most of them are wanted and required. There might even be cases where the human and the robot actually get into physical contact, for instance when handing over a tool. Classical reasoning and planning approaches have huge difficulties in adequately representing such situations [55]. What is needed instead is an even deeper approach that actually makes the robot understand the intentions of its human partner [56].
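The progressive slow-down described above can be captured in a small speed-scaling rule. The distances are assumptions for illustration:

```python
STOP_DIST = 0.5  # metres: full stop inside this radius
FULL_DIST = 2.0  # metres: full speed beyond this radius

def speed_scale(person_distance_m: float) -> float:
    """Velocity multiplier in [0, 1], shrinking as a person approaches."""
    if person_distance_m <= STOP_DIST:
        return 0.0
    if person_distance_m >= FULL_DIST:
        return 1.0
    return (person_distance_m - STOP_DIST) / (FULL_DIST - STOP_DIST)
```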

Verbal interaction

“Go forward”, “turn left”, “go to the break room” – it is very convenient to give orders to robots using natural language, in particular when robot users are not experts or are physically impaired [57]. Besides sending orders to the robot (human-to-robot interaction), a robot could answer questions or ask for help (robot-to-human interaction) or engage in a conversation (two-way communication) [58]. Verbal interaction thus has many different applications in robotics, and contrary to physical interaction, it does not create strong safety requirements. A human cannot be physically harmed through verbal interaction, except if it makes the robot act in a way that is dangerous for the human – but in that case the danger still comes from the physical interaction, not from the verbal interaction that initiated it.

Although a lot of progress has been made in natural language processing, robotics creates specific challenges. A robot has a body. Robots are thus expected to understand spatial (and possibly temporal) relations and to connect the symbols they manipulate to their sensorimotor flow [59]. This is a situated interaction. Giving a robot an order such as “go through the door” is expected to make the robot move to the particular door in its vicinity. There is a need to connect words to the robot's own sensorimotor flow: each robot has specific sensors and effectors, and this needs to be taken into account. If the robot only needs to understand a limited number of known words, the mapping can be hand-crafted [57]. It can also rely on deep learning methods [60], but language is not static; it evolves dynamically through social interaction, as illustrated by the appearance of new words: in 2019, 2,700 words were added to the Oxford English Dictionary. Furthermore, the same language may be used differently in distant parts of the world. French as spoken in Quebec, for instance, has specificities that distinguish it from the French spoken in France. A human-centered robot needs to be able to adapt the language it uses to its interlocutor. This raises many different challenges [61], including symbol grounding, one of the main long-standing AI challenges [31]. Using words requires knowing their meaning. This meaning can be guessed from a semantic network, but as the interaction is situated, at least some of the words will need to be associated with raw data from the sensorimotor flow; for instance, the door in the “go through the door” order needs to be identified and found in the robot's environment. This is the grounding problem.

The seminal work of Steels on language games [62, 63] shows how robots can actually engage in a process that converges to a shared vocabulary of grounded words. When the set of symbols is closed and known beforehand, symbol grounding is no longer a challenge, but it still is if the robot has to build the vocabulary autonomously [64]. To differentiate it from the grounding of a fixed set of symbols, this has been named symbol emergence [65, 66]. A symbol has different definitions. In symbolic AI, a symbol is basically a pointer to a name, a value and possibly other properties, such as a function definition. A symbol carries a semantics that is different for the human and for the robot, but enables them to partially share the same ground. In the context of language study, the definition of a symbol is different. Semiotics, the study of the signs that mediate communication, defines it as a triadic relationship between an object, a sign and an interpretant. This is not a static relationship but a process: the interpretant is the effect of a sign on its receiver, and is thus a process relating the sign to the object. The dynamics of this process can be seen in our ability to give names to objects on the fly (whether they are known or not). Although much progress has been made recently on these topics [58, 66], building a robot with this capability remains a challenge.

Non-verbal interaction

The embodiment of robots creates opportunities to communicate with humans by means other than language. This is an important issue, as multiple nonverbal communication modalities exist between humans and are estimated to carry a significant part of the meaning humans communicate. Nonverbal cues have been shown, for instance, to help children learn new words from robots [67]. Adding nonverbal interaction abilities to robots thus opens the perspective of building robots that can better engage with humans [68], i.e. social robots [13]. Nonverbal interaction may support verbal communication, as with lip-syncing or other intertwined motor actions such as head nods [69], and may have a significant impact on humans [70], as observed through their behavioural response, task performance, emotion recognition and response, as well as cognitive framing, that is, the perspective humans adopt, in particular on the robot they interact with.

Different kinds of nonverbal communication exist. The ones that incorporate robot movements are kinesics, proxemics, haptics and chronemics. Kinesics relies on body movements, positioning, facial expressions and gestures; most robotics research on the topic focuses on arm gestures, body and head movements, eye gaze and facial expressions. Proxemics is about the perception and use of space in the context of communication, including the notions of social distance and personal space. Haptics is about the sense of touch, and chronemics about the experience of time. Saunderson and Nejat have reviewed robotics research on these different topics [70].

Besides explicit non-verbal communication means, the appearance of a robot has been shown to impact the way humans perceive it and engage in human-robot interaction [71, 72]. It has been shown, for instance, that a humanlike shape influences non-verbal behaviors towards a robot, such as response delay, distance [73] or embarrassment [74]. Anthropomorphic robots significantly draw the attention of the public and thus create high expectations in different service robotics applications, but the way they are perceived, and their acceptance, is a complex function of multiple factors, including user culture, the context and quality of the interaction, and even the degree of human likeness [75]. The impact of this last point, in particular, is not trivial. Mori proposed the uncanny valley theory to model this relation [76, 77]. In this model, the emotional response improves as the robot's appearance becomes more humanlike, but a sudden drop appears beyond a certain level: robots that look like humans, but still with noticeable differences, can create a feeling of eeriness resulting in discomfort and rejection. This effect disappears when the robot's appearance gets close enough to a human's. The empirical validation of this model is difficult: some experiments seem to validate it [78], while others lead to contradictory results [79]. For more details, see the reviews by Fink [80] or Złotowski et al. [81].

Understanding humans and human intentions

There are situations in which robots operate in isolation, such as on manufacturing lines for welding or painting, or in deep-sea or planetary exploration. Such situations are dangerous for humans, and the robot's task is provided to it through pre-programming (e.g. welding) or teleprogramming (e.g. a location to reach on a remote planet). However, in many robotic application areas, be it manufacturing or services, robots and humans are starting to interact with each other more and more, in different ways. The key characteristics making these interactions so challenging are the following:

  • Sharing space, for navigation or for reaching objects to manipulate
  • Deciding on joint actions that are going to be executed by both the robot and the human
  • Coordinating actions over time and space
  • Achieving joint actions physically

These characteristics lead to many different scientific questions and issues. For example, sharing space requires geometric reasoning, motion planning and control capabilities [82]. Deciding on joint actions [83] requires a mutual representation of human capabilities by the robot and vice versa: e.g., is the human (resp. the robot) capable of holding a given object? It also requires a Theory of Mind on the part of the robot and of the human: what are the robot's representations, and what are the human's representations, of a given situation? What is the human (resp. the robot) expected to do in this situation?

The third characteristic, the coordination of actions, requires, in addition to what has been mentioned above, signal exchanges between human and robot to ensure that each is indeed engaged in and committed to the task being executed. For example, gaze detection through eye trackers makes it possible to formulate hypotheses about the human's visual focus. The robot in turn has to provide equivalent information to the human, since the human usually cannot determine the robot's visual focus just by observing its sensors. It therefore becomes necessary for the robot to signal explicitly what its focus and its intentions are (see the section on making robots understandable).

When it comes to physical interaction, robot and human are not only in close proximity: they also exchange physical signals such as force. Consider, for example, a robot and a human moving a table together. Force feedback makes it possible to distribute the load correctly between them and to coordinate their actions. In the case of physical interaction, another important aspect is ensuring human safety, which puts constraints on robot design and control. Compliance and haptic feedback become key (see the “Physical interaction” section).

In all these interaction scenarios, the robot must already possess the autonomous capacities for decision-making and task supervision. Indeed, the robot must be able to plan its own actions to achieve a common goal with the human, taking the human's model and intentions into account.

Take the simple example of a human handing an object to the robot. The common goal is that, in the final state, the robot is holding the object, whereas in the initial state the human is holding it. The goal must be shared right from the beginning of the interaction, for example through an explicit order given by the human. Alternatively, the robot might determine the common goal by observing the human's behavior, which requires the ability to infer human intentions from actions, posture, gestures (e.g., deictic gestures) or facial expressions. This can only be a probabilistic reasoning capacity, given the uncertainties of observation and of prior hypotheses. The robot must then plan its actions according to its model of the human, and this can only be a probabilistic planning process, e.g., using Markovian processes, because of the inherent uncertainties of the observations (and therefore of the robot's beliefs) and of action execution. Robot task supervision must also ensure that the human is acting in accordance with the plan, by observing their actions and posture.
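As a minimal illustration of this probabilistic reasoning step, the sketch below maintains a belief over a small, assumed set of candidate intentions and updates it recursively with Bayes' rule; the observation likelihoods, which a real system would obtain from classifiers over posture, gesture and gaze, are hard-coded here:

```python
import numpy as np

# Candidate intentions the robot reasons about (an assumed, illustrative set)
INTENTS = ["hand_over_object", "place_on_table", "keep_object"]

def bayes_update(belief, obs_likelihoods):
    """One step of recursive Bayesian intent estimation.

    belief:          prior P(intent), shape (3,)
    obs_likelihoods: P(observation | intent), shape (3,) -- in a real system
                     these would come from perception classifiers; here they
                     are given numbers.
    """
    posterior = belief * obs_likelihoods
    return posterior / posterior.sum()

belief = np.full(len(INTENTS), 1.0 / len(INTENTS))  # uniform prior
# Simulated observation stream: arm extended towards robot, gaze at robot, ...
observations = [
    np.array([0.7, 0.2, 0.1]),
    np.array([0.6, 0.3, 0.1]),
    np.array([0.8, 0.1, 0.1]),
]
for obs in observations:
    belief = bayes_update(belief, obs)
print(dict(zip(INTENTS, belief.round(3))))
# The robot would trigger its handover behavior once
# P(hand_over_object) exceeds a confidence threshold.
```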

Another essential dimension for complex interactions is communication through dialogue. The robot can start such a dialogue, for example, when it detects that some information is needed to complete its model or to reduce its uncertainties. Formulating the right questions requires the robot to have a self-assessment capacity over its own belief state.

Learning from humans

Using the human as a teacher to train robotic systems has been around for some time [84]. Many cases and scenarios, like the hybrid team scenario (see the example depicted in Fig. 4) where humans and robots build cars together as a team, are too complex to be completely modeled. Consequently, it is difficult or impossible to devise exact procedures and rule-based action execution schemes in advance. One example is the task of having a robot pack a pair of shoes in a shoebox [85]: even a task that sounds as simple as this proved impossible to model completely, so a learning-by-demonstration method was applied to teach the robot the task through a human demonstrator. In such cases learning, that is, a step-wise approximation and improvement of the optimal control strategy, is the most straightforward option available. In situations where enough a priori data is available, this can be done offline and the robotic system can be trained to achieve a certain task. In many cases, however, data is not available, and online strategies are needed to acquire the desired skill. The learning-by-demonstration approach can already be implemented quite successfully, e.g., by recording data from human demonstrators instrumented with reflectors for image-capturing devices, and then feeding skeleton representations of the human movements as sample trajectories into the learning system, which in turn uses, e.g., reinforcement learning techniques to generate appropriate trajectories. This approach usually leads to quite usable policies on the side of the robotic system; yet in many realistic task scenarios it turns out that “quite good” is not good enough, and online optimization has to be performed. Here it is advantageous to include approaches like those discussed in the previous section on understanding human intentions or states of mind.

Fig. 4 Examples of humans, robots and other AI agents working in hybrid teams. Depending on the application and scenario, robots can be configured as stationary or mobile systems, up to complex systems with humanoid appearance. (Copyright: Uwe Völkner/Fotoagentur FOX)
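As a toy illustration of the learning-by-demonstration pipeline described above, the following sketch performs simple behavioral cloning: it fits a linear policy to recorded state-action pairs by least squares. The synthetic data stands in for skeleton trajectories from instrumented demonstrators; in practice the resulting policy would be refined online, e.g., with reinforcement learning:

```python
import numpy as np

# Toy learning from demonstration: fit a linear policy action = W @ state
# from recorded demonstration pairs. In a real setup the states would be
# skeleton poses from motion capture; everything below is an assumed toy setup.

rng = np.random.default_rng(0)
true_W = np.array([[0.5, -0.2], [0.1, 0.8]])         # unknown demonstrator mapping
states = rng.normal(size=(200, 2))                    # recorded skeleton features
actions = states @ true_W.T + 0.05 * rng.normal(size=(200, 2))  # demonstrated actions

# Least-squares behavioral cloning: solve states @ X = actions, then W = X.T
X, *_ = np.linalg.lstsq(states, actions, rcond=None)
W = X.T

test_state = np.array([0.3, -0.1])
print("policy action:", W @ test_state)
```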

Using this general idea, it was possible to improve the performance of an already trained robot online by taking a signal generated by the human brain at a subconscious level and feeding it back to the robot as a reinforcement signal [56]. The signal is the so-called error potential, an event-related potential (ERP) generated by brain areas when a mismatch between expected and actual input occurs. In many real-world situations such a signal is produced, e.g., when a human observes another human performing a movement in an obviously wrong way in the correct context, or performing the correct movement in the wrong context. The beauty of this signal is that it is generated at a subconscious level, before the human is actively aware of it. This is important for two reasons:

First, by the time the human becomes aware of the signal, it has already been analyzed and modulated by other brain regions. A cognitive classification of the subconscious signal has then taken place, which dissociates it from the original signal.

Second, because the signal occurs before evaluation by other brain areas, it does not have to be externalized, e.g., by verbalization. Imagine a hybrid team scenario where the human in the team had to explicitly verbalize each error that he or she observes in the robot's performance. The dissociation process mentioned above would blur the verbalized feedback to the robot; more importantly, the human would probably not verbalize each and every error due to fatigue, and information valuable for the interaction would be lost.
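The following sketch illustrates the principle of using a decoded ErrP as an implicit reward, reduced to a bandit-style choice among a few motion primitives. It is not the setup of [56]: the ErrP decoder is simulated with an assumed 80% detection rate, and all parameters are illustrative:

```python
import numpy as np

# Sketch: a decoded error-related potential (ErrP) used as an implicit
# negative reward for an already trained policy, here reduced to a
# 3-armed bandit over candidate motion primitives.

rng = np.random.default_rng(1)
q = np.zeros(3)            # value estimate per motion primitive
alpha = 0.1                # learning rate (assumed)
correct_primitive = 2      # unknown to the robot

def decode_errp(action):
    """Simulated ErrP detection: fires (reward -1) when the human's brain
    registers a mismatch, i.e. when the robot picks a wrong primitive.
    A real EEG decoder is noisy; we assume 80% detection accuracy."""
    error = action != correct_primitive
    detected = rng.random() < 0.8
    return -1.0 if (error and detected) else 0.0

for _ in range(200):
    action = int(np.argmax(q + 0.1 * rng.normal(size=3)))  # noisy greedy choice
    r = decode_errp(action)
    q[action] += alpha * (r - q[action])
print("learned preference:", np.argmax(q), q.round(2))
```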

To summarize, learning can rely either on external information, like receiving commands or watching humans demonstrate a task, or on implicit signals during the interaction, like the evaluation of facial expressions or brain signals such as certain ERPs providing feedback. The latter uses information from the human interaction partner that is not directly controlled by the human and not, per se, voluntarily given. This raises ethical and legal questions that have to be addressed before using it as a standard procedure for interaction (see also “Ethical questions” section), underlining the fact that human-centered AI and robotics ultimately involve disciplines from the social sciences. At the same time, we have outlined that making use of such information can be highly beneficial for fluent and intuitive interaction and learning.

Making robots understandable for humans

In “Understanding humans and human intentions” section, we discussed how the robot can better understand humans and how this can be achieved to some extent. It is rather straightforward to equip the robot with the sensors and software needed to detect humans and to interpret gestures, postures and movements, as well as to detect gaze and infer some intentions. Even if this does not capture the whole complexity of human behavior, these capacities capture enough of human intentions and actions to enable task sharing and cooperation. Equally important in an interaction, however, is the opposite direction: how can the human better understand the robot's intentions and actions?

In most scenarios, we can safely assume that the human has some a priori knowledge about the robot's framework of action. That is to say, the human can infer some of the physical capabilities and limitations of the system from its appearance (e.g., a legged robot vs. a wheeled robot), but not their extent, e.g., can the robot jump, or climb a given slope? Even if the human has some general idea of the spectrum of robot sensing possibilities, it is not clear whether the robot's perceptive capabilities and their limits can be completely and precisely understood. This is partly because it is difficult for humans to understand the capabilities and limitations of sensors they do not have, e.g., infrared sensors or laser rangefinders providing point clouds. It is practically impossible for a human to understand the information processing going on in robot systems with multi-level hierarchies, from low-level control of single joints, to higher levels of control involving deep neural networks, and finally to top-level planning and reasoning processes, all of which interact and influence each other's output. This is extremely difficult even for trained computer science experts and robot designers. Managing the algorithmic complexity that arises in structurally complex robotic systems acting in dynamic environments is a complete field of research in itself; indeed, the design of robot control or cognitive architectures is an open research area and still a big challenge for AI-based robotics [86].

Attempts to approach the problem of humans understanding robots have been made in several directions. One is the robot verbally explaining its actions [16]: the robot tells the human (or writes on a screen) what it is doing and why a specific action is carried out. At the same time, the human can ask the robot for an explanation of its action(s), and the robot answers verbally, in computer-animated graphics, or in iconized form on a screen installed on the robot. The hope behind such approaches is that the need for explanations deliberately uttered by the robot, as well as the quest for answers on the human side, will decrease over time as the human learns and understanding builds up. This is difficult to assess, as long-term studies have so far not been carried out, or could not be carried out because appropriate robots were unavailable. But one assumption we can safely make is that explicit answering, or being required to listen to explanations, will not be highly appreciated in practical situations, and repetitive explanatory utterances from the robot will quickly annoy humans.

It is therefore necessary to think about more subtle strategies to communicate robot internal states and intentions to the human counterpart, e.g., its current goals, its knowledge about the world, its intended motions, its acknowledgement of a command, or its requests for an action by the human. Examples of such approaches use mimics and gestures. Robots can be equipped with faces, either rendered on computer screens or physically formed by actuated motors under the artificial skin of robotic heads (if such devices are deemed acceptable; see “Ethical questions” section), in order to produce facial expressions that convey some information about the internal state of the robot. These approaches have been applied successfully in, e.g., home and elderly care scenarios. However, the internal states externalized here are rather simple ones, meant to stimulate actions on the human side, as in the pet robot Paro.

However, we can assume that in well-known scenarios, such as manufacturing settings, it should be possible to define fixed signals for interaction, made from a set of gestures (including deictic gestures), facial expressions or simply graphical patterns, that can be used to externalize internal robot states to human partners. Such a model of communication can be described as a first step towards a more general common alphabet [87] as the basis for a language between humans and robots. Such a common language will likely be developed, or more probably emerge, from more and more human-robot interaction scenarios in real-world applications, as a result of best-practice experience.
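A minimal sketch of such a fixed signal alphabet is given below; the states, gestures, light patterns and icons are all hypothetical placeholders for what would, in practice, be agreed upon per deployment:

```python
from enum import Enum

class RobotState(Enum):
    ACKNOWLEDGED_COMMAND = "acknowledged_command"
    NEEDS_HUMAN_ACTION = "needs_human_action"
    PLANNING = "planning"
    EXECUTING_MOTION = "executing_motion"
    ERROR = "error"

# Fixed mapping from internal states to external signals (gesture, light
# pattern, icon). The entries are illustrative; in practice the alphabet
# would be agreed on per deployment and learned by the human partners.
SIGNAL_ALPHABET = {
    RobotState.ACKNOWLEDGED_COMMAND: {"gesture": "nod", "light": "green_blink", "icon": "check"},
    RobotState.NEEDS_HUMAN_ACTION:   {"gesture": "point_at_part", "light": "yellow_pulse", "icon": "hand"},
    RobotState.PLANNING:             {"gesture": None, "light": "blue_spin", "icon": "hourglass"},
    RobotState.EXECUTING_MOTION:     {"gesture": None, "light": "white_sweep", "icon": "arrow"},
    RobotState.ERROR:                {"gesture": "stop", "light": "red_solid", "icon": "cross"},
}

def externalize(state: RobotState) -> dict:
    """Return the signals the robot should display for its current state."""
    return SIGNAL_ALPHABET[state]

print(externalize(RobotState.NEEDS_HUMAN_ACTION))
```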

The corresponding challenges on the robotic side certainly go beyond what was described earlier, such as the soft and compliant joints used for safety reasons. It will be necessary to develop soft and intelligent skin covering the mechanical robot structure, usable not just as an interface for expressions (in the case of facial skin) but also as a powerful sensor on other parts of the robot body, improving and extending the range of physical interactions with humans [88]. A simple example we all know: in a task performed by two humans, one of the two partners often pushes or touches the other slightly on the shoulder or the arm to communicate, e.g., that a stable grip has been achieved, as if to say ‘okay, I got it, you can let go...’. This kind of information could also be transmitted verbally to the interaction partner, but humans can visualize the internal states of their human counterparts because we share the same kinematic structure and disposition. In this case it is not necessary to speak: a simple touch suffices to transmit a complex state of affairs. The interaction of humans with robots equipped with such advanced skin technologies can be expected to be a starting point for a common language. Physical interaction will thus enable new ways of non-physical interaction, and the increased possibilities for non-physical interaction will in turn very likely stimulate further physical interaction possibilities. In summary, it will be an interesting voyage if intelligent and structurally competent robotic systems do become available as human partners in various everyday life situations. As with all other technologies, the human designer will shape the technology, but at the same time the technology will shape the human, both as user and as designer of this technology.

Ethical questions

Several issues raise ethical questions about robotic technologies considered as interaction partners for humans [89]. To list but a few:

Transformation of work in situations where humans and robots interact. Depending on how it is designed, the interaction might impose constraints on the human instead of making the robot adapt to the human and carry the burden of the interaction. For example, the human may be given the more dexterous tasks, such as grasping, which end up being repetitive and wearing when the speed of the robot performing the simpler tasks imposes the pace.

Mass surveillance and privacy issues, when personal or domestic robots collect information about their users and households, or when self-driving cars permanently collect data on their users and their environments.

Affective bonds and attachment to personal robots, especially those made to detect and express emotions.

Human transformation and augmentation through exoskeletons or prosthetic devices.

Human identity and the status of robots in society (e.g., legal personality), especially for android robots mimicking humans in appearance, language and behavior.

Sexbots, designed as sexual devices, that can be made to degrade the image of women or to look like children.

Autonomous weapon systems, which do not, strictly speaking, “interact” with humans, but which are endowed with recognition capacities to target humans.

When we speak about ethics in the context of robots and AI technologies, what we fundamentally mean is that we want to make sure this technology is designed and used for the good of humankind and not for the bad. The first problem is obviously: how do we define good and bad? There are obvious answers implying that a robot should not harm a person. No question; but what about a surgical robot that needs to inject a vaccine into a person's arm with a syringe, physically injuring them in the moment, but for their benefit? How can we distinguish between these cases in a formal way? This is the core of the problem.

If we speak about ethics and how to design ethical deliberation into technical systems so that the robot's decision-making or control system behaves for “the good”, we are fundamentally required to come up with a formalization of ethics. In some form or other, we would have to put down in logical expressions and numerical values what is ethical and what is not. In our understanding this will not be possible in a general form, because human ethical judgment and moral thinking are not amenable to algorithmic processing and computation. For example, how would we define algorithmically a principle of respect for human dignity? The concept of dignity itself is complex and has several moral and legal interpretations.

Ethical deliberation cannot be reduced to computing and comparing utilities, as we often see in publications on ethical dilemmas for self-driving cars, for example. The car can only make computations based on data acquired by its sensors; the ethical choices would actually have been made beforehand by the designers. Even letting the passengers customize ethical choices, or letting the system learn [90], for example in simulation, to determine the values to be optimized, is a negation of what ethical deliberation is. Indeed, this would entail deciding a priori on a situation yet to come, or deciding that ethical deliberation is based on statistics of past actions.

We will of course be able to formalize ethical guidelines (for the designers) for robot design and control in concrete, well-specified domains. We could, e.g., solve the syringe problem easily if we built a surgical robot that is used and operated only in hospitals and has a clearly defined set of tasks to fulfill in, e.g., the vaccination department of the hospital. This then becomes a matter of safety design, as for any other technical device. But what about a household service robot designed to clean the floors and wash the dishes? Wouldn't we want this robot also to be able to perform first-aid services, e.g., if the person in the household has diabetes and needs insulin injections from time to time? Cases can always be constructed that lead to the problem that a complete and full formalization of ethics is impossible.

Adopting a responsible approach or a value-based design procedure [91] can help to conceive robots and AI systems for which ethical issues are actually solved beforehand by the human designers and manufacturers, during specification, development and manufacturing. The robot itself will not be endowed with moral judgment. But we will have to make sure that humans abstain from misusing the technology.

More profound questions arise, however, with the last three issues listed above. For example, building android human-like robots can be considered a scientific research topic, or a practical solution for facilitating human-robot interaction. However, the confusion that this identification of humans with machines provokes calls for a reflection on the nature of human identity as compared to machines, one that addresses all aspects and consequences of such technical achievements.

A reflection grounded in philosophical, societal and legal considerations is necessary, beyond scholarly studies alone, to address the impact of these technologies on society. Indeed, numerous initiatives and expert groups have already issued ethics recommendations on the development and use of AI and robotics systems, including the European High-Level Expert Group on AI (HLEG-AI), the IEEE Global Initiative on Ethics of Autonomous and Intelligent Systems, the UNESCO COMEST, and the OECD (see [92] for a comprehensive overview). An example of commonly accepted ethics recommendations is the seven “requirements for trustworthy AI” (see Footnote 2) issued by the HLEG-AI in 2019:

“Human agency and oversight”: AI systems should be subject to human oversight, and they should support humans in their autonomy and decision-making.

“Technical Robustness and Safety” should be provided: systems should be reliable and stable even in situations of uncertainty, and they should be resilient against manipulation from outside.

“Privacy and Data Governance” should be guaranteed throughout the lifecycle, with data access controlled and managed, and data quality ensured.

“Transparency”: Data and processes should be well documented so that the cause of errors can be traced. Systems should be explainable to the user at the level appropriate to understand the decisions the system is making.

“Diversity, Non-Discrimination and Fairness” should be ensured by controlling for biases that could lead to discriminatory results. Access to AI should be granted to all people.

“Societal and Environmental Well-Being”: The use of AI should be for the benefit of society and the natural environment. Violation of democratic processes should be prevented.

“Accountability” should be provided such that AI systems can be assessed and audited. Negative impacts should be minimized or eliminated.

However, there are still open issues, mostly related to translating principles into practice, as well as topics subject to heated debate, such as robot legal personality, advocated by some to address liability issues. Furthermore, when considering specific use cases, tensions between several requirements may arise that will have to be specifically addressed.

Most AI systems are tools in which humans play a critical role, either at the input of the system, to analyze its behavior, or at the output, to receive information they need. Robotics is different, as it develops physical systems that can perceive and act in the real world without the mediation of any human, at least for autonomous robots. Building human-centered robots requires putting humans back into the loop and providing the system with the ability to interact with humans, to understand them and learn from them, while ensuring that humans also understand what robots can and cannot do. It also raises the many ethical questions listed and discussed above. Human-centered AI and robotics thus create many different challenges and require the integration of a wide spectrum of technologies. They also highlight that robots assisting humans are not only a technological challenge in many respects, but rather a socio-technological transformation of our societies. In particular, the use of this technology and its accessibility are important topics involving actors dealing with social processes, public awareness, and political and legal decisions.

Availability of data and materials

Not applicable.

Footnote 1: https://public.oed.com/updates/

Footnote 2: https://ec.europa.eu/digital-single-market/en/news/ethics-guidelines-trustworthy-ai

References

Campbell M, Hoane AJ Jr, Hsu F-h (2002) Deep Blue. Artificial Intelligence 134(1-2):57–83.


Silver D, Huang A, Maddison CJ, Guez A, Sifre L, Van Den Driessche G, Schrittwieser J, Antonoglou I, Panneershelvam V, Lanctot M, et al. (2016) Mastering the game of Go with deep neural networks and tree search. Nature 529(7587):484–489.


Torrey L, Shavlik J (2010) Transfer learning. In: Handbook of Research on Machine Learning Applications and Trends: Algorithms, Methods, and Techniques, 242–264. IGI Global, Hershey.


Yuh J, West M (2001) Underwater robotics. Adv Robot 15(5):609–639. https://doi.org/10.1163/156855301317033595 .

Kirchner F, Straube S, Kühn D, Hoyer N (2020) AI Technology for Underwater Robots. Springer, Cham.


Yoshida K (2009) Achievements in space robotics. IEEE Robot Autom Mag 16(4):20–28. https://doi.org/10.1109/MRA.2009.934818 .

Yoshida K, Wilcox B (2008) Space robots. In: Springer Handbook of Robotics, 1031–1063. Springer, Berlin.

Xu Y, Kanade T (1993) Space Robotics: Dynamics and Control. Springer.

Goodrich MA, Schultz AC (2008) Human-robot Interaction: a Survey. Now Publishers Inc.

Ricks DJ, Colton MB (2010) Trends and considerations in robot-assisted autism therapy. In: 2010 IEEE International Conference on Robotics and Automation, 4354–4359, Anchorage.

Boucenna S, Narzisi A, Tilmont E, Muratori F, Pioggia G, Cohen D, Chetouani M (2014) Interactive technologies for autistic children: A review. Cogn Comput 6(4):722–740.

Shishehgar M, Kerr D, Blake J (2018) A systematic review of research into how robotic technology can help older people. Smart Health 7:1–18.

Breazeal C, Dautenhahn K, Kanda T (2016) Social robotics. In: Springer Handbook of Robotics, 1935–1972. Springer, Berlin.

Sheridan TB (2020) A review of recent research in social robotics. Curr Opin Psychol 36:7–12.

Schwartz T, Feld M, Bürckert C, Dimitrov S, Folz J, Hutter D, Hevesi P, Kiefer B, Krieger H, Lüth C, Mronga D, Pirkl G, Röfer T, Spieldenner T, Wirkus M, Zinnikus I, Straube S (2016) Hybrid teams of humans, robots, and virtual agents in a production setting. In: 2016 12th International Conference on Intelligent Environments (IE), 234–237. IOS Press, Amsterdam.

Schwartz T, Zinnikus I, Krieger H-U, Bürckert C, Folz J, Kiefer B, Hevesi P, Lüth C, Pirkl G, Spieldenner T, Schmitz N, Wirkus M, Straube S (2016) Hybrid teams: Flexible collaboration between humans, robots and virtual agents. In: Klusch M, Unland R, Shehory O, Pokahr A, Ahrndt S (eds) Multiagent System Technologies, 131–146. Springer, Cham.

Peshkin M, Colgate JE (1999) Cobots. Ind Robot Int J 26(5):335–341.

Maciejasz P, Eschweiler J, Gerlach-Hahn K, Jansen-Troy A, Leonhardt S (2014) A survey on robotic devices for upper limb rehabilitation. J Neuroeng Rehabil 11(1):3.

Kumar S, Wöhrle H, Trampler M, Simnofske M, Peters H, Mallwitz M, Kirchner EA, Kirchner F (2019) Modular design and decentralized control of the Recupera exoskeleton for stroke rehabilitation. Appl Sci 9(4). https://doi.org/10.3390/app9040626.

Nowak A, Lukowicz P, Horodecki P (2018) Assessing artificial intelligence for humanity: Will AI be our biggest ever advance, or the biggest threat? [Opinion]. IEEE Technol Soc Mag 37(4):26–34.

Siciliano B, Khatib O (2016) Springer Handbook of Robotics. Springer, Berlin.


McCarthy J, Minsky ML, Rochester N, Shannon CE (2006) A proposal for the Dartmouth Summer Research Project on Artificial Intelligence, August 31, 1955. AI Mag 27(4):12–12.


Annoni A, Benczur P, Bertoldi P, Delipetrev B, De Prato G, Feijoo C, Macias EF, Gutierrez EG, Portela MI, Junklewitz H, et al. (2018) Artificial intelligence: A European perspective. Technical report, Joint Research Centre (Seville site).

Wolf MJ, Miller KW, Grodzinsky FS (2017) Why we should have seen that coming: comments on Microsoft’s Tay “experiment,” and wider implications. ORBIT J 1(2):1–12.

Strickland E (2019) IBM Watson, heal thyself: How IBM overpromised and underdelivered on AI health care. IEEE Spectr 56(4):24–31.


Poole D, Mackworth A, Goebel R (1998) Computational Intelligence: A Logical Approach. Oxford University Press, New York.

Salini J, Padois V, Bidaud P (2011) Synthesis of complex humanoid whole-body behavior: A focus on sequencing and tasks transitions. In: 2011 IEEE International Conference on Robotics and Automation, 1283–1290, Shanghai.

Hayet J-B, Esteves C, Arechavaleta G, Stasse O, Yoshida E (2012) Humanoid locomotion planning for visually guided tasks. Int J Humanoid Robotics 9(02):1250009.

Pfeifer R, Gómez G (2009) Morphological computation: connecting brain, body, and environment. In: Creating Brain-like Intelligence, 66–83. Springer, Berlin.

Shintake J, Cacucciolo V, Floreano D, Shea H (2018) Soft robotic grippers. Adv Mater 30(29):1707035.

Harnad S (1990) The symbol grounding problem. Physica D Nonlinear Phenom 42(1-3):335–346.

Bohg J, Hausman K, Sankaran B, Brock O, Kragic D, Schaal S, Sukhatme GS (2017) Interactive perception: Leveraging action in perception and perception in action. IEEE Trans Robot 33(6):1273–1291.

Jamone L, Ugur E, Cangelosi A, Fadiga L, Bernardino A, Piater J, Santos-Victor J (2016) Affordances in psychology, neuroscience, and robotics: A survey. IEEE Trans Cogn Dev Syst 10(1):4–25.

Vaussard F, Fink J, Bauwens V, Rétornaz P, Hamel D, Dillenbourg P, Mondada F (2014) Lessons learned from robotic vacuum cleaners entering the home ecosystem. Robot Auton Syst 62(3):376–391.

Kaufman K, Ziakas E, Catanzariti M, Stoppa G, Burkhard R, Schulze H, Tanner A (2020) Social robots: Development and evaluation of a human-centered application scenario. In: Human Interaction and Emerging Technologies: Proceedings of the 1st International Conference on Human Interaction and Emerging Technologies (IHIET 2019), August 22-24, 2019, Nice, France, vol. 1018, 3–9. Springer Nature, Berlin.

Jordan MI, Mitchell TM (2015) Machine learning: Trends, perspectives, and prospects. Science 349(6245):255–260.


Sünderhauf N, Brock O, Scheirer W, Hadsell R, Fox D, Leitner J, Upcroft B, Abbeel P, Burgard W, Milford M, et al. (2018) The limits and potentials of deep learning for robotics. Int J Robot Res 37(4-5):405–420.

Kober J, Bagnell JA, Peters J (2013) Reinforcement learning in robotics: A survey. Int J Robot Res 32(11):1238–1274.

Sigaud O, Stulp F (2019) Policy search in continuous action domains: an overview. Neural Netw 113:28–40.

Doncieux S, Filliat D, Díaz-Rodríguez N, Hospedales T, Duro R, Coninx A, Roijers DM, Girard B, Perrin N, Sigaud O (2018) Open-ended learning: a conceptual framework based on representational redescription. Front Neurorobotics 12:59.

Doncieux S, Bredeche N, Goff LL, Girard B, Coninx A, Sigaud O, Khamassi M, Díaz-Rodríguez N, Filliat D, Hospedales T, et al. (2020) Dream architecture: a developmental approach to open-ended learning in robotics. arXiv preprint arXiv:2005.06223.

Lesort T, Díaz-Rodríguez N, Goudou J-F, Filliat D (2018) State representation learning for control: An overview. Neural Netw 108:379–392.

Cangelosi A, Schlesinger M (2015) Developmental Robotics: From Babies to Robots. MIT Press.

Santucci VG, Oudeyer P-Y, Barto A, Baldassarre G (2020) Intrinsically motivated open-ended learning in autonomous robots. Front Neurorobotics 13:115.

Hagras H (2018) Toward human-understandable, explainable AI. Computer 51(9):28–36.

Steinfeld A, Fong T, Kaber D, Lewis M, Scholtz J, Schultz A, Goodrich M (2006) Common metrics for human-robot interaction In: Proceedings of the 1st ACM SIGCHI/SIGART Conference on Human-Robot Interaction, HRI ’06, 33–40.. Association for Computing Machinery, New York. https://doi.org/10.1145/1121241.1121249 .

Murphy R, Schreckenghost D (2013) Survey of metrics for human-robot interaction. In: Proceedings of the 8th ACM/IEEE International Conference on Human-Robot Interaction, HRI ’13, 197–198. IEEE Press.

Yanco HA, Drury J (2004) Classifying human-robot interaction: an updated taxonomy In: 2004 IEEE International Conference on Systems, Man and Cybernetics (IEEE Cat. No.04CH37583), 2841–28463. https://doi.org/10.1109/ICSMC.2004.1400763 .

Pervez A, Ryu J (2008) Safe physical human robot interaction–past, present and future. J Mech Sci Technol 22:469–483.

Onnasch L, Roesler E (2021) A taxonomy to structure and analyze human–robot interaction. Int J Soc Robot 13(4):833–849.

Haddadin S, Croft E (2016) Physical Human–Robot Interaction. In: Siciliano B, Khatib O (eds) Springer Handbook of Robotics, 1835–1874. Springer, Cham. https://doi.org/10.1007/978-3-319-32552-1_69.

Gutzeit L, Otto M, Kirchner EA (2016) Simple and robust automatic detection and recognition of human movement patterns in tasks of different complexity. In: Physiological Computing Systems, 39–57. Springer, Berlin.

Kirchner EA, Fairclough SH, Kirchner F (2019) Embedded multimodal interfaces in robotics: applications, future trends, and societal implications In: The Handbook of Multimodal-Multisensor Interfaces: Language Processing, Software, Commercialization, and Emerging Directions-Volume 3, 523–576.

Haarnoja T, Ha S, Zhou A, Tan J, Tucker G, Levine S (2018) Learning to walk via deep reinforcement learning. arXiv preprint arXiv:1812.11103.

Tsarouchi P, Makris S, Chryssolouris G (2016) Human–robot interaction review and challenges on task planning and programming. Int J Comput Integr Manuf 29(8):916–931. https://doi.org/10.1080/0951192X.2015.1130251 .

Kim S, Kirchner E, Stefes A, Kirchner F (2017) Intrinsic interactive reinforcement learning: using error-related potentials for real world human-robot interaction. Sci Rep 7.

Williams T, Scheutz M (2017) The state-of-the-art in autonomous wheelchairs controlled through natural language: A survey. Robot Auton Syst 96:171–183.

Tellex S, Gopalan N, Kress-Gazit H, Matuszek C (2020) Robots that use language. Annu Rev Control Robot Auton Syst 3:25–55.

Landsiedel C, Rieser V, Walter M, Wollherr D (2017) A review of spatial reasoning and interaction for real-world robotics. Adv Robot 31(5):222–242.

Mei H, Bansal M, Walter MR (2016) Listen, attend, and walk: neural mapping of navigational instructions to action sequences In: Proceedings of the Thirtieth AAAI Conference on Artificial Intelligence, 2772–2778.

Taniguchi T, Mochihashi D, Nagai T, Uchida S, Inoue N, Kobayashi I, Nakamura T, Hagiwara Y, Iwahashi N, Inamura T (2019) Survey on frontiers of language and robotics. Adv Robot 33(15-16):700–730.

Steels L (2001) Language games for autonomous robots. IEEE Intell Syst 16(5):16–22.

Steels L (2015) The Talking Heads Experiment: Origins of Words and Meanings, vol. 1. Language Science Press.

Steels L (2008) The symbol grounding problem has been solved, so what’s next? In: Symbols and Embodiment: Debates on Meaning and Cognition, 223–244. Oxford University Press, Oxford.

Taniguchi T, Nagai T, Nakamura T, Iwahashi N, Ogata T, Asoh H (2016) Symbol emergence in robotics: a survey. Adv Robot 30(11-12):706–728.

Taniguchi T, Ugur E, Hoffmann M, Jamone L, Nagai T, Rosman B, Matsuka T, Iwahashi N, Oztop E, Piater J, et al. (2018) Symbol emergence in cognitive developmental systems: a survey. IEEE Trans Cogn Dev Syst 11(4):494–516.

Westlund JMK, Dickens L, Jeong S, Harris PL, DeSteno D, Breazeal CL (2017) Children use non-verbal cues to learn new words from robots as well as people. Int J Child-Computer Interact 13:1–9.

Anzalone SM, Boucenna S, Ivaldi S, Chetouani M (2015) Evaluating the engagement with social robots. Int J Soc Robot 7(4):465–478.

Mavridis N (2015) A review of verbal and non-verbal human–robot interactive communication. Robot Auton Syst 63:22–35.

Saunderson S, Nejat G (2019) How robots influence humans: A survey of nonverbal communication in social human–robot interaction. Int J Soc Robot 11(4):575–608.

Mathur MB, Reichling DB (2009) An uncanny game of trust: social trustworthiness of robots inferred from subtle anthropomorphic facial cues In: 2009 4th ACM/IEEE International Conference on Human-Robot Interaction (HRI), 313–314.. IEEE.

Natarajan M, Gombolay M (2020) Effects of anthropomorphism and accountability on trust in human robot interaction In: Proceedings of the 2020 ACM/IEEE International Conference on Human-Robot Interaction, 33–42.

Kanda T, Miyashita T, Osada T, Haikawa Y, Ishiguro H (2008) Analysis of humanoid appearances in human–robot interaction. IEEE Trans Robot 24(3):725–735.

Bartneck C, Bleeker T, Bun J, Fens P, Riet L (2010) The influence of robot anthropomorphism on the feelings of embarrassment when interacting with robots. Paladyn 1(2):109–115.

Murphy J, Gretzel U, Pesonen J (2019) Marketing robot services in hospitality and tourism: the role of anthropomorphism. J Travel Tourism Mark 36(7):784–795.

Mori M (1970) Bukimi no tani [The uncanny valley]. Energy 7:33–35.

Mori M, MacDorman KF, Kageki N (2012) The uncanny valley [from the field]. IEEE Robot Autom Mag 19(2):98–100.

De Visser EJ, Monfort SS, McKendrick R, Smith MA, McKnight PE, Krueger F, Parasuraman R (2016) Almost human: Anthropomorphism increases trust resilience in cognitive agents. J Exp Psychol Appl 22(3):331.

Bartneck C, Kanda T, Ishiguro H, Hagita N (2009) My robotic doppelgänger: a critical look at the uncanny valley. In: RO-MAN 2009, The 18th IEEE International Symposium on Robot and Human Interactive Communication, 269–276. IEEE.

Fink J (2012) Anthropomorphism and human likeness in the design of robots and human-robot interaction. In: International Conference on Social Robotics, 199–208. Springer.

Złotowski J, Proudfoot D, Yogeeswaran K, Bartneck C (2015) Anthropomorphism: opportunities and challenges in human–robot interaction. Int J Soc Robot 7(3):347–360.

Khambhaita H, Alami R (2020) Viewing robot navigation in human environment as a cooperative activity. In: Amato NM, Hager G, Thomas S, Torres-Torriti M (eds)Robotics Research, 285–300.. Springer, Cham.

Khamassi M, Girard B, Clodic A, Sandra D, Renaudo E, Pacherie E, Alami R, Chatila R (2016) Integration of action, joint action and learning in robot cognitive architectures. Intellectica-La revue de l’Association pour la Recherche sur les sciences de la Cognition (ARCo) 2016(65):169–203.

Billard AG, Calinon S, Dillmann R (2016) Learning from Humans. In: Siciliano B, Khatib O (eds) Springer Handbook of Robotics. Springer, Cham.

Gracia L, Pérez-Vidal C, Mronga D, Paco J, Azorin J-M, Gea J (2017) Robotic manipulation for the shoe-packaging process. Int J Adv Manuf Technol 92:1053–1067.

Chatila R, Renaudo E, Andries M, Chavez-Garcia R-O, Luce-Vayrac P, Gottstein R, Alami R, Clodic A, Devin S, Girard B, Khamassi M (2018) Toward self-aware robots. Front Robot AI 5:88. https://doi.org/10.3389/frobt.2018.00088 .

de Gea Fernández J, Mronga D, Günther M, Knobloch T, Wirkus M, Schröer M, Trampler M, Stiene S, Kirchner E, Bargsten V, Bänziger T, Teiwes J, Krüger T, Kirchner F (2017) Multimodal sensor-based whole-body control for human–robot collaboration in industrial settings. Robot Auton Syst 94:102–119. https://doi.org/10.1016/j.robot.2017.04.007 .

Aggarwal A, Kampmann P (2012) Tactile sensor-based object recognition and 6D pose estimation. In: ICIRA. Springer, Berlin.

Veruggio G, Operto F, Bekey G (2016) Roboethics: Social and Ethical Implications. In: Siciliano B, Khatib O (eds) Springer Handbook of Robotics. Springer, Cham.

Iacca G, Lagioia F, Loreggia A, Sartor G (2020) A genetic approach to the ethical knob. In: Legal Knowledge and Information Systems. JURIX 2020: The Thirty-third Annual Conference, Brno, Czech Republic, December 9–11, 2020, 103–112. IOS Press.

Dignum V (2019) Responsible Artificial Intelligence: How to Develop and Use AI in a Responsible Way. Springer, Berlin.

Jobin A, Ienca M, Vayena E (2019) The global landscape of ai ethics guidelines. Nat Mach Intell 1(9):389–399. https://doi.org/10.1038/s42256-019-0088-2 .

Goff LKL, Mukhtar G, Coninx A, Doncieux S (2019) Bootstrapping robotic ecological perception from a limited set of hypotheses through interactive perception. arXiv preprint arXiv:1901.10968.

Goff LKL, Yaakoubi O, Coninx A, Doncieux S (2019) Building an affordances map with interactive perception. arXiv preprint arXiv:1903.04413.


Funding

The project has received funding from the European Union’s Horizon 2020 research and innovation programme, Project HumanE-AI-Net, under grant agreement No. 952026.

Author information

Authors and Affiliations

Institute of Intelligent Systems and Robotics (ISIR), Sorbonne Université, CNRS, Paris, France

Stephane Doncieux & Raja Chatila

Robotics Innovation Center, DFKI GmbH (German Research Center for Artificial Intelligence), Bremen, DE, Germany

Sirko Straube & Frank Kirchner

Faculty of Mathematics and Computer Science, Robotics Group, University of Bremen, Bremen, DE, Germany

Frank Kirchner


Contributions

All authors have contributed to the text and approved the final manuscript.

Corresponding author

Correspondence to Stephane Doncieux .

Ethics declarations

Competing interests.

The authors declare that they have no competing interests.

Additional information

Publisher’s note.

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/ .


About this article

Cite this article.

Doncieux, S., Chatila, R., Straube, S. et al. Human-centered AI and robotics. AI Perspect 4 , 1 (2022). https://doi.org/10.1186/s42467-021-00014-x

Download citation

Received : 02 June 2021

Accepted : 27 October 2021

Published : 28 January 2022

DOI : https://doi.org/10.1186/s42467-021-00014-x


Keywords: Human-centered; Human robot interaction


ScienceDaily

Engineers design soft and flexible 'skeletons' for muscle-powered robots


Our muscles are nature's perfect actuators -- devices that turn energy into motion. For their size, muscle fibers are more powerful and precise than most synthetic actuators. They can even heal from damage and grow stronger with exercise.

For these reasons, engineers are exploring ways to power robots with natural muscles. They've demonstrated a handful of "biohybrid" robots that use muscle-based actuators to power artificial skeletons that walk, swim, pump, and grip. But for every bot, there's a very different build, and no general blueprint for how to get the most out of muscles for any given robot design.

Now, MIT engineers have developed a spring-like device that could be used as a basic skeleton-like module for almost any muscle-bound bot. The new spring, or "flexure," is designed to get the most work out of any attached muscle tissues. Like a leg press that's fit with just the right amount of weight, the device maximizes the amount of movement that a muscle can naturally produce.

The researchers found that when they fit a ring of muscle tissue onto the device, much like a rubber band stretched around two posts, the muscle pulled on the spring, reliably and repeatedly, and stretched it five times more than in previous device designs.

The team sees the flexure design as a new building block that can be combined with other flexures to build any configuration of artificial skeletons. Engineers can then fit the skeletons with muscle tissues to power their movements.

"These flexures are like a skeleton that people can now use to turn muscle actuation into multiple degrees of freedom of motion in a very predictable way," says Ritu Raman, the Brit and Alex d'Arbeloff Career Development Professor in Engineering Design at MIT. "We are giving roboticists a new set of rules to make powerful and precise muscle-powered robots that do interesting things."

Raman and her colleagues report the details of the new flexure design in a paper appearing in the journal Advanced Intelligent Systems. The study's MIT co-authors include Naomi Lynch '12, SM '23; undergraduate Tara Sheehan; graduate students Nicolas Castro, Laura Rosado, and Brandon Rios; and professor of mechanical engineering Martin Culpepper.

Muscle pull

When left alone in a petri dish in favorable conditions, muscle tissue will contract on its own but in directions that are not entirely predictable or of much use.

"If muscle is not attached to anything, it will move a lot, but with huge variability, where it's just flailing around in liquid," Raman says.

To get a muscle to work like a mechanical actuator, engineers typically attach a band of muscle tissue between two small, flexible posts. As the muscle band naturally contracts, it can bend the posts and pull them together, producing some movement that would ideally power part of a robotic skeleton. But in these designs, muscles have produced limited movement, mainly because the tissues are so variable in how they contact the posts. Depending on where the muscles are placed on the posts, and how much of the muscle surface is touching the post, the muscles may succeed in pulling the posts together but at other times may wobble around in uncontrollable ways.

Raman's group looked to design a skeleton that focuses and maximizes a muscle's contractions regardless of exactly where and how it is placed on a skeleton, to generate the most movement in a predictable, reliable way.

"The question is: How do we design a skeleton that most efficiently uses the force the muscle is generating?" Raman says.

The researchers first considered the multiple directions that a muscle can naturally move. They reasoned that if a muscle is to pull two posts together along a specific direction, the posts should be connected to a spring that only allows them to move in that direction when pulled.

"We need a device that is very soft and flexible in one direction, and very stiff in all other directions, so that when a muscle contracts, all that force gets efficiently converted into motion in one direction," Raman says.

As it turns out, Raman found many such devices in Professor Martin Culpepper's lab. Culpepper's group at MIT specializes in the design and fabrication of machine elements such as miniature actuators, bearings, and other mechanisms that can be built into machines and systems to enable ultraprecise movement, measurement, and control for a wide variety of applications. Among the group's precision machined elements are flexures -- spring-like devices, often made from parallel beams, that can flex and stretch with nanometer precision.

"Depending on how thin and far apart the beams are, you can change how stiff the spring appears to be," Raman says.
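As a rough illustration of this design rule, the stiffness of an idealized parallel-beam flexure can be estimated from standard beam theory (two fixed-guided beams acting in parallel). The formula and numbers below are textbook approximations and illustrative assumptions, not the dimensions or materials used in this study:

```python
# Idealized stiffness of a parallel-beam flexure from standard beam theory:
# two fixed-guided beams in parallel give k = 24*E*I/L^3 with I = w*t^3/12,
# i.e. k = 2*E*w*t^3 / L^3. The numbers below are illustrative assumptions.

def parallel_flexure_stiffness(E, w, t, L):
    """Stiffness [N/m] of two fixed-guided beams acting in parallel.
    E: Young's modulus [Pa], w: beam width [m], t: thickness [m], L: length [m]
    """
    return 2.0 * E * w * t**3 / L**3

E = 2.5e9      # e.g. a stiff polymer [Pa] (assumed)
w, t, L = 5e-3, 0.3e-3, 15e-3
k = parallel_flexure_stiffness(E, w, t, L)
print(f"flexure stiffness: {k:.1f} N/m")
# Halving t softens the flexure by 8x, as does lengthening L; the beam
# separation mainly sets the stiffness against off-axis motion.
```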

She and Culpepper teamed up to design a flexure specifically tailored with a configuration and stiffness to enable muscle tissue to naturally contract and maximally stretch the spring. The team designed the device's configuration and dimensions based on numerous calculations they carried out to relate a muscle's natural forces with a flexure's stiffness and degree of movement.

The flexure they ultimately designed is 1/100 the stiffness of muscle tissue itself. The device resembles a miniature, accordion-like structure, the corners of which are pinned to an underlying base by a small post, which sits near a neighboring post that is fit directly onto the base. Raman then wrapped a band of muscle around the two corner posts (the team molded the bands from live muscle fibers that they grew from mouse cells), and measured how close the posts were pulled together as the muscle band contracted.

The team found that the flexure's configuration enabled the muscle band to contract mostly along the direction between the two posts. This focused contraction allowed the muscle to pull the posts much closer together -- five times closer -- compared with previous muscle actuator designs.

"The flexure is a skeleton that we designed to be very soft and flexible in one direction, and very stiff in all other directions," Raman says. "When the muscle contracts, all the force is converted into movement in that direction. It's a huge magnification."

The team found they could use the device to precisely measure muscle performance and endurance. When they varied the frequency of muscle contractions (for instance, stimulating the bands to contract once versus four times per second), they observed that the muscles "grew tired" at higher frequencies, and didn't generate as much pull.

"Looking at how quickly our muscles get tired, and how we can exercise them to have high-endurance responses -- this is what we can uncover with this platform," Raman says.

The researchers are now adapting and combining flexures to build precise, articulated, and reliable robots, powered by natural muscles.

"An example of a robot we are trying to build in the future is a surgical robot that can perform minimally invasive procedures inside the body," Raman says. "Technically, muscles can power robots of any size, but we are particularly excited in making small robots, as this is where biological actuators excel in terms of strength, efficiency, and adaptability."


Story Source:

Materials provided by Massachusetts Institute of Technology . Original written by Jennifer Chu. Note: Content may be edited for style and length.

Journal Reference :

  • Naomi Lynch, Nicolas Castro, Tara Sheehan, Laura Rosado, Brandon Rios, Martin Culpepper, Ritu Raman. Enhancing and Decoding the Performance of Muscle Actuators with Flexures . Advanced Intelligent Systems , 2024; DOI: 10.1002/aisy.202300834


500 research papers and projects in robotics – Free Download


The recent history of robotics is full of fascinating moments that accelerated rapid technological advances in artificial intelligence, automation, engineering, energy storage, and machine learning. These advances have transformed the capabilities of robots and their ability to take over tasks once carried out by humans in factories, hospitals, farms, etc.

These technological advances don't occur overnight; they require years of research and development to solve some of the biggest engineering challenges in navigation, autonomy, AI and machine learning, and to build robots that are much safer and more efficient in real-world situations. Universities, institutes, and companies across the world are working tirelessly in various research areas to make this a reality.

In this post, we have listed 500+ recent research papers and projects for those interested in robotics. These free, downloadable research papers can shed light on some of the complex areas in robotics, such as navigation, motion planning, robotic interactions, obstacle avoidance, actuators, machine learning, computer vision, artificial intelligence, collaborative robotics, nano robotics, social robotics, cloud robotics, swarm robotics, sensors, mobile robotics, humanoids, service robots, automation, and autonomy. Feel free to download, and share your own research papers with us to be added to this list.

Navigation and Motion Planning

  • Robotics Navigation Using MPEG CDVS
  • Design, Manufacturing and Test of a High-Precision MEMS Inclination Sensor for Navigation Systems in Robot-assisted Surgery
  • Motion Control of a Three Active Wheeled Mobile Robot and Collision-Free Human Following Navigation in Outdoor Environment
  • One Point Perspective Vanishing Point Estimation for Mobile Robot Vision Based Navigation System
  • Application of Ant Colony Optimization for finding the Navigational path of Mobile Robot-A Review
  • Robot Navigation Using a Brain-Computer Interface
  • Path Generation for Robot Navigation using a Single Ceiling Mounted Camera
  • Exact Robot Navigation Using Power Diagrams
  • Learning Socially Normative Robot Navigation Behaviors with Bayesian Inverse Reinforcement Learning
  • Pipelined, High Speed, Low Power Neural Network Controller for Autonomous Mobile Robot Navigation Using FPGA
  • Proxemics models for human-aware navigation in robotics: Grounding interaction and personal space models in experimental data from psychology
  • Optimality and limit behavior of the ML estimator for Multi-Robot Localization via GPS and Relative Measurements
  • Aerial Robotics: Compact groups of cooperating micro aerial vehicles in clustered GPS denied environment
  • Disordered and Multiple Destinations Path Planning Methods for Mobile Robot in Dynamic Environment
  • Integrating Modeling and Knowledge Representation for Combined Task, Resource and Path Planning in Robotics
  • Path Planning With Kinematic Constraints For Robot Groups
  • Robot motion planning for pouring liquids
  • Implan: Scalable Incremental Motion Planning for Multi-Robot Systems
  • Equilibrium Motion Planning of Humanoid Climbing Robot under Constraints
  • POMDP-lite for Robust Robot Planning under Uncertainty
  • The RoboCup Logistics League as a Benchmark for Planning in Robotics
  • Planning-aware communication for decentralised multi- robot coordination
  • Combined Force and Position Controller Based on Inverse Dynamics: Application to Cooperative Robotics
  • A Four Degree of Freedom Robot for Positioning Ultrasound Imaging Catheters
  • The Role of Robotics in Ovarian Transposition
  • An Implementation on 3D Positioning Aquatic Robot

Robotic Interactions

  • On Indexicality, Direction of Arrival of Sound Sources and Human-Robot Interaction
  • OpenWoZ: A Runtime-Configurable Wizard-of-Oz Framework for Human-Robot Interaction
  • Privacy in Human-Robot Interaction: Survey and Future Work
  • An Analysis Of Teacher-Student Interaction Patterns In A Robotics Course For Kindergarten Children: A Pilot Study
  • Human Robotics Interaction (HRI) based Analysis–using DMT
  • A Cautionary Note on Personality (Extroversion) Assessments in Child-Robot Interaction Studies
  • Interaction as a bridge between cognition and robotics
  • State Representation Learning in Robotics: Using Prior Knowledge about Physical Interaction
  • Eliciting Conversation in Robot Vehicle Interactions
  • A Comparison of Avatar, Video, and Robot-Mediated Interaction on Users’ Trust in Expertise
  • Exercising with Baxter: Design and Evaluation of Assistive Social-Physical Human- Robot Interaction
  • Using Narrative to Enable Longitudinal Human- Robot Interactions
  • Computational Analysis of Affect, Personality, and Engagement in Human-Robot Interactions
  • Human-robot interactions: A psychological perspective
  • Gait of Quadruped Robot and Interaction Based on Gesture Recognition
  • Graphically representing child- robot interaction proxemics
  • Interactive Demo of the SOPHIA Project: Combining Soft Robotics and Brain-Machine Interfaces for Stroke Rehabilitation
  • Interactive Robotics Workshop
  • Activating Robotics Manipulator using Eye Movements
  • Wireless Controlled Robot Movement System Designed using Microcontroller
  • Gesture Controlled Robot using LabVIEW
  • RoGuE: Robot Gesture Engine

Obstacle Avoidance

  • Low Cost Obstacle Avoidance Robot with Logic Gates and Gate Delay Calculations
  • Advanced Fuzzy Potential Field Method for Mobile Robot Obstacle Avoidance
  • Controlling Obstacle Avoiding And Live Streaming Robot Using Chronos Watch
  • Movement Of The Space Robot Manipulator In Environment With Obstacles
  • Assis-Cicerone Robot With Visual Obstacle Avoidance Using a Stack of Odometric Data
  • Obstacle detection and avoidance methods for autonomous mobile robot
  • Moving Domestic Robotics Control Method Based on Creating and Sharing Maps with Shortest Path Findings and Obstacle Avoidance
  • Control of the Differentially-driven Mobile Robot in the Environment with a Non-Convex Star-Shape Obstacle: Simulation and Experiments

Machine Learning

  • A survey of typical machine learning based motion planning algorithms for robotics
  • Linear Algebra for Computer Vision, Robotics, and Machine Learning
  • Applying Radical Constructivism to Machine Learning: A Pilot Study in Assistive Robotics
  • Machine Learning for Robotics and Computer Vision: Sampling methods and Variational Inference
  • Rule-Based Supervisor and Checker of Deep Learning Perception Modules in Cognitive Robotics
  • The Limits and Potentials of Deep Learning for Robotics
  • Autonomous Robotics and Deep Learning
  • A Unified Knowledge Representation System for Robot Learning and Dialogue

Computer Vision

  • Computer Vision Based Chess Playing Capabilities for the Baxter Humanoid Robot
  • Non-Euclidean manifolds in robotics and computer vision: why should we care?
  • Topology of singular surfaces, applications to visualization and robotics
  • On the Impact of Learning Hierarchical Representations for Visual Recognition in Robotics
  • Focused Online Visual-Motor Coordination for a Dual-Arm Robot Manipulator
  • Towards Practical Visual Servoing in Robotics
  • Visual Pattern Recognition In Robotics
  • Automated Visual Inspection: Position Identification of Object for Industrial Robot Application based on Color and Shape
  • Automated Creation of Augmented Reality Visualizations for Autonomous Robot Systems
  • Implementation of Efficient Night Vision Robot on Arduino and FPGA Board

Artificial Intelligence

  • On the Relationship between Robotics and Artificial Intelligence
  • Artificial Spatial Cognition for Robotics and Mobile Systems: Brief Survey and Current Open Challenges
  • Artificial Intelligence, Robotics and Its Impact on Society
  • The Effects of Artificial Intelligence and Robotics on Business and Employment: Evidence from a survey on Japanese firms
  • Artificially Intelligent Maze Solver Robot
  • Artificial intelligence, Cognitive Robotics and Human Psychology
  • Minecraft as an Experimental World for AI in Robotics
  • Impact of Robotics, RPA and AI on the insurance industry: challenges and opportunities

Probabilistic Programming

  • On the use of probabilistic relational affordance models for sequential manipulation tasks in robotics
  • Exploration strategies in developmental robotics: a unified probabilistic framework
  • Probabilistic Programming for Robotics

Actuators

  • New design of a soft-robotics wearable elbow exoskeleton based on Shape Memory Alloy wires actuators
  • Design of a Modular Series Elastic Upgrade to a Robotics Actuator
  • Applications of Compliant Actuators to Wearing Robotics for Lower Extremity
  • Review of Development Stages in the Conceptual Design of an Electro-Hydrostatic Actuator for Robotics
  • Fluid electrodes for submersible robotics based on dielectric elastomer actuators
  • Cascaded Control Of Compliant Actuators In Friendly Robotics

Collaborative Robotics

  • Interpretable Models for Fast Activity Recognition and Anomaly Explanation During Collaborative Robotics Tasks
  • Collaborative Work Management Using SWARM Robotics
  • Collaborative Robotics : Assessment of Safety Functions and Feedback from Workers, Users and Integrators in Quebec
  • Accessibility, Making and Tactile Robotics : Facilitating Collaborative Learning and Computational Thinking for Learners with Visual Impairments
  • Trajectory Adaptation of Robot Arms for Head-pose Dependent Assistive Tasks

Mobile Robotics

  • Experimental research of proximity sensors for application in mobile robotics in greenhouse environment.
  • Multispectral Texture Mapping for Telepresence and Autonomous Mobile Robotics
  • A Smart Mobile Robot to Detect Abnormalities in Hazardous Zones
  • Simulation of nonlinear filter based localization for indoor mobile robot
  • Integrating control science in a practical mobile robotics course
  • Experimental Study of the Performance of the Kinect Range Camera for Mobile Robotics
  • Planification of an Optimal Path for a Mobile Robot Using Neural Networks
  • Security of Networking Control System in Mobile Robotics (NCSMR)
  • Vector Maps in Mobile Robotics
  • An Embedded System for a Bluetooth Controlled Mobile Robot Based on the ATmega8535 Microcontroller
  • Experiments of NDT-Based Localization for a Mobile Robot Moving Near Buildings
  • Hardware and Software Co-design for the EKF Applied to the Mobile Robotics Localization Problem
  • Design of a SESLogo Program for Mobile Robot Control
  • An Improved Ekf-Slam Algorithm For Mobile Robot
  • Intelligent Vehicles at the Mobile Robotics Laboratory, University of São Paulo, Brazil [ITS Research Lab]
  • Introduction to Mobile Robotics
  • Miniature Piezoelectric Mobile Robot driven by Standing Wave
  • Mobile Robot Floor Classification using Motor Current and Accelerometer Measurements

Sensors

  • Sensors for Robotics 2015
  • An Automated Sensing System for Steel Bridge Inspection Using GMR Sensor Array and Magnetic Wheels of Climbing Robot
  • Sensors for Next-Generation Robotics
  • Multi-Robot Sensor Relocation To Enhance Connectivity In A WSN
  • Automated Irrigation System Using Robotics and Sensors
  • Design Of Control System For Articulated Robot Using Leap Motion Sensor
  • Automated configuration of vision sensor systems for industrial robotics

Nano robotics

  • Light Robotics: an all-optical nano-and micro-toolbox
  • Light-driven Nano-robotics
  • Light Robotics: a new technology and its applications
  • Light Robotics: Aiming towards all-optical nano-robotics
  • NanoBiophotonics Applications of Light Robotics
  • System Level Analysis for a Locomotive Inspection Robot with Integrated Microsystems
  • High-Dimensional Robotics at the Nanoscale: Kino-Geometric Modeling of Proteins and Molecular Mechanisms
  • A Study Of Insect Brain Using Robotics And Neural Networks

Social Robotics

  • Integrative Social Robotics Hands-On
  • ProCRob Architecture for Personalized Social Robotics
  • Definitions and Metrics for Social Robotics, along with some Experience Gained in this Domain
  • Transmedia Choreography: Integrating Multimodal Video Annotation in the Creative Process of a Social Robotics Performance Piece
  • Co-designing with children: An approach to social robot design
  • Toward Social Cognition in Robotics: Extracting and Internalizing Meaning from Perception
  • Human Centered Robotics : Designing Valuable Experiences for Social Robots
  • Preliminary system and hardware design for Quori, a low-cost, modular, socially interactive robot
  • Socially assistive robotics: Human augmentation versus automation
  • Tega: A Social Robot

Humanoid robot

  • Compliance Control and Human-Robot Interaction – International Journal of Humanoid Robotics
  • The Design of Humanoid Robot Using C# Interface on Bluetooth Communication
  • An Integrated System to approach the Programming of Humanoid Robotics
  • Humanoid Robot Slope Gait Planning Based on Zero Moment Point Principle
  • Literature Review Real-Time Vision-Based Learning for Human-Robot Interaction in Social Humanoid Robotics
  • The Roasted Tomato Challenge for a Humanoid Robot
  • Remotely teleoperating a humanoid robot to perform fine motor tasks with virtual reality

Cloud Robotics

  • CR3A: Cloud Robotics Algorithms Allocation Analysis
  • Cloud Computing and Robotics for Disaster Management
  • ABHIKAHA: Aerial Collision Avoidance in Quadcopter using Cloud Robotics
  • The Evolution Of Cloud Robotics: A Survey
  • Sliding Autonomy in Cloud Robotics Services for Smart City Applications
  • CORE: A Cloud-based Object Recognition Engine for Robotics
  • A Software Product Line Approach for Configuring Cloud Robotics Applications
  • Cloud robotics and automation: A survey of related work
  • ROCHAS: Robotics and Cloud-assisted Healthcare System for Empty Nester

Swarm Robotics

  • Evolution of Task Partitioning in Swarm Robotics
  • GESwarm: Grammatical Evolution for the Automatic Synthesis of Collective Behaviors in Swarm Robotics
  • A Concise Chronological Reassess Of Different Swarm Intelligence Methods With Multi Robotics Approach
  • The Swarm/Potential Model: Modeling Robotics Swarms with Measure-valued Recursions Associated to Random Finite Sets
  • The TAM: abstracting complex tasks in swarm robotics research
  • Task Allocation in Foraging Robot Swarms: The Role of Information Sharing
  • Robotics on the Battlefield Part II
  • Implementation Of Load Sharing Using Swarm Robotics
  • An Investigation of Environmental Influence on the Benefits of Adaptation Mechanisms in Evolutionary Swarm Robotics

Soft Robotics

  • Soft Robotics: The Next Generation of Intelligent Machines
  • Soft Robotics: Transferring Theory to Application, “Soft Components for Soft Robots”
  • Advances in Soft Computing, Intelligent Robotics and Control
  • The BRICS Component Model: A Model-Based Development Paradigm For Complex Robotics Software Systems
  • Soft Mechatronics for Human-Friendly Robotics
  • Seminar Soft-Robotics
  • Special Issue on Open Source Software-Supported Robotics Research.
  • Soft Brain-Machine Interfaces for Assistive Robotics: A Novel Control Approach
  • Towards A Robot Hardware Abstraction Layer (R-HAL) Leveraging the XBot Software Framework

Service Robotics

  • Fundamental Theories and Practice in Service Robotics
  • Natural Language Processing in Domestic Service Robotics
  • Localization and Mapping for Service Robotics Applications
  • Designing of Service Robot for Home Automation-Implementation
  • Benchmarking Speech Understanding in Service Robotics
  • The Cognitive Service Robotics Apartment
  • Planning with Task-oriented Knowledge Acquisition for A Service Robot
  • Cognitive Robotics
  • Meta-Morphogenesis theory as background to Cognitive Robotics and Developmental Cognitive Science
  • Experience-based Learning for Bayesian Cognitive Robotics
  • Weakly supervised strategies for natural object recognition in robotics
  • Robotics-Derived Requirements for the Internet of Things in the 5G Context
  • A Comparison of Modern Synthetic Character Design and Cognitive Robotics Architecture with the Human Nervous System
  • PREGO: An Action Language for Belief-Based Cognitive Robotics in Continuous Domains
  • The Role of Intention in Cognitive Robotics
  • On Cognitive Learning Methodologies for Cognitive Robotics
  • Relational Enhancement: A Framework for Evaluating and Designing Human-Robot Relationships
  • A Fog Robotics Approach to Deep Robot Learning: Application to Object Recognition and Grasp Planning in Surface Decluttering
  • Spatial Cognition in Robotics
  • IOT Based Gesture Movement Recognize Robot
  • Deliberative Systems for Autonomous Robotics: A Brief Comparison Between Action-oriented and Timelines-based Approaches
  • Formal Modeling and Verification of Dynamic Reconfiguration of Autonomous Robotics Systems
  • Robotics on its feet: Autonomous Climbing Robots
  • Implementation of Autonomous Metal Detection Robot with Image and Message Transmission using Cell Phone
  • Toward autonomous architecture: The convergence of digital design, robotics, and the built environment

Automation

  • Advances in Robotics Automation
  • Data-centered Dependencies and Opportunities for Robotics Process Automation in Banking
  • On the Combination of Gamification and Crowd Computation in Industrial Automation and Robotics Applications
  • Meshworm With Segment-Bending Anchoring for Colonoscopy (IEEE Robotics and Automation Letters, 2(3), pp. 1718-1724)
  • Recent Advances in Robotics and Automation
  • Key Elements Towards Automation and Robotics in Industrialised Building System (IBS)

Education

  • Knowledge Building, Innovation Networks, and Robotics in Math Education
  • The potential of a robotics summer course on engineering education
  • Robotics as an Educational Tool: Impact of Lego Mindstorms
  • Effective Planning Strategy in Robotics Education: An Embodied Approach
  • An innovative approach to School-Work turnover programme with Educational Robotics
  • The importance of educational robotics as a precursor of Computational Thinking in early childhood education
  • Pedagogical Robotics: A way to Experiment and Innovate in Educational Teaching in Morocco
  • Learning by Making and Early School Leaving: an Experience with Educational Robotics
  • Robotics and Coding: Fostering Student Engagement
  • Computational Thinking with Educational Robotics
  • New Trends In Education Of Robotics
  • Educational robotics as an instrument of formation: a public elementary school case study
  • Developmental Situation and Strategy for Engineering Robot Education in China University
  • Towards the Humanoid Robot Butler
  • YAGI-An Easy and Light-Weighted Action-Programming Language for Education and Research in Artificial Intelligence and Robotics
  • Simultaneous Tracking and Reconstruction (STAR) of Objects and its Application in Educational Robotics Laboratories
  • The importance and purpose of simulation in robotics
  • An Educational Tool to Support Introductory Robotics Courses
  • Lollybot: Where Candy, Gaming, and Educational Robotics Collide
  • Assessing the Impact of an Autonomous Robotics Competition for STEM Education
  • Educational robotics for promoting 21st century skills
  • New Era for Educational Robotics: Replacing Teachers with a Robotic System to Teach Alphabet Writing
  • Robotics as a Learning Tool for Educational Transformation
  • The Herd of Educational Robotic Devices (HERD): Promoting Cooperation in Robotics Education
  • Robotics in physics education: fostering graphing abilities in kinematics
  • Enabling Rapid Prototyping in K-12 Engineering Education with BotSpeak, a Universal Robotics Programming Language
  • Innovating in robotics education with Gazebo simulator and JdeRobot framework
  • How to Support Students’ Computational Thinking Skills in Educational Robotics Activities
  • Educational Robotics At Lower Secondary School
  • Evaluating the impact of robotics in education on pupils’ skills and attitudes
  • Imagining, Playing, and Coding with KIBO: Using Robotics to Foster Computational Thinking in Young Children
  • How Does a First LEGO League Robotics Program Provide Opportunities for Teaching Children 21st Century Skills
  • A Software-Based Robotic Vision Simulator For Use In Teaching Introductory Robotics Courses
  • Robotics Practical
  • A project-based strategy for teaching robotics using NI’s embedded-FPGA platform
  • Teaching a Core CS Concept through Robotics
  • Ms. Robot Will Be Teaching You: Robot Lecturers in Four Modes of Automated Remote Instruction
  • Robotic Competitions: Teaching Robotics and Real-Time Programming with LEGO Mindstorms
  • Visegrad Robotics Workshop-different ideas to teach and popularize robotics
  • LEGO® Mindstorms® EV3 Robotics Instructor Guide
  • MOKASIT: Multi Camera System for Robotics Monitoring and Teaching
  • Autonomous Robot Design and Build: Novel Hands-on Experience for Undergraduate Students
  • Semi-Autonomous Inspection Robot
  • Sumo Robot Competition
  • Engagement of students with Robotics-Competitions-like projects in a PBL BSc Engineering course
  • Robo Camp K12 Inclusive Outreach Program: A three-step model of Effective Introducing Middle School Students to Computer Programming and Robotics
  • The Effectiveness of Robotics Competitions on Students’ Learning of Computer Science
  • Engaging with Mathematics: How mathematical art, robotics and other activities are used to engage students with university mathematics and promote
  • Design Elements of a Mobile Robotics Course Based on Student Feedback
  • Sixth-Grade Students’ Motivation and Development of Proportional Reasoning Skills While Completing Robotics Challenges
  • Student Learning of Computational Thinking in A Robotics Curriculum: Transferrable Skills and Relevant Factors
  • A Robotics-Focused Instructional Framework for Design-Based Research in Middle School Classrooms
  • Transforming a Middle and High School Robotics Curriculum
  • Geometric Algebra for Applications in Cybernetics: Image Processing, Neural Networks, Robotics and Integral Transforms
  • Experimenting and validating didactical activities in the third year of primary school enhanced by robotics technology

Construction

  • Bibliometric analysis on the status quo of robotics in construction
  • AtomMap: A Probabilistic Amorphous 3D Map Representation for Robotics and Surface Reconstruction
  • Robotic Design and Construction Culture: Ethnography in Osaka University’s Miyazaki Robotics Lab
  • Infrastructure Robotics: A Technology Enabler for Lunar In-Situ Resource Utilization, Habitat Construction and Maintenance
  • A Planar Robot Design And Construction With Maple
  • Robotics and Automation in Construction: Advanced Construction and Future Technology
  • Why robotics in mining
  • Examining Influences on the Evolution of Design Ideas in a First-Year Robotics Project
  • Mining Robotics
  • TIRAMISU: Technical survey, close-in-detection and disposal mine actions in Humanitarian Demining: challenges for Robotics Systems
  • Robotics for Sustainable Agriculture in Aquaponics
  • Design and Fabrication of Crop Analysis Agriculture Robot
  • Enhance Multi-Disciplinary Experience for Agriculture and Engineering Students with Agriculture Robotics Project
  • Work in progress: Robotics mapping of landmine and UXO contaminated areas
  • Robot Based Wireless Monitoring and Safety System for Underground Coal Mines using Zigbee Protocol: A Review
  • Minesweepers uses robotics’ awesomeness to raise awareness about landmines & explosive remnants of war
  • Intelligent Autonomous Farming Robot with Plant Disease Detection using Image Processing
  • Automatic Pick And Place Robot
  • Video Prompting to Teach Robotics and Coding to Students with Autism Spectrum Disorder

Medical and Surgical Robotics

  • Bilateral Anesthesia Mumps After Robot-Assisted Hysterectomy Under General Anesthesia: Two Case Reports
  • Future Prospects of Artificial Intelligence in Robotics Software, A healthcare Perspective
  • Designing new mechanism in surgical robotics
  • Open-Source Research Platforms and System Integration in Modern Surgical Robotics
  • Soft Tissue Robotics–The Next Generation
  • CORVUS Full-Body Surgical Robotics Research Platform
  • OP: Sense, a rapid prototyping research platform for surgical robotics
  • Preoperative Planning Simulator with Haptic Feedback for Raven-II Surgical Robotics Platform
  • Origins of Surgical Robotics: From Space to the Operating Room
  • Accelerometer Based Wireless Gesture Controlled Robot for Medical Assistance using Arduino Lilypad
  • The preliminary results of a force feedback control for Sensorized Medical Robotics
  • Medical robotics Regulatory, ethical, and legal considerations for increasing levels of autonomy
  • Robotics in General Surgery
  • Evolution Of Minimally Invasive Surgery: Conventional Laparoscopy to Robotics
  • Robust trocar detection and localization during robot-assisted endoscopic surgery
  • How can we improve the Training of Laparoscopic Surgery thanks to the Knowledge in Robotics
  • Discussion on robot-assisted laparoscopic cystectomy and Ileal neobladder surgery preoperative care
  • Robotics in Neurosurgery: Evolution, Current Challenges, and Compromises
  • Hybrid Rendering Architecture for Realtime and Photorealistic Simulation of Robot-Assisted Surgery
  • Robotics, Image Guidance, and Computer-Assisted Surgery in Otology/Neurotology
  • Neuro-robotics model of visual delusions
  • Neuro-Robotics
  • Robotics in the Rehabilitation of Neurological Conditions
  • What if a Robot Could Help Me Care for My Parents
  • A Robot to Provide Support in Stigmatizing Patient-Caregiver Relationships
  • A New Skeleton Model and the Motion Rhythm Analysis for Human Shoulder Complex Oriented to Rehabilitation Robotics
  • Towards Rehabilitation Robotics: Off-The-Shelf BCI Control of Anthropomorphic Robotic Arms
  • Rehabilitation Robotics 2013
  • Combined Estimation of Friction and Patient Activity in Rehabilitation Robotics
  • Brain, Mind and Body: Motion Behaviour Planning, Learning and Control in view of Rehabilitation and Robotics
  • Reliable Robotics – Diagnostics
  • Robotics for Successful Ageing
  • Upper Extremity Robotics Exoskeleton: Application, Structure And Actuation

Defence and Military

  • Voice Guided Military Robot for Defence Application
  • Design and Control of Defense Robot Based On Virtual Reality
  • AI, Robotics and Cyber: How Much will They Change Warfare
  • Border Security Robot
  • Brain Controlled Robot for Indian Armed Force
  • Autonomous Military Robotics
  • Wireless Restrained Military Discoursed Robot
  • Bomb Detection And Defusion In Planes By Application Of Robotics
  • Impacts Of The Robotics Age On Naval Force Design, Effectiveness, And Acquisition

Space Robotics

  • Lego robotics teacher professional learning
  • New Planar Air-bearing Microgravity Simulator for Verification of Space Robotics Numerical Simulations and Control Algorithms
  • The Artemis Rover as an Example for Model Based Engineering in Space Robotics
  • Rearrangement planning using object-centric and robot-centric action spaces
  • Model-based Apprenticeship Learning for Robotics in High-dimensional Spaces
  • Emergent Roles, Collaboration and Computational Thinking in the Multi-Dimensional Problem Space of Robotics
  • Reaction Null Space of a multibody system with applications in robotics

Other Industries

  • Robotics in clothes manufacture
  • Recent Trends in Robotics and Computer Integrated Manufacturing: An Overview
  • Application Of Robotics In Dairy And Food Industries: A Review
  • Architecture for theatre robotics
  • Human-multi-robot team collaboration for efficient warehouse operation
  • A Robot-based Application for Physical Exercise Training
  • Application Of Robotics In Oil And Gas Refineries
  • Implementation of Robotics in Transmission Line Monitoring
  • Intelligent Wireless Fire Extinguishing Robot
  • Monitoring and Controlling of Fire Fighting Robot using IOT
  • Robotics An Emerging Technology in Dairy Industry
  • Robotics and Law: A Survey
  • Increasing ECE Student Excitement through an International Marine Robotics Competition
  • Application of Swarm Robotics Systems to Marine Environmental Monitoring

Future of Robotics / Trends

  • The future of Robotics Technology
  • Robotics & Automation Are Killing Jobs: A Roadmap for the Future is Needed
  • The next big thing(s) in robotics
  • Robotics in Indian Industry-Future Trends
  • The Future of Robot Rescue Simulation Workshop
  • Quantum Robotics: Primer on Current Science and Future Perspectives
  • Emergent Trends in Robotics and Intelligent Systems


  • Review Article
  • Open access
  • Published: 08 April 2024

The evolution of robotics: research and application progress of dental implant robotic systems

  • Chen Liu (ORCID: orcid.org/0009-0000-1771-5430)
  • Yuchen Liu
  • Rui Xie
  • Zhiwen Li
  • Shizhu Bai (ORCID: orcid.org/0000-0002-2439-3211)
  • Yimin Zhao

International Journal of Oral Science, volume 16, Article number: 28 (2024)


Subjects

  • Medical research
  • Oral diseases
  • Preclinical research

Abstract

The use of robots to augment human capabilities and assist in work has long been an aspiration. Robotics has been developing since the 1960s, when the first industrial robot was introduced. As technology has advanced, robotic-assisted surgery has demonstrated numerous advantages over conventional techniques, including greater precision, efficiency, minimal invasiveness, and safety, and it has become a research hotspot and cutting-edge trend. This article reviews the history of medical robot development and the seminal research papers on current progress. Taking the autonomous dental implant robotic system as an example, the advantages and prospects of medical robotic systems are discussed to provide a reference for future research.


The development of medical robots has been a long journey of exploration. After being practically validated in industrial robots, this technology has spread globally and is now an essential part of modern production and lifestyles. Medical robots are increasingly in the vanguard of clinical practice in diagnosis, treatment, visualization, and other areas. We are currently witnessing a transformative shift from cutting-edge research to the widespread application of medical robots. This review focuses on the historical trajectory of medical robots, with particular emphasis on the development history, current research status, and prospects of dental implant robotic systems.

Definition and history of robots

Definition and architectures of robots

According to the International Organization for Standardization (ISO), a robot is an automatic, position-controlled, programmable multi-functional manipulator with several axes. It can process various materials, parts, tools, and special devices through programmable automation to perform intended tasks. 1 A robot’s structure typically consists of four parts: the actuation system, the drive-transmission system, the control system, and the intelligent system. The actuation system is the part of the robot that directly performs work, similar to a human hand. The drive-transmission system transmits force and motion from a power source to the actuator. The control system comprises a control computer, control software, and servo controllers, similar to a human brain. The intelligent system typically includes a perception system and an analytical decision-making system.
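
To make the relationship between these four subsystems concrete, the sketch below wires them together in a minimal Python program; every class and method name is a hypothetical placeholder invented for illustration, not part of any real robot SDK.

```python
# Illustrative sketch of the four-part robot architecture described above.
# All names are hypothetical placeholders, not a real robot API.

class ActuationSystem:
    """Directly performs the work, analogous to a human hand."""
    def execute(self, motion):
        print(f"executing {motion}")

class DriveTransmissionSystem:
    """Transmits force and motion from the power source to the actuator."""
    def transmit(self, command):
        return command  # gears, belts, or ball screws would transform this

class ControlSystem:
    """Control computer, software, and servo controllers: the 'brain'."""
    def plan_motion(self, decision):
        return f"motion toward {decision}"

class IntelligentSystem:
    """Perception plus analytical decision making."""
    def decide(self, sensor_reading):
        return sensor_reading["goal"]

class Robot:
    def __init__(self):
        self.intelligence = IntelligentSystem()
        self.control = ControlSystem()
        self.drive = DriveTransmissionSystem()
        self.actuator = ActuationSystem()

    def step(self, sensor_reading):
        decision = self.intelligence.decide(sensor_reading)  # perceive, decide
        motion = self.control.plan_motion(decision)          # plan
        self.actuator.execute(self.drive.transmit(motion))   # drive, act

Robot().step({"goal": "pick up part"})
```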

Evolution of robots

The history of robots can be traced back over 3,000 years. 2 Throughout history, scientists and craftsmen have designed and manufactured prototypes that simulate animal or human characteristics. 1 However, these inventions can only be classified as mechanical devices: they achieved automated functions through mechanical and physical principles but lacked the intelligence and autonomy of modern robots. They demonstrate the level of engineering and mechanical manufacturing in ancient times and laid the foundation for later research on robots. Joseph Engelberger, recognized as the Father of Robotics, founded Unimation Corporation in 1958, the world’s first robot-manufacturing factory, which marked the official start of the industrialization of robots. In 1978, Unimation developed the Programmable Universal Machine for Assembly (PUMA), a significant milestone in the development of industrial robotics. In recent years, robotics has expanded significantly owing to the continued development of sensor types, intelligent algorithms, and multidisciplinary integration. The technology has advanced from the initial industrial robotic arms to bionic robots, soft robots, nanorobots, and other forms.

Classification of robotics

The International Federation of Robotics (IFR) classifies robotics into two distinct categories, industrial robotics and service robotics, in accordance with the international standard ISO 8373:2012. 3 Industrial robotics are automatically controlled, programmable, multipurpose manipulators that can operate with fixed or autonomous mobility and are primarily used in industrial production. 3 Service robotics are actuated mechanisms that perform useful tasks for humans but exclude industrial automation applications. The IFR has further divided service robotics into segments that meet the diverse requirements of various industries (Fig. 1).

Figure 1. Categories of robots according to the International Federation of Robotics

Medical robotics

In 1985, the Puma 200 robot (Westinghouse Electric, Pittsburgh, PA) was used for needle placement in computed tomography (CT)-guided brain biopsy at the Los Angeles Hospital in the United States, marking the beginning of the era of medical robot applications. 4 , 5 After nearly 40 years of continuous development and progress, medical robotics have been widely used in multiple fields, including surgery, nursing, and rehabilitation, demonstrating numerous remarkable advantages and potential.

Yang 6, 7, 8 has divided the autonomy of medical robotics into six levels, as follows: (0) no autonomy, (1) robot assistance, (2) task autonomy, (3) conditional autonomy, (4) high autonomy, and (5) full autonomy. At level 0, the operator performs all tasks, including monitoring, generating performance options, selecting the option to perform (decision making), and executing the decision made, as with the da Vinci robotic system (Intuitive Inc., California, USA). At level 1, the operator continuously controls the robot while the robot provides guidance with positional constraints; the Mako Smart Robotics system used in orthopedic surgery is an example. At level 2, the operator controls the robot discretely rather than continuously, and the robot independently completes specific tasks based on operator instructions and pre-programmed procedures; an example is the ROBODOC, which performs total hip and total knee replacement surgeries. At level 3, the robot performs surgery based on pre-programmed procedures and can also modify the pre-planned schedule in real time to accommodate changes in the intraoperative position of the target, as exemplified by the CyberKnife radiation therapy robot with its respiratory tracking functionality. At the higher levels of autonomy (level 5, and arguably level 4), the robot is not only a medical device but is effectively practicing medicine; such systems do not yet exist owing to regulatory, ethical, and legal considerations. 6, 7, 8
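
This six-level taxonomy lends itself to a small lookup; the sketch below encodes it as a Python enum (the identifier names are our own shorthand for the levels described above):

```python
from enum import IntEnum

class MedicalRobotAutonomy(IntEnum):
    """Levels of autonomy for medical robots, after the six-level taxonomy above."""
    NO_AUTONOMY = 0           # operator performs all tasks (e.g., da Vinci)
    ROBOT_ASSISTANCE = 1      # continuous control with positional constraints (e.g., Mako)
    TASK_AUTONOMY = 2         # discrete control; robot completes specific tasks (e.g., ROBODOC)
    CONDITIONAL_AUTONOMY = 3  # robot adapts the plan in real time (e.g., CyberKnife)
    HIGH_AUTONOMY = 4         # does not yet exist
    FULL_AUTONOMY = 5         # does not yet exist; robot would "practice medicine"

print(MedicalRobotAutonomy.CONDITIONAL_AUTONOMY)  # -> MedicalRobotAutonomy.CONDITIONAL_AUTONOMY
```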

The IFR classifies medical robotics as special service robotics that combine medical diagnostic methods with new technologies, such as artificial intelligence (AI) and big data, to provide services including surgery, rehabilitation, nursing, medical transportation, and consultation. 9 Medical robotics are categorized into five types based on their functions: surgical robotics, rehabilitation robotics, diagnostic robotics, laboratory analysis automation, and other robotics (robotics used for medical transportation are not included in this category).

Surgical robotics

Minimally invasive surgery and accurate intervention require surgeons to exercise more discernment, expand their range of vision, and increase their flexibility, demands that have driven the development of surgical robotics (the surgical robot architecture 10 is shown in Fig. 2). A surgical robot can be equipped with an advanced three-dimensional (3D) imaging system and augmented reality technology to provide high-definition images of the surgical scene, and it can display important anatomical structures, such as blood vessels and nerves, in real time. This allows surgeons to perform precise operations with robotic assistance. Higher-level automatic medical robots perform precise surgical operations through image guidance and navigation systems based on preoperative planning. Moreover, the robotic arm has a level of precision and stability that surpasses the free hand, allowing it to perform small and delicate operations with fewer errors caused by limited physician experience, fatigue, and hand tremors. In addition, surgical robots integrate artificial intelligence technology, which can perform automatic diagnostic analysis, adjust surgical strategies, and provide personalized surgical plans through deep learning. 11 Surgical robots therefore utilize vision, speech recognition, telecommunication, 3D imaging, and artificial intelligence technologies to enhance surgical skills through sensing and image guidance, overcoming the limitations of manual operation and improving surgical accuracy and reliability. In comparison to conventional surgery, robotic-assisted surgery can reduce trauma, shorten recovery periods, and relieve pain. 12, 13 Additionally, it can be used for remote surgery, operates continuously without fatigue, reduces the workload of medical staff, and minimizes occupational exposure for surgeons. Medical robotics have gradually entered the commercialization stage and are being utilized in clinical settings (Table 1). Currently, the best-known surgical robot is the da Vinci system, which enables surgeons to perform minimally invasive surgery for multiple complicated diseases accurately, with good hand-eye coordination and magnification.

Figure 2. The surgical robotic architecture

Dental treatment involves the special anatomical structure of the mouth and is characterized by limited visibility, a narrow operating space, and interference from saliva and the tongue. As a result, dental operations are intricate and rely mainly on the surgeon’s experience and expertise, which take inexperienced surgeons a long time to acquire. With the successful use of the da Vinci robotic system in laparoscopic surgery, surgeons began to consider its potential application in maxillofacial surgery. The da Vinci robot has been used for cleft palate repair, 14, 15 treating patients with obstructive sleep apnea-hypopnea syndrome (OSAHS), 16 and oral and oropharyngeal tumor resection. 17, 18 However, owing to the complexity of the oropharyngeal anatomy, the multiple robotic arms of the da Vinci system limit the surgeon’s vision, which is not conducive to surgical performance. To overcome these shortcomings, flexible robots (such as the Flex), approved by the Food and Drug Administration, have made robot-assisted oropharyngeal surgery possible. Additionally, oral and cranio-maxillofacial bone surgery, such as orthognathic surgery and dental implant surgery, requires accurate osteotomies, which cannot be achieved by the da Vinci system. Research on robotic-assisted dental implant surgery originated in 2001, and related studies have increased gradually in recent years. In addition to conventional implant surgery, dental implant robots can also perform zygomatic implant placement. 19, 20 Among these studies, the largest number of articles were published in China, followed by the United States (Fig. 3). In Part 3 of this article, the relevant studies on dental implant robotics are elaborated in detail.

Figure 3. Related research on robotic-assisted dental implant placement. a The number of published papers on dental implant robotics in different years and b in different countries (as of December 2023)

Rehabilitation robotics

Rehabilitation robotics are a significant area and research hotspot in medical robotics, second only to surgical robotics. Rehabilitation robotics fall into two categories: therapeutic and assistive robotics. Therapeutic robotics provide psychological or physical treatment to improve specific functions and are widely used in physical training and functional recovery of patients with paralysis, as well as in improving the interactive ability of children with autism through behavioral induction. 21 Assistive robotics aim to improve the quality of life of individuals with musculoskeletal or neuromuscular impairments by compensating for or replacing lost mobility or functionality. 22, 23, 24 For instance, Mike Topping’s Handy1 assists the most severely disabled with several everyday functions. 25 Similarly, Israel’s ReWalk provides powered hip and knee motion to enable individuals with spinal cord injury to stand upright, walk, turn, climb, and descend stairs. 26 Moreover, Japan’s wearable powered prosthesis, HAL, enables patients to control joint movements independently by detecting bioelectrical signals on the skin surface during movement, in combination with foot pressure sensors. 27

Diagnostic robotics

Diagnostic robotics aid doctors in conducting examinations and making diagnoses, with the aim of improving the accuracy, convenience, non-invasiveness, and safety of diagnosis. For instance, the wireless capsule endoscope introduced by Given Imaging (now Medtronic) allows minimally invasive inspection of the gastrointestinal tract. Patients swallow a PillCam that captures images deep within the intestines, which has revolutionized gastrointestinal endoscopy and is now a clinically viable alternative to standard interventional endoscopy. Furthermore, wearable robotics are increasingly being utilized to non-invasively detect various health indicators and assist in disease diagnosis.

Laboratory robotics

Laboratory robotics handle and analyze samples in medical laboratories. Innovations in robotics and information technologies have created new opportunities for laboratory automation. These robots tirelessly and accurately perform tasks, improving the precision and reliability of experiments while reducing costs. At the University of Virginia Medical Center, robots operate instruments and analyze blood gases and electrolytes in the hospital laboratory; because the robotic system works continuously, it improves laboratory efficiency while reducing the burden on laboratory technicians. 28 In Germany, Nicole Rupp has used the Dobot Magician robot to develop an economical automated laboratory system that coordinates various instruments for experiments; the results obtained from this system were not statistically different from those obtained manually. 29

Other medical robotics

The medical field has witnessed a significant increase in the use of robotics, leading to the development of new types of robots and functions that cater to the requirements of doctors and patients. Other medical robotics provide non-medical operational services, such as assisting nurses with guidance, transportation, cleaning, inspection, monitoring, and disinfection. Robotics are also available for daily home care, providing assistance, monitoring behavior and health, and offering companionship for older individuals. 30 Furthermore, there are robots specifically designed to train emergency personnel; these can simulate complex trauma scenarios with multiple injuries in a highly accurate manner. 31 Robotic surgery simulation can be combined with virtual reality (VR), 3D-printed organ and tissue models, or anesthetized live animals to rapidly build the robotic surgical skills required of novice surgeons. In addition, during pandemics such as Ebola and COVID-19, sampling robotics can effectively reduce the risk of infection. There are also robots designed for emergency rescue, medical education, and training. 32, 33 Soft robotics, bionic robotics, nanorobots, and other robotics suited to various functional needs are also hot topics in current medical robotic research, and they exhibit the typical characteristics of specialization, personalization, remoteness, intelligence, and immersion.

Dental implant robotic system

Implantology is widely considered the preferred treatment for patients with partial or complete edentulous arches. 34 , 35 The success of the surgery in achieving good esthetic and functional outcomes is directly related to correct and prosthetically-driven implant placement. 36 Accurate implant placement is crucial to avoid potential complications such as excessive lateral forces, prosthetic misalignment, food impaction, secondary bone resorption, and peri-implantitis. 37 Any deviation during the implant placement can result in damage to the surrounding blood vessels, nerves, and adjacent tooth roots and even cause sinus perforation. 38 Therefore, preoperative planning must be implemented intraoperatively with utmost precision to ensure quality and minimize intraoperative and postoperative side effects. 39

Currently, there are three implant treatment approaches: free-handed implant placement, static computer-aided implant placement, and dynamic computer-aided implant placement. The widely used free-handed approach provides less predictable accuracy and depends on the surgeon’s experience and expertise. 40 Deviation in implant placement varies considerably among surgeons with different levels of experience, and when novice surgeons face complex cases, achieving satisfactory results can be challenging. A systematic review 41 based on six clinical studies indicated that with free-handed placement, the deviations of the platform, apex, and angle from the planned position ranged over (1.25 ± 0.62) mm–(2.77 ± 1.54) mm, (2.10 ± 1.00) mm–(2.91 ± 1.52) mm, and 6.90° ± 4.40°–9.92° ± 6.01°, respectively. Static guides can provide accurate guidance only for the initial implant position; it is difficult to precisely control the depth and angle of the osteotomies. 42 The lack of real-time feedback on drill positioning during surgery also limits the information available to the clinician. 42, 43, 44 Besides, surgical guides may inhibit cooling of the drills used for implant bed preparation, which can result in necrosis of overheated bone. Moreover, static guides are of limited use in patients with restricted access, especially for implants placed in the posterior area, and they cannot flexibly accommodate intraoperative changes to the implant plan. With dynamic computer-aided implant placement, the positions of the patient and the drills are tracked in real time and displayed on a computer screen along with the surgical plan, allowing the surgeon to adjust the drilling path if necessary. However, the surgeon may deviate from the plan, or prepare beyond it, because there are no physical constraints, and the surgeon may focus more on the screen than on the surgical site, which can reduce tactile feedback. 45 A meta-analysis found platform, apex, and angular deviations of 0.91 mm (95% CI 0.79–1.03 mm), 1.26 mm (95% CI 1.14–1.38 mm), and 3.25° (95% CI 2.84°–3.66°), respectively, with static computer-aided implant placement, and 1.28 mm (95% CI 0.87–1.69 mm), 1.68 mm (95% CI 1.45–1.90 mm), and 3.79° (95% CI 1.87°–5.70°), respectively, with dynamic computer-aided implant placement. Both methods thus improve accuracy compared with free-handed placement, but neither achieves ideal accuracy. 46 Gwangho et al. 47 note that the key steps of surgery are still completed manually by the surgeon, regardless of whether a static guide or dynamic navigation is used, so human factors (such as hand tremor, fatigue, and unskilled operating technique) still affect the accuracy of implant placement.

Robotic-assisted implant surgery can provide accurate implant placement and help the surgeon control the handpiece to avoid dangerous tool excursions during surgery. 48 Furthermore, compared to manual calibration, registration, and surgical execution, automatic calibration, registration, and drilling with a dental implant robotic system reduces human error, helping avoid deviations caused by surgeon-related factors and thereby enhancing surgical accuracy, safety, success rates, and efficiency while reducing patient trauma. 7 With continuous improvement of the technology and reduction of costs, implant robotics are gradually becoming commercially available. Yomi (Neocis Inc., USA) has been approved by the Food and Drug Administration, while Yakebot (Yakebot Technology Co., Ltd., Beijing, China), Remebot (Baihui Weikang Technology Co., Ltd., Beijing, China), Cobot (Langyue dental surgery robot, Shecheng Co., Ltd., Shanghai, China), Theta (Hangzhou Jianjia Robot Co., Ltd., Hangzhou, China), and Dcarer (Dcarer Medical Technology Co., Ltd., Suzhou, China) have been approved by China’s National Medical Products Administration (NMPA). Dencore (Lancet Robotics Co., Ltd., Hangzhou, China) is in the clinical trial stage in China.

Basic research on dental implant robotic system

Unlike other surgeries performed under general anesthesia, dental implant surgery can be completed under local anesthesia, with the patient awake but unable to remain completely still throughout the procedure. Therefore, research on dental implant robotic systems, as one of the cutting-edge technologies, mainly focuses on acquiring intraoperative feedback information (both tactile and visual), different surgical methods (automatic versus manual drilling), patient position following, and the simulation of surgeons’ tactile sensation.

Architecture of dental implant robotic system

The architecture of dental implant robotics primarily comprises the hardware utilized for surgical data acquisition and surgical execution (Fig. 4 ). Data acquisition involves perceiving, identifying, and understanding the surroundings and the information required for task execution through the encoders, tactile sensors, force sensors, and vision systems. Real-time information obtained also includes the robot’s surrounding environment, object positions, shapes, sizes, surface features, and other relevant information. The perception system assists the robot in comprehending its working environment and facilitates corresponding decision-making as well as actions.

Figure 4. The architecture of dental implant robotics

During the initial stage of research on implant robotics, owing to the lack of sensory systems, fiducial markers and corresponding algorithms were used to calculate the transformation between the robot’s and the model’s coordinate systems; the robot determined its actual position through coordinate conversions. Dutreuil et al. 49 proposed a new method for creating static guides on casts using robots, based on the determined implant position. Subsequently, Boesecke et al. 50 developed a surgical planning method using linear interpolation between start and end points, as well as intermediate points; the surgeon performed the osteotomies by holding the handpiece, with robot guidance based on the preoperatively determined implant position. Sun et al. 51 and McKenzie et al. 52 registered cone-beam computed tomography (CBCT) images, the robot’s coordinate system, and the patient’s position using a coordinate measuring machine, which facilitated the transformation of preoperative implant planning into intraoperative actions.
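
A standard way to compute such a coordinate transformation from paired fiducial measurements is SVD-based rigid registration (the Kabsch/Arun method). The following is a minimal sketch, assuming the same N fiducials have been located in both the model (image) frame and the robot frame; the source does not specify which algorithm these early systems used:

```python
import numpy as np

def rigid_register(model_pts, robot_pts):
    """Paired-point rigid registration: find R, t with robot ≈ R @ model + t.

    model_pts, robot_pts: (N, 3) arrays of corresponding fiducial positions
    measured in the model (image) frame and the robot frame.
    """
    pm, pr = model_pts.mean(axis=0), robot_pts.mean(axis=0)
    H = (model_pts - pm).T @ (robot_pts - pr)   # 3x3 cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))      # guard against a reflection
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = pr - R @ pm
    return R, t
```

A real system would also check the residual fiducial registration error against a safety threshold before any drilling is allowed.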

Neocis has developed a dental implant robot system called Yomi (Neocis Inc.) 53 based on haptic perception, which connects a mechanical joint measurement arm to the patient’s teeth to track their position. The joint encoder provides information on the drill position, while haptic feedback through the handpiece maneuvered by the surgeon constrains the direction and depth of implant placement.

Optical positioning is a commonly used localization method that offers high precision, a wide field of view, and resistance to interference. 54 This makes it capable of providing accurate surgical guidance for robotics. Yu et al. 55 combined image-guided technology with robotic systems. They used a binocular camera to capture two images of the same target, extract pixel positions, and employ triangulation to obtain three-dimensional coordinates, enabling perception of the relative positional relationship between the end-effector and the surrounding environment. Yeotikar et al. 56 suggested mounting a camera on the end-effector of the robotic arm, positioned as close to the drill as possible. By aligning the camera’s center with the drill’s line of sight at a specific height above the lower jaw surface, the camera’s center accurately aligns with the drill’s position in a two-dimensional plane at a fixed height from the lower jaw. This alignment guides the robotic arm in drilling through specific anatomical landmarks in the oral cavity. Yan et al. 57 proposed that “eye-in-hand” optical navigation systems may introduce errors when the handpiece at the end of the robotic arm is changed; additionally, owing to the narrow oral environment, customized markers may fall outside the camera’s field of view when the robotic arm moves to certain positions. 42 To tackle this problem, they designed a dental implant robot system based on optical marker spatial registration and probe-positioning strategies. Zhao et al. constructed a modular implant robotic system based on binocular visual navigation devices operating on visible-light principles in “eye-to-hand” mode, allowing complete observation of the markers and handpiece within the camera’s field of view and thereby ensuring greater flexibility and stability. 38, 58
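
The triangulation step Yu et al. describe can be sketched with the standard linear (DLT) formulation, assuming the two cameras of the binocular pair have been calibrated to known 3x4 projection matrices (the cited work's exact formulation may differ):

```python
import numpy as np

def triangulate(P1, P2, uv1, uv2):
    """Linear (DLT) triangulation of one target point seen in two views.

    P1, P2: 3x4 projection matrices of the left and right cameras.
    uv1, uv2: (u, v) pixel coordinates of the same target in each image.
    Returns the target's 3D coordinates in the rig's reference frame.
    """
    A = np.stack([
        uv1[0] * P1[2] - P1[0],
        uv1[1] * P1[2] - P1[1],
        uv2[0] * P2[2] - P2[0],
        uv2[1] * P2[2] - P2[1],
    ])
    _, _, Vt = np.linalg.svd(A)   # least-squares solution: last right singular vector
    X = Vt[-1]
    return X[:3] / X[3]           # dehomogenize to (x, y, z)
```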

The dental implant robotics execution system comprises hardware such as motors, force sensors, actuators, controllers, and software components to perform tasks and actions during implant surgery. The system receives commands, controls the robot’s movements and behaviors, and executes the necessary tasks and actions. Presently, research on dental implant robotic systems primarily focuses on the mechanical arm structure and drilling methods.

The majority of dental implant robotic systems directly adopt serial-linked industrial robotic arms, building on the successful application of industrial robots with the same arm configuration. 59, 60, 61, 62 These studies not only establish implant robot platforms to validate implant accuracy and assess the influence of implant angles, depths, and diameters on initial stability, but also simulate chewing processes and prepare natural root-shaped osteotomies based on volume decomposition. Presently, most dental implant robots in research employ a single robotic arm for surgery. Lai et al. 62 indicated that the stability of the handpiece during surgery and real-time feedback on patient movement are crucial factors affecting the accuracy of robot-assisted implant surgery; the former requires physical feedback, while the latter necessitates visual feedback. Hence, they employed a dual-arm robotic system in which the main arm was equipped with multi-axis force and torque sensors for performing osteotomies and implant placement, while the auxiliary arm carried an infrared monocular probe for visual positioning, addressing the visual occlusion caused by changes in arm angles during surgery.

The robots mentioned above use handpieces to execute osteotomies and implant placement. However, owing to limitations in patient mouth opening, performing osteotomies and placing implants in the posterior region can be challenging. To overcome these spatial constraints, Yuan et al. 63 proposed a robot system, based on their earlier research on laser-assisted tooth preparation, that uses a non-contact ultra-short pulse laser to prepare osteotomies. Preliminary findings confirmed the feasibility of robotically controlled ultra-short pulse lasers for osteotomies, introducing a novel method for a non-contact dental implant robotic system.

Position following of dental implant robotic system

It can be challenging for patients under local anesthesia to remain completely still during robot-assisted dental implant surgery. 52, 64, 65, 66, 67 Any significant micromovement of the patient can severely affect clinical outcomes, including surgical efficiency, the accuracy of implant placement relative to the planned position, and patient safety. Intraoperative movement may necessitate re-registration for certain dental implant robotic systems. To guarantee safety and accuracy during surgery, the robot must detect any movement of the patient and promptly adjust the robotic arm’s position in real time. Yakebot uses binocular vision to monitor visual markers placed outside the patient’s mouth and at the end of the robotic arm, capturing motion information and calculating relative position errors. The robot control system uses the preoperatively planned positions, visual and force feedback, and robot kinematic models to compute optimal control commands that guide the robotic arm’s micromovements and track the patient’s micromovements during drilling. As the osteotomy is performed to the planned depth, the robotic arm compensates for the patient’s displacement through this position-following function. Yakebot’s visual system continuously monitors the patient’s head movement in real time and issues control commands every 0.008 s, and the robotic arm can follow the patient’s movements with a motion servo within 0.2 s, ensuring precise and timely positioning.
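
A toy version of such a position-following loop is sketched below. The callback functions are hypothetical stand-ins for the vision and motion APIs of such a system, and the deadband is an assumed value; only the 0.008 s command interval comes from the text above:

```python
import time
import numpy as np

CONTROL_PERIOD_S = 0.008   # command interval reported for the visual loop (~125 Hz)

def follow_patient(get_jaw_pos, get_drill_pos, move_drill_toward, planned_offset):
    """Keep the drill at `planned_offset` relative to the tracked jaw marker,
    so patient micromovements are compensated in real time.

    get_jaw_pos / get_drill_pos: return 3-vectors from vision and arm encoders.
    move_drill_toward: issues a servo command toward a 3-vector target.
    """
    while True:   # in practice: until the planned osteotomy depth is reached
        target = get_jaw_pos() + planned_offset   # re-anchor the plan to the jaw
        error = target - get_drill_pos()
        if np.linalg.norm(error) > 1e-4:          # 0.1 mm deadband (assumed)
            move_drill_toward(target)             # arm tracks the patient's motion
        time.sleep(CONTROL_PERIOD_S)
```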

The simulation of surgeons’ tactile sensation in dental implant robotic systems

Robot-assisted dental implant surgery requires the expertise and tactile sense of a surgeon to ensure accurate implantation. Experienced surgeons can perceive bone density through the resistance felt in their hands and adjust the force magnitude or direction accordingly, ensuring drilling proceeds along the planned path. Robotic systems that lack such perception and control may drift toward the side of the bone with lower density, leading to inaccurate positioning relative to the planned implant position. 61, 62 Addressing this challenge, Li et al. 68 established force-deformation compensation curves in the X, Y, and Z directions for the robot’s end-effector based on the visual and force servo systems of the autonomous dental robotic system Yakebot, and formulated a corresponding force-deformation compensation strategy, demonstrating the effectiveness and accuracy of force and visual servo control through in vitro experiments. This mixed control mode, integrating visual and force servo systems, has improved the robot’s implantation accuracy and its ability to handle complex bone structures. Based on the force and visual servo control systems, Chen et al. 69 also explored the relationship between force sensing and the primary stability of implants placed with the Yakebot system in an in vitro study. A significant correlation was found between Yakebot’s force sensing and the insertion torque of the implants, conforming to an interpretable mathematical model that supports predictable initial implant stability after placement.
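
A minimal sketch of such a compensation strategy, assuming the force-deformation curves have been reduced to per-axis linear compliance coefficients (the numbers below are placeholders, not values from the cited study):

```python
import numpy as np

# Hypothetical per-axis compliance (mm of tool deflection per newton of force);
# in the cited work these were identified experimentally as X/Y/Z curves.
COMPLIANCE_MM_PER_N = np.array([0.002, 0.002, 0.004])

def compensated_target(commanded_xyz_mm, measured_force_n):
    """Offset the commanded tool position by the deflection predicted from the
    measured cutting force, so the drill stays on the planned path even when
    denser bone pushes the tool toward the softer side."""
    deflection_mm = COMPLIANCE_MM_PER_N * measured_force_n
    return commanded_xyz_mm + deflection_mm
```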

Heat production during osteotomies is considered one of the leading causes of bone tissue injury, and experienced surgeons can sense possible thermal exposure through tactile feedback. A surgeon's tactile sense alone, however, cannot reliably quantify temperature changes during free-handed implant surgery, nor can it support an effective temperature prediction model. Using the Yakebot robotic system, Zhao et al.70 investigated the correlation between drilling-related mechanical data and heat production, and established a clinically relevant surrogate for intraosseous temperature measurement from signals captured by the force/torque sensor. They also built a real-time temperature prediction model driven by live force-sensor readings. This model aims to prevent the adverse effects of high temperatures on osseointegration, laying the foundation for dental implant robotic systems to autonomously control heat production and avoid bone damage during autonomous robotic implant surgery.
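
As a rough illustration of this kind of surrogate model, the sketch below fits a linear regression from force/torque-derived drilling features to temperature. The features, coefficients, and data are synthetic stand-ins, and the ~47 °C pause threshold in the comments reflects the commonly cited limit for thermal bone injury, not a parameter reported by Zhao et al.

```python
import numpy as np

# Synthetic stand-in data: per-window features from the force/torque sensor
# (mean thrust force in N, mean torque in N*m, feed rate in mm/s) paired with
# thermocouple readings, imitating the kind of dataset collected in vitro.
rng = np.random.default_rng(0)
X = rng.uniform([5.0, 0.1, 0.5], [30.0, 1.5, 2.0], size=(200, 3))
temp = 25.0 + 0.6 * X[:, 0] + 12.0 * X[:, 1] - 3.0 * X[:, 2]   # invented relation
y = temp + rng.normal(0.0, 0.8, 200)                            # measurement noise

# Ordinary least squares with an intercept column.
A = np.hstack([X, np.ones((200, 1))])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)

def predict_temperature(features):
    """Real-time intraosseous temperature estimate from live sensor features."""
    return np.append(features, 1.0) @ coef

# A controller could pause drilling when the estimate nears ~47 °C, the
# commonly cited threshold for thermal bone injury (threshold illustrative).
print(f"{predict_temperature([22.0, 0.9, 1.0]):.1f} °C")
```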

The innovative technologies mentioned above allow dental implant robotic systems to simulate a surgeon's tactile sensation and even surpass the limitations of human experience. These advances promise to address issues that free-handed implant placement struggles to resolve, and they indicate substantial progress and great potential for implant surgery.

Clinical research on dental implant robotic systems

Clinical workflow of dental implant robotic systems

Robot-assisted dental implant surgery consists of three phases: preoperative planning, an intraoperative phase, and a postoperative phase (Fig. 5). For preoperative planning, digital intraoral casts and CBCT data are obtained from the patient and imported into planning software for 3D reconstruction and virtual implant placement. For single or multiple tooth gaps treated with implant robotic systems other than Yakebot,61,62,71,72 a universal registration device (such as a U-shaped tube) must be attached to the patient's edentulous site with silicone impression material before CBCT acquisition so that the scan can be used for registration. The software places the implants virtually according to the prosthetic and biological principles of implant surgery and, taking into account the bone quality at the edentulous site, determines the drilling sequence and the insertion depth, speed, and feed rate of each drill. For single or multiple tooth implants placed with Yakebot, preoperative CBCT imaging with markers is not needed. Instead, surgical accessories with registration holes, brackets for attaching visual markers, and devices for assisting mouth opening and suction are designed within the software (Yakebot Technology Co., Ltd., Beijing, China) and manufactured by 3D printing.
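
Conceptually, the output of this planning phase is a structured description of the implant site and its drill protocol. The sketch below shows one plausible layout; the field names and values are hypothetical and do not correspond to Yakebot's or any vendor's actual format.

```python
from dataclasses import dataclass, field
from typing import List, Tuple

@dataclass
class DrillStep:
    drill_diameter_mm: float
    insertion_depth_mm: float
    spindle_speed_rpm: int
    feed_rate_mm_per_s: float

@dataclass
class ImplantPlan:
    site_fdi: str                                  # tooth position, e.g. "36"
    platform_xyz_mm: Tuple[float, float, float]    # planned platform point (CBCT frame)
    axis_unit_vector: Tuple[float, float, float]   # planned implant axis
    drill_sequence: List[DrillStep] = field(default_factory=list)

# Invented example: a denser site would typically get more steps at lower feed rates.
plan = ImplantPlan(
    site_fdi="36",
    platform_xyz_mm=(12.4, -3.1, 28.7),
    axis_unit_vector=(0.05, -0.02, -0.998),
    drill_sequence=[
        DrillStep(2.0, 10.0, 1200, 1.0),   # pilot drill
        DrillStep(2.8, 10.0, 1000, 0.8),
        DrillStep(3.5, 10.0, 800, 0.6),    # final diameter
    ],
)
```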

Figure 5. Clinical workflow of robotic-assisted dental implant placement

The intraoperative phase begins with registration and calibration. For Yakebot, the end-effector marker is mounted on the robotic arm and its spatial position is recorded by the optical tracker. The calibration plate with positioning points is then assembled onto the implant handpiece for drill-tip calibration. Next, the registration probe is inserted into the registration holes of the jaw positioning plate in turn to register the jaw marker to the jaw in space. Robot-assisted dental implant surgery usually does not require flap elevation,73,74 although flaps may be needed when bone grafting is required for insufficient bone volume at a single edentulous site, or when alveolar ridge preparation is required in complete edentulism. For full-arch robot-assisted implant surgery, a personalized template with a positioning marker must be fixed with metallic pins before an intraoperative CBCT examination; this facilitates registration of the robot and the jaws in visual space and allows the surgical robot to track the patient's motion. Safe withdrawal of the robot from the surgical site is an essential principle of robot-assisted implant surgery. With most robots, such as Yomi, the surgeon holds the handpiece to control and supervise the robot's movement in real time and can stop the robotic arm in case of an accident. With Yakebot, the entire surgery is performed under the surgeon's supervision, and immediate instructions can be sent in response to emergencies via a foot pedal. In addition, recording the instrument's path into and out of the patient's mouth ensures that the instruments do not damage the surrounding tissues. The postoperative phase comprises CBCT acquisition and accuracy measurement.
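
The probe-based registration step described above is, at its core, a point-based rigid registration problem: fiducial coordinates known in the plan (CBCT) frame are matched to the same points touched with the tracked probe. Below is a minimal sketch using the standard Kabsch/SVD solution; the fiducial coordinates are invented, and nothing here is specific to any particular robot's implementation.

```python
import numpy as np

def rigid_register(probe_pts, plan_pts):
    """Least-squares rigid transform (Kabsch/SVD, no scaling) mapping plan-frame
    fiducials onto the same points measured with the tracked probe."""
    p_mean, q_mean = probe_pts.mean(axis=0), plan_pts.mean(axis=0)
    H = (plan_pts - q_mean).T @ (probe_pts - p_mean)             # cross-covariance
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])  # forbid reflection
    R = Vt.T @ D @ U.T
    t = p_mean - R @ q_mean
    return R, t   # probe ~= R @ plan + t

# Invented fiducials: registration-hole coordinates known in the plan (CBCT)
# frame, then "touched" with the probe after a rotation plus translation.
plan = np.array([[0.0, 0, 0], [10, 0, 0], [0, 8, 0], [0, 0, 6]])
R_true = np.array([[0.0, -1, 0], [1, 0, 0], [0, 0, 1]])          # 90 deg about z
t_true = np.array([5.0, 2.0, -1.0])
probe = plan @ R_true.T + t_true

R, t = rigid_register(probe, plan)
print(np.allclose(R, R_true), np.round(t, 3))                    # True [ 5.  2. -1.]
```

In practice, the residual fiducial registration error would be checked before drilling, with re-registration triggered if it exceeds a tolerance.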

In clinical surgical practice, robots with varying levels of autonomy perform implant surgeries differently. According to the autonomy levels classified for medical robots by Yang et al.,6,8,33 commercial dental implant robotic systems (Table 2) currently operate at the level of robot assistance or task autonomy.

Robot-assistance dental implant robotic systems provide haptic,75 visual, or combined visual and tactile guidance during dental implant surgery.46,76,77 Throughout the procedure, the surgeon maneuvers a handpiece attached to the robotic guidance arm and applies light force to prepare the osteotomy.62 The robotic arm constrains the drill to the 3D space defined by the virtual plan, while allowing the surgeon to move the end of the arm horizontally or adjust its movement speed. However, during immediate implant placement or full-arch implant surgery, both surgeon and robot may fail to perceive poor bone quality that should prompt adjustments at the time of implant placement, which can leave the final implant position incorrect relative to the plan.
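
A common way to realize this kind of constraint is a "virtual fixture" that admits motion along the planned drill axis while attenuating lateral motion. The sketch below is a simplified, hypothetical version of that idea rather than the control law of any commercial system.

```python
import numpy as np

def constrain_to_axis(commanded_step, axis_unit, lateral_gain=0.0):
    """Virtual-fixture-style constraint: the surgeon may advance or retract
    along the planned drill axis, while lateral motion is attenuated
    (lateral_gain = 0 removes it entirely)."""
    axial = np.dot(commanded_step, axis_unit) * axis_unit
    lateral = commanded_step - axial
    return axial + lateral_gain * lateral

axis = np.array([0.0, 0.0, -1.0])              # planned osteotomy axis (unit vector)
hand_input = np.array([0.3, -0.1, -0.8])       # mm, surgeon's hand-guided motion
print(constrain_to_axis(hand_input, axis))     # -> [ 0.  0. -0.8] (axial part only)
```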

Task-autonomous dental implant robotic systems can autonomously perform parts of the procedure, such as moving the handpiece to the planned position and preparing the implant bed at a predetermined speed according to the preoperative plan; the surgeon sends instructions, monitors the robot's operation, and intervenes as needed. For example, the Remebot77,78 requires the surgeon to drag the robotic arm into and out of the mouth during surgery, after which the robot automatically performs osteotomies or places implants at the planned positions under the surgeon's surveillance. The autonomous dental implant robot Yakebot73,79,80 can accurately reach the implant site and complete operations such as implant bed preparation and placement. It is controlled by the surgeon via foot pedals and automatically stops drilling after reaching the termination position before returning to its initial position; throughout the procedure, the surgeon only needs to issue commands through the foot pedals.
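
The pedal-gated behavior described for Yakebot can be pictured as a small state machine in which releasing the pedal always halts motion. The sketch below is a deliberately simplified, hypothetical model of this interaction pattern, not vendor firmware.

```python
from enum import Enum, auto

class DrillState(Enum):
    IDLE = auto()
    APPROACHING = auto()   # moving autonomously to the implant site
    DRILLING = auto()
    RETRACTING = auto()    # returning to the initial position

def step(state, pedal_pressed, at_target_depth):
    """One transition of a pedal-gated task-autonomous drilling cycle.
    Releasing the pedal always halts motion (returns to IDLE)."""
    if not pedal_pressed:
        return DrillState.IDLE
    if state == DrillState.IDLE:
        return DrillState.APPROACHING
    if state == DrillState.APPROACHING:
        return DrillState.DRILLING
    if state == DrillState.DRILLING:
        return DrillState.RETRACTING if at_target_depth else DrillState.DRILLING
    return DrillState.IDLE  # RETRACTING finishes back at IDLE

s = DrillState.IDLE
for pedal, done in [(True, False), (True, False), (True, False), (True, True), (True, False)]:
    s = step(s, pedal, done)
    print(s)
```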

Clinical performance of robot-assisted implant surgery

Figure 6 summarizes the accuracy reported by in vitro, in vivo, and clinical studies of robot-assisted implant surgery.20,46,48,55,62,64,67,68,69,70,71,72,75,76,77,78,79,80,81,82,83,84,85,86,87,88,89 Platform and apex deviation values are fairly consistent across studies, but angular deviations vary substantially, which may reflect differences among robotic systems in how they perceive and respond to variations in bone quality. Future development should therefore focus on enhancing the autonomy of implant robots and improving their ability to recognize and respond to complex bone structures.
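
For reference, the three deviation metrics plotted in these studies are commonly computed from the planned and actual platform and apex points as follows; the example coordinates below are invented.

```python
import numpy as np

def implant_deviations(planned_platform, planned_apex, actual_platform, actual_apex):
    """Standard accuracy metrics: 3D platform deviation, 3D apex deviation,
    and the angle between the planned and actual implant axes."""
    platform_dev = np.linalg.norm(actual_platform - planned_platform)   # mm
    apex_dev = np.linalg.norm(actual_apex - planned_apex)               # mm

    planned_axis = planned_apex - planned_platform
    actual_axis = actual_apex - actual_platform
    cos_angle = np.dot(planned_axis, actual_axis) / (
        np.linalg.norm(planned_axis) * np.linalg.norm(actual_axis))
    angular_dev = np.degrees(np.arccos(np.clip(cos_angle, -1.0, 1.0)))  # degrees
    return platform_dev, apex_dev, angular_dev

# Invented example: a 0.5 mm platform offset and a slightly tilted axis.
print(implant_deviations(
    planned_platform=np.array([0.0, 0.0, 0.0]),
    planned_apex=np.array([0.0, 0.0, -10.0]),
    actual_platform=np.array([0.4, 0.3, 0.0]),
    actual_apex=np.array([0.7, 0.5, -10.0]),
))
```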

Figure 6. Accuracy reported in studies on robotic-assisted implant placement

Xu et al.77 conducted a phantom study comparing implant placement accuracy across three levels of dental implant robots: a passive robot (Dcarer, level 1), a semi-active robot (Remebot, level 2), and an active robot (Yakebot, level 2) (Fig. 7). The active robot had the lowest platform and apex deviations between the planned and actual implant positions, while the semi-active robot had the lowest angular deviation. Chen et al.46 and Jia et al.79 conducted clinical trials of robotic implant surgery in partially edentulous patients using a semi-active dental implant robotic system (level 1) and an autonomous dental implant robot (level 2), respectively. The deviations of the implant platform, apex, and angle were (0.53 ± 0.23) mm/(0.43 ± 0.18) mm, (0.53 ± 0.24) mm/(0.56 ± 0.18) mm, and 2.81° ± 1.13°/1.48° ± 0.59°, respectively. These results consistently confirm that robotic systems can achieve higher implant accuracy than static guidance and that accuracy does not correlate significantly with implant site (such as anterior versus posterior). The platform and angular deviations of the autonomous robot were smaller than those of the semi-active robotic system. Li et al.73 reported using the autonomous dental implant robot (level 2) to place two adjacent implants with immediate postoperative restoration; the interim prosthesis, fabricated before implant placement, was seated without any adjustment, and no adverse events occurred during the operation.

Figure 7. Comparison of accuracy of dental implant robots with different levels of autonomy (phantom experiments) (*P < 0.05, **P < 0.01, ***P < 0.001)

Bolding et al.,53 Li et al.,20 Jia et al.,79 and Xie et al.90 used dental implant robots in clinical trials of full-arch implant surgery, with five or six implants placed per jaw. The deviations of the implant platform, apex, and angle are shown in Fig. 8. The haptic robot (level 1) used by Bolding et al.53 showed larger deviations than the semi-active (level 1) or active (level 2) robots in the other studies; because its handpiece must be maneuvered by the surgeon, human error, such as that caused by surgeon fatigue, cannot be eliminated. Owing to the parallel implant placement paths shared by the various implant abutments, prefabricated interim dentures could be seated smoothly, and some patients wore interim complete dentures immediately after surgery. These results indicate that robotic systems can accurately locate and place implants during surgery.

Figure 8. Comparison of accuracy in robotic-assisted full-arch implant placement

Because there are relatively few clinical studies of implant robots, Takács et al.91 conducted a meta-analysis of free-handed, static-guided, dynamically navigated, and robot-assisted implant placement under in vitro conditions, as shown in Fig. 9. Robot-assisted placement proved more accurate than free-handed, static-guided, and dynamically navigated placement. In vitro studies, however, cannot fully reproduce patients' oral conditions and bone quality. Recent clinical studies89,92,93 have likewise shown lower deviations for robot-assisted placement than for static-guided and dynamically navigated placement. Common sources of deviation in static-guided and dynamically navigated placement include drill deflection caused by hand tremor in dense bone, the surgeon's experience, and other human factors. Larger clinical studies will be needed to evaluate the differences between robotic and conventional approaches and to guide the further development and refinement of robotic techniques.

Figure 9. Comparison of accuracy of free-handed, static, dynamic, and robotic-assisted implant placement (FHIP free-hand implant placement, SCAIP static computer-aided implant placement, DCAIP dynamic computer-aided implant placement, RAIP robot-assisted implant placement)

Regarding long-term follow-up of robotic systems in dental implant procedures, none of the comparative studies exceeded one year. A 1-year prospective clinical study by Xie et al.90 showed that peri-implant tissues remained stable at the 1-year visit after robot-assisted full-arch surgery. There is still little evidence on clinical outcomes, especially patient-reported outcomes, and future research should include more detailed clinical assessment.

Current issues with dental implant robotic systems

Need for further simplification of robotic surgical procedures

Although robot-assisted dental implant surgery can improve accuracy and treatment quality,94 it involves complex registration, calibration, and verification procedures that prolong surgery. These tedious processes may introduce new errors61 and lower efficiency, especially in single-tooth implant placement,62 extending visit times and reducing patient satisfaction. In addition, surgeons must undergo extra training to become familiar with the robotic system.87

Need for improved flexibility of dental implant robotic systems

During implantation, the drill tips at the end of the robotic arm cannot be tilted, which makes robots harder to use in posterior sections with limited occlusal space.61,62 In addition, currently available marker systems require patients to wear additional devices to hold the markers in place; if the markers are contaminated or obstructed by blood, the visual system may fail to detect them, limiting surgical maneuverability. During immediate implant placement, or when bone quality at the implant site is poor, the drill tips may deviate toward the tooth sockets or areas of lower bone density, seriously affecting surgical precision.

To date, only one study has developed a force-deformation compensation strategy for these robots,68 and clinical validation is still lacking. Moreover, the dental implant robotic system, like the dental robots developed for prosthodontics, endodontics, and orthodontics, is currently a single-function device; multi-functional robots capable of performing various dental treatments are still needed.

Difficulties in promoting the use of dental implant robotic systems

Despite the enormous potential of robotic systems in the medical field, similar to the development of computer-aided design/computer-aided manufacturing technology, introducing and applying this technology faces multiple challenges in the initial stages. The high cost of robotic equipment may limit its promotion and application in certain regions or medical institutions. Surgeons require specialized technical training before operating robotic systems, which translates to additional training costs and time investment. 95

Prospects for the use of dental implant robotic systems

Medical robots possess high-precision sensing and positioning capabilities that enable precise operations at small scales, and they are equipped with safety mechanisms and stability controls that protect patients and reduce risk. As the technology evolves, hardware and algorithms are continuously updated, steadily improving performance. Today, medical robots are widely used in surgery, diagnosis, and rehabilitation.7 They enable precise, minimally invasive operations, reducing patient trauma and pain, shortening hospitalization, speeding recovery, and lowering the need for re-operations and blood transfusions.96 Medical robots can also reduce radiation exposure for both surgeons and patients. By leveraging machine learning and artificial intelligence, robots can provide personalized, intelligent treatment plans and recommendations based on large amounts of data, improving diagnostic efficiency. Robots with remote operation capabilities can enable remote surgeries or consultations across regions, broadening access to medical services. Moreover, robots can work continuously, ensuring consistent medical quality while reducing the neck and back pain97 and the hand and wrist numbness98 experienced by surgeons. They also reduce mental and physical stress, improving surgeons' quality of life and extending their careers.

From the da Vinci surgical system to dental implant robotic systems, these innovative technologies are driving unprecedented change in medicine. Dental implant robotic systems continue to refine their software modules and optimize operating procedures, becoming more intelligent, more flexible, and easier to learn and use. In the future, more extensive clinical trials, especially multi-center trials, will be needed to observe and evaluate the long-term outcomes of robot-assisted implant surgery. Measured outcomes should include well-defined clinical outcomes (such as pathophysiology99), technical outcomes (including those derived from robotic kinematic and haptic sensors100), patient-reported outcomes (such as quality-of-life indicators and overall satisfaction with treatment99), and, where relevant, wider outcomes that reflect potential robotic disruption (ergonomic benefits and impacts on access to surgery100). The evaluation of dental implant robots also requires analysis of learning curves. Large prospective cohorts provide the first opportunity to capture real-world learning curves, which can inform training mechanisms that shorten those curves and minimize any negative impact on patients.99,100

As a pioneering endeavor, the dental implant robotic system provides an important exploration and paradigm for the application of other dental robotic systems. As technology continues to advance, robotics and artificial intelligence will provide more precise diagnostic and treatment options, more intelligent medical decision support systems, and more flexible and precise surgical procedures. These technologies will continue to drive advances in medicine and healthcare, opening new possibilities for clinical practice.

With these technological advances, medical robotics is ushering in a new era for medicine. Innovative medical robots can perform surgical procedures, aid rehabilitation, support diagnosis, and automate laboratory workflows, among other functions. In dentistry, the most widely used robotic system at present is the dental implant robotic system. Implant robotic systems offer a flexible approach to precise planning and to visual and haptic guidance of the surgical procedure. Various clinical trials have confirmed the high accuracy achieved by robot-assisted implant surgery and its contribution toward long-term implant success. However, there is still considerable room for improvement in simplifying robotic surgical procedures, increasing their flexibility, and providing systematic education. By leveraging machine learning and artificial intelligence, future clinical practice will gain more precise diagnostic and treatment options, intelligent medical decision support systems, and flexible, precise surgical procedures.

Fukuda, T., Dario, P. & Yang, G. Z. Humanoid robotics—history, current state of the art, and challenges. Sci. Robot. 2 , eaar4043 (2017).

Dong, J. What you should know about the history of robotics. Robot Ind. 1 , 108–114 (2015).

International Organization for Standardization. Robots and robotic devices—vocabulary. ISO 8373:2021 (2021).

Liu, H. H., Li, L. J., Shi, B., Xu, C. W. & Luo, E. Robotic surgical systems in maxillofacial surgery: a review. Int. J. Oral. Sci. 9 , 63–73 (2017).

Kwoh, Y. S., Hou, J., Jonckheere, E. A. & Hayati, S. A robot with improved absolute positioning accuracy for CT guided stereotactic brain surgery. IEEE Trans. Biomed. Eng. 35 , 153–160 (1988).

Troccaz, J., Dagnino, G. & Yang, G. Z. Frontiers of medical robotics: from concept to systems to clinical translation. Annu. Rev. Biomed. Eng. 21 , 193–218 (2019).

Dupont, P. E. et al. A decade retrospective of medical robotics research from 2010 to 2020. Sci. Robot. 6 , eabi8017 (2021).

Yang, G. Z. et al. Medical robotics—regulatory, ethical, and legal considerations for increasing levels of autonomy. Sci. Robot. 2 , eaam8638 (2017).

Yip, M. et al. Artificial intelligence meets medical robotics. Science 381 , 141–146 (2023).

Wang, T. M. et al. Medical Surgical Robotics. China Science Publishing & Media Ltd. http://find.nlc.cn/search/showDocDetails?docId=-9060319075756851951&dataSource=ucs01 (2013).

Liu, Y. et al. Fully automatic AI segmentation of dental implant surgery related tissues based on cone beam computed tomography images. Int. J. Oral Sci. (2024) (accepted for publication).

Alemzadeh, K. & Raabe, D. Prototyping artificial jaws for the robotic dental testing simulator. Proc. Inst. Mech. Eng. Part H 222 , 1209–1220 (2008).

Kazanzides, P. et al. Surgical and interventional robotics: core concepts, technology, and design. IEEE Robot. Autom. Mag. 15 , 122–130 (2008).

Khan, K., Dobbs, T., Swan, M. C., Weinstein, G. S. & Goodacre, T. E. Trans-oral robotic cleft surgery (TORCS) for palate and posterior pharyngeal wall reconstruction: a feasibility study. J. Plast. Reconstr. Aesthet. Surg. 69 , 97–100 (2016).

Nadjmi, N. Transoral robotic cleft palate surgery. Cleft Palate Craniofac. J. 53 , 326–331 (2016).

Vicini, C. et al. Transoral robotic tongue base resection in obstructive sleep apnoea-hypopnoea syndrome: a preliminary report. ORL J. Otorhinolaryngol. Relat. Spec. 72 , 22–27 (2010).

Weinstein, G. S. et al. Transoral robotic surgery alone for oropharyngeal cancer: an analysis of local control. Arch. Otolaryngol. Head. Neck Surg. 138 , 628–634 (2012).

Kayhan, F. T., Kaya, H. & Yazici, Z. M. Transoral robotic surgery for tongue-base adenoid cystic carcinoma. J. Oral. Maxillofac. Surg. 69 , 2904–2908 (2011).

Olivetto, M., Bettoni, J., Testelin, S. & Lefranc, M. Zygomatic implant placement using a robot-assisted flapless protocol: proof of concept. Int. J. Oral. Maxillofac. Surg. 52 , 710–715 (2023).

Li, C. et al. Autonomous robotic surgery for zygomatic implant placement and immediately loaded implant-supported full-arch prosthesis: a preliminary research. Int. J. Implant. Dent. 9 , 12 (2023).

Saleh, M. A., Hanapiah, F. A. & Hashim, H. Robot applications for autism: a comprehensive review. Disabil. Rehabil. Assist. Technol. 16 , 580–602 (2021).

Chen, X. P. Advancement and challenges of medical robots from an interdisciplinary viewpoint. Chin. Bull. Life Sci. 34 , 965–973 (2022).

Winchester, P. et al. Changes in supraspinal activation patterns following robotic locomotor therapy in motor-incomplete spinal cord injury. Neurorehabil. Neural Repair 19 , 313–324 (2005).

Alashram, A. R., Annino, G. & Padua, E. Robot-assisted gait training in individuals with spinal cord injury: a systematic review for the clinical effectiveness of Lokomat. J. Clin. Neurosci. 91 , 260–269 (2021).

Topping, M. An overview of the development of Handy 1, a rehabilitation robot to assist the severely disabled. Artif. Life Robot. 4 , 188–192 (2000).

Meng, F., Peng, X. Y. & Xu, Y. N. Analysis of and research on the development of lower limb wearable exoskeleton. J. Mech. Transm. 46 , 163–169 (2022).

Ezaki, S. et al. Analysis of gait motion changes by intervention using robot suit hybrid assistive limb (HAL) in myelopathy patients after decompression surgery for ossification of posterior longitudinal ligament. Front. Neurorobot. 15 , 650118 (2021).

Tegally, H., San, J. E., Giandhari, J. & de Oliveira, T. Unlocking the efficiency of genomics laboratories with robotic liquid-handling. BMC Genomics 21 , 729 (2020).

Rupp, N., Peschke, K., Koppl, M., Drissner, D. & Zuchner, T. Establishment of low-cost laboratory automation processes using AutoIt and 4-axis robots. SLAS Technol. 27 , 312–318 (2022).

Wu, Y. H., Fassert, C. & Rigaud, A. S. Designing robots for the elderly: appearance issue and beyond. Arch. Gerontol. Geriatr. 54 , 121–126 (2012).

Liu, Y. et al. Boosting framework via clinical monitoring data to predict the depth of anesthesia. Technol. Health Care 30 , 493–500 (2022).

Yang, G. Z. et al. Combating COVID-19-The role of robotics in managing public health and infectious diseases. Sci. Robot. 5 , eabb5589 (2020).

Gao, A. et al. Progress in robotics for combating infectious diseases. Sci. Robot. 6 , eabf1462 (2021).

Yu, H. et al. Management of systemic risk factors ahead of dental implant therapy: a beard well lathered is half shaved. J. Leukoc. Biol. 110 , 591–604 (2021).

Cheng, L. et al. [A review of peri-implant microbiology]. Hua XI Kou Qiang Yi Xue Za Zhi 37 , 7–12 (2019).

Patel, R. & Clarkson, E. Implant surgery update for the general practitioner: dealing with common postimplant surgery complications. Dent. Clin. North Am. 65 , 125–134 (2021).

Herrera, D. et al. Prevention and treatment of peri-implant diseases — the EFP S3 level clinical practice guideline. J. Clin. Periodontol. 50 , 4–76 (2023).

Wu, Q. Research on the creation and application of the spatial mapping devices of the dental implant robot system. Graduate thesis, Fourth Military Medical University (2016).

Ruff, C., Richards, R., Ponniah, A., Witherow, H., Evans, R. & Dunaway, D. Computed maxillofacial image in surgical navigation. Int. J. Comput. Assist. Radiol. Surg. 2 , 412–418 (2007).

Tal, H. & Moses, O. A comparison of panoramic radiography with computed tomography in the planning of implant surgery. Dentomaxillofac. Radiol. 20 , 40–42 (1991).

Tattan, M., Chambrone, L., Gonzalez-Martin, O. & Avila-Ortiz, G. Static computer-aided, partially guided, and free-handed implant placement: a systematic review and meta-analysis of randomized controlled trials. Clin. Oral. Implant. Res. 31 , 889–916 (2020).

Shen, P. et al. Accuracy evaluation of computer-designed surgical guide template in oral implantology. J. Cranio Maxillofac. Surg. 43 , 2189–2194 (2015).

Vercruyssen, M., Fortin, T., Widmann, G., Jacobs, R. & Quirynen, M. Different techniques of static/dynamic guided implant surgery: modalities and indications. Periodontology 2000 66 , 214–227 (2014).

Zhao, Y. Clinical study of an autonomous dental implant robot. In: 2021 Compendium of Papers from the Sixth National Oral and Maxillofacial Prosthodontics Annual Meeting of the Oral and Maxillofacial Prosthodontics Committee of the Chinese Dental Association. 7–8. https://doi.org/10.26914/c.cnkihy.2021.063176 (2021).

Kivovics, M., Takacs, A., Penzes, D., Nemeth, O. & Mijiritsky, E. Accuracy of dental implant placement using augmented reality-based navigation, static computer assisted implant surgery, and the free-hand method: an in vitro study. J. Dent. 119 , 104070 (2022).

Chen, W. et al. Accuracy of dental implant placement with a robotic system in partially edentulous patients: a prospective, single-arm clinical trial. Clin. Oral. Implant. Res. 34 , 707–718 (2023).

Gwangho, K., Hojin, S., Sungbeen, I., Dongwan, K. & Sanghwa, J. A study on simulator of human-robot cooperative manipulator for dental implant surgery. 2009 IEEE International Symposium on Industrial Electronics, Seoul, Korea (South) . 2159-2164 https://doi.org/10.1109/ISIE.2009.5222561 (2009).

Alqutaibi, A. Y., Hamadallah, H. H., Abu, Z. B., Aloufi, A. M. & Tarawah, R. A. Applications of robots in implant dentistry: a scoping review. J. Prosthet. Dent. S0022-3913(23)00770-9, Epub ahead of print, https://doi.org/10.1016/j.prosdent.2023.11.019 (2023).

Dutreuil, J. et al. Computer Assisted Dental Implantology: A New Method and a Clinical Validation. In: Niessen, W. J. & Viergever, M. A. (eds) Medical Image Computing and Computer-Assisted Intervention – MICCAI 2001. Lecture Notes in Computer Science, vol. 2208. https://doi.org/10.1007/3-540-45468-3_46 (2001).

Boesecke, R. et al. Robot Assistant for Dental Implantology. (Springer, Berlin, Heidelberg, 2001).

Sun, X. et al. Automated dental implantation using image-guided robotics: registration results. Int. J. Comput. Assist. Radiol. Surg. 6 , 627–634 (2011).

Sun, X., Yoon, Y., Li, J. & Mckenzie, F. D. Automated image-guided surgery for common and complex dental implants. J. Med. Eng. Technol. 38 , 251–259 (2014).

Bolding, S. L. & Reebye, U. N. Accuracy of haptic robotic guidance of dental implant surgery for completely edentulous arches. J. Prosthet. Dent. 128 , 639–647 (2022).

Zhou, G. et al. Intraoperative localization of small pulmonary nodules to assist surgical resection: a novel approach using a surgical navigation puncture robot system. Thorac. Cancer 11 , 72–81 (2020).

K, Y. et al. Stereo vision based robot navigation system using modulated potential field for implant surgery. IEEE International Conference on Industrial Technology 493–498 https://doi.org/10.1109/ICIT.2015.7125147 (2015).

S, Y., A, M. P. & Y, V. D. R. Automation of end effector guidance of robotic arm for dental implantation using computer vision. IEEE Distributed Computing, VLSI, Electrical Circuits and Robotics 84–89. https://doi.org/10.1109/DISCOVER.2016.7806263 (2016).

Yan, B. et al. Optics-guided Robotic System for Dental Implant Surgery. Chin. J. Mech. Eng. 35 , 55 (2022).

Xie, R. The study on accuracy of the dental implantology robotic system. Graduate thesis, Fourth Military Medical University (2016).

Wilmes, B. & Drescher, D. Impact of insertion depth and predrilling diameter on primary stability of orthodontic mini-implants. Angle Orthod. 79 , 609–614 (2009).

Wilmes, B., Su, Y. Y. & Drescher, D. Insertion angle impact on primary stability of orthodontic mini-implants. Angle Orthod. 78 , 1065–1070 (2008).

Shi, J. Y. et al. Improved positional accuracy of dental implant placement using a haptic and machine-vision-controlled collaborative surgery robot: a pilot randomized controlled trial. J. Clin. Periodontol. 51 , 24–32 (2024).

Qiao, S. C., Wu, X. Y., Shi, J. Y., Tonetti, M. S. & Lai, H. C. Accuracy and safety of a haptic operated and machine vision controlled collaborative robot for dental implant placement: a translational study. Clin. Oral. Implant. Res. 34 , 839–849 (2023).

Yuan, F. S. et al. Preliminary study on the automatic preparation of dental implant socket controlled by micro-robot. Zhonghua Kou Qiang Yi Xue Za Zhi 53 , 524–528 (2018).

Kan, T. S. et al. Evaluation of a custom-designed human-robot collaboration control system for dental implant robot. Int. J. Med. Robot. Comput. Assist. Surg. 18 , e2346 (2022).

Cheng, K. J. et al. Accuracy of dental implant surgery with robotic position feedback and registration algorithm: an in-vitro study. Comput. Biol. Med. 129 , 104153 (2021).

Feng, Y. et al. An image-guided hybrid robot system for dental implant surgery. Int. J. Comput. Assist. Radiol. Surg. 17 , 15–26 (2022).

Tao, B. et al. The accuracy of a novel image-guided hybrid robotic system for dental implant placement: an in vitro study. Int. J. Med. Robot. Comput. Assist. Surg. 19 , e2452 (2023).

Li, Z. W. The study on accuracy of the dental implantology robotic system. Graduate thesis, Air Force Medical University (2021).

Chen, D., Chen, J., Wu, X., Chen, Z. & Liu, Q. Prediction of primary stability via the force feedback of an autonomous dental implant robot. J. Prosthet. Dent. S0022-3913(23)00755-2, Epub ahead of print, https://doi.org/10.1016/j.prosdent.2023.11.008 (2023).

Zhao, R. et al. Correlation between intraosseous thermal change and drilling impulse data during osteotomy within autonomous dental implant robotic system: an in vitro study. Clin. Oral Implant. Res. 35 , 258–267 (2023).

Yang, S. et al. Accuracy of autonomous robotic surgery for single-tooth implant placement: a case series. J. Dent. 132 , 104451 (2023).

Rawal, S., Tillery, D. J. & Brewer, P. Robotic-assisted prosthetically driven planning and immediate placement of a dental implant. Compend. Contin. Educ. Dent. 41 , 26–30 (2020).

Li, Z., Xie, R., Bai, S. & Zhao, Y. Implant placement with an autonomous dental implant robot: a clinical report. J. Prosthet. Dent. S0022-3913(23)00124-5, Epub ahead of print, https://doi.org/10.1016/j.prosdent.2023.02.014 (2023).

Talib, H. S., Wilkins, G. N. & Turkyilmaz, I. Flapless dental implant placement using a recently developed haptic robotic system. Br. J. Oral. Maxillofac. Surg. 60 , 1273–1275 (2022).

Ali, M. Flapless dental implant surgery enabled by haptic robotic guidance: a case report. Clin. Implant Dent. Relat. Res., Epub ahead of print, https://doi.org/10.1111/cid.13279 (2023).

Chen, J. et al. Comparison the accuracy of a novel implant robot surgery and dynamic navigation system in dental implant surgery: an in vitro pilot study. BMC Oral. Health 23 , 179 (2023).

Xu, Z. et al. Accuracy and efficiency of robotic dental implant surgery with different human-robot interactions: an in vitro study. J. Dent. 137 , 104642 (2023).

Yang, S., Chen, J., Li, A., Li, P. & Xu, S. Autonomous robotic surgery for immediately loaded implant-supported maxillary full-arch prosthesis: a case report. J. Clin. Med. 11 , 6594 (2022).

Jia, S., Wang, G., Zhao, Y. & Wang, X. Accuracy of an autonomous dental implant robotic system versus static guide-assisted implant surgery: a retrospective clinical study. J. Prosthet. Dent. S0022-3913(23)00284-6, Epub ahead of print, https://doi.org/10.1016/j.prosdent.2023.04.027 (2023).

Bai, S. Z. et al. Animal experiment on the accuracy of the Autonomous Dental Implant Robotic System. Zhonghua Kou Qiang Yi Xue Za Zhi 56 , 170–174 (2021).

Zhao, Y. et al. Effect of the number and distribution of fiducial markers on the accuracy of robot-guided implant surgery in edentulous mandibular arches: an in vitro study. J. Dent. 134 , 104529 (2023).

Mozer, P. S. Accuracy and deviation analysis of static and robotic guided implant surgery: a case study. Int. J. Oral. Maxillofac. Implants 35 , e86–e90 (2020).

Chen, J. et al. Accuracy of immediate dental implant placement with task-autonomous robotic system and navigation system: an in vitro study. Clin. Oral Implant. Res., Epub ahead of print, https://doi.org/10.1111/clr.14104 (2023).

Cao, Z. et al. Pilot study of a surgical robot system for zygomatic implant placement. Med. Eng. Phys. 75 , 72–78 (2020).

Zhang, K., Yu, M. L., C, C. & Xu, B. H. Preliminary research on the accuracy of implant surgery assisted by implant surgery robot. China Med. Device Inf. 27 , 25–28 (2021).

Tao, B. et al. Accuracy of dental implant surgery using dynamic navigation and robotic systems: an in vitro study. J. Dent. 123 , 104170 (2022).

Ding, Y. et al. Accuracy of a novel semi-autonomous robotic-assisted surgery system for single implant placement: a case series. J. Dent. 139 , 104766 (2023).

Li, P. et al. Accuracy of autonomous robotic surgery for dental implant placement in fully edentulous patients: a retrospective case series study. Clin. Oral. Implant. Res. 34 , 1428–1437 (2023).

Wang, W. et al. Accuracy of the Yakebot dental implant robotic system versus fully guided static computer-assisted implant surgery template in edentulous jaw implantation: a preliminary clinical study. Clin. Implant Dent. Relat. Res., Epub ahead of print, https://doi.org/10.1111/cid.13278 (2023).

Xie, R. et al. Clinical evaluation of autonomous robotic-assisted full-arch implant surgery: a 1-year prospective clinical study. Clin. Oral Implant. Res., Epub ahead of print, https://doi.org/10.1111/clr.14243 (2024).

Takacs, A. et al. Advancing accuracy in guided implant placement: a comprehensive meta-analysis: meta-analysis evaluation of the accuracy of available implant placement methods. J. Dent. 139 , 104748 (2023).

He, J. et al. In vitro and in vivo accuracy of autonomous robotic vs. fully guided static computer-assisted implant surgery. Clin. Implant Dent. Relat. Res., Epub ahead of print, https://doi.org/10.1111/cid.13302 (2024).

Zhang, S. et al. Accuracy of implant placement via dynamic navigation and autonomous robotic computer-assisted implant surgery methods: a retrospective study. Clin. Oral. Implant. Res. 35 , 220–229 (2024).

Shi, B. & Huang, H. Computational technology for nasal cartilage-related clinical research and application. Int. J. Oral. Sci. 12 , 21 (2020).

Zhou, L., Teng, W., Li, X. & Su, Y. Accuracy of an optical robotic computer-aided implant system and the trueness of virtual techniques for measuring robot accuracy evaluated with a coordinate measuring machine in vitro. J. Prosthet. Dent. S0022-3913(23)00751-5, Epub ahead of print, https://doi.org/10.1016/j.prosdent.2023.11.004 (2023).

Forsmark, A. et al. Health economic analysis of open and robot-assisted laparoscopic surgery for prostate cancer within the prospective multicentre LAPPRO trial. Eur. Urol. 74 , 816–824 (2018).

Rokhshad, R., Keyhan, S. O. & Yousefi, P. Artificial intelligence applications and ethical challenges in oral and maxillo-facial cosmetic surgery: a narrative review. Maxillofac. Plast. Reconstr. Surg. 45 , 14 (2023).

Gofrit, O. N. et al. Surgeons’ perceptions and injuries during and after urologic laparoscopic surgery. Urology 71 , 404–407 (2008).

Tonetti, M. S. et al. Relevant domains, core outcome sets and measurements for implant dentistry clinical trials: the Implant Dentistry Core Outcome Set and Measurement (ID-COSM) international consensus report. J. Clin. Periodontol. 50 , 5–21 (2023).

Marcus, H. J. et al. The IDEAL framework for surgical robotics: development, comparative evaluation and long-term monitoring. Nat. Med. 30 , 61–75 (2024).

Sun, Z. J. & Tian, Z. M. Advances in neurosurgical surgical robotics. Chin. J. Minimally Invasive Neurosurg 5 , 238–240 (2008).

Abdul-Muhsin, H. & Patel, V. History of robotic surgery. In: Kim, K. (ed.) Robotics in General Surgery. (Springer, New York, NY, 2014).

Ewing, D. R., Pigazzi, A., Wang, Y. & Ballantyne, G. H. Robots in the operating room–the history. Semin. Laparosc. Surg. 11 , 63–71 (2004).

Leal, G. T. & Campos, C. O. 30 Years of robotic surgery. World J. Surg. 40 , 2550–2557 (2016).

Falcone, T., Goldberg, J., Garcia-Ruiz, A., Margossian, H. & Stevens, L. Full robotic assistance for laparoscopic tubal anastomosis: a case report. J. Laparoendosc. Adv. Surg. Tech. 9 , 107–113 (1999).

Maeso, S. et al. Efficacy of the Da Vinci surgical system in abdominal surgery compared with that of laparoscopy: a systematic review and meta-analysis. Ann. Surg. 252 , 254–262 (2010).

Jakopec, M. et al. The hands-on orthopaedic robot "Acrobot": early clinical trials of total knee replacement surgery. IEEE Trans. Robot. Autom. 19, 902–911 (2003).

Schweikard, A., Shiomi, H. & Adler, J. Respiration tracking in radiosurgery. Med. Phys. 31 , 2738–2741 (2004).

Lieberman, I. H. et al. Bone-mounted miniature robotic guidance for pedicle screw and translaminar facet screw placement: Part I—technical development and a test case result. Neurosurgery 59 , 641–650 (2006).

Reddy, V. Y. et al. View-synchronized robotic image-guided therapy for atrial fibrillation ablation: experimental validation and clinical feasibility. Circulation 115 , 2705–2714 (2007).

Subramanian, P., Wainwright, T. W., Bahadori, S. & Middleton, R. G. A review of the evolution of robotic-assisted total hip arthroplasty. Hip Int. 29 , 232–238 (2019).

Voros, S. et al. ViKY robotic scope holder: initial clinical experience and preliminary results using instrument tracking. IEEE/ASME Trans. Mechatron. 15, 879–886 (2010).

Zhao, R. F., Li, Z. W. & Bai, S. Z. Application of surgical robots in stomatology. Chin. J. Robot. Surg. 3 , 351–366 (2022).

Riga, C. V., Bicknell, C. D., Rolls, A., Cheshire, N. J. & Hamady, M. S. Robot-assisted fenestrated endovascular aneurysm repair (FEVAR) using the Magellan system. J. Vasc. Interv. Radiol. 24 , 191–196 (2013).

Herry, Y. et al. Improved joint-line restitution in unicompartmental knee arthroplasty using a robotic-assisted surgical technique. Int. Orthop. 41 , 2265–2271 (2017).

Wu, Q. & Zhao, Y. M. Application of robotics in stomatology. Int. J. Comput. Dent. 45 , 615–620 (2018).

Lang, S. et al. A european multicenter study evaluating the flex robotic system in transoral robotic surgery. Laryngoscope 127 , 391–395 (2017).

This work was supported by the National Natural Science Foundation of China [grant number 81970987].

Author information

These authors contributed equally: Chen Liu, Yuchen Liu

Authors and Affiliations

State Key Laboratory of Oral & Maxillofacial Reconstruction and Regeneration, Xi’an, China

Chen Liu, Yuchen Liu, Rui Xie, Zhiwen Li, Shizhu Bai & Yimin Zhao

National Clinical Research Center for Oral Diseases, Xi’an, China

Shaanxi Key Laboratory of Stomatology, Xi’an, China

Digital Center, School of Stomatology, The Fourth Military Medical University, Xi’an, China

Contributions

Conceptualization, Bai S.Z. and Zhao Y.M.; writing—original draft preparation, Liu C. and Liu Y.C.; writing—review and editing, Xie R., Li Z.W., Bai S.Z., and Zhao Y.M. All authors have read and agreed to the published version of the paper.

Corresponding authors

Correspondence to Shizhu Bai or Yimin Zhao .

Ethics declarations

Competing interests

The authors declare no competing interests.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/ .

Cite this article

Liu, C., Liu, Y., Xie, R. et al. The evolution of robotics: research and application progress of dental implant robotic systems. Int. J. Oral Sci. 16, 28 (2024). https://doi.org/10.1038/s41368-024-00296-x

Received: 15 January 2024

Revised: 11 March 2024

Accepted: 13 March 2024

Published: 08 April 2024

DOI: https://doi.org/10.1038/s41368-024-00296-x

The Role of Robotics in the Future of Work

NIOSH established the Future of Work Initiative in response to rapid changes in the workplace, work, and workforce. The Initiative seeks to prompt research and practical approaches to address future occupational safety and health concerns. The increased use of, and rapid technological advances in, robotics fit squarely within these changes in how work will increasingly be done.

The rapid increase in the use of robots, and the emergence of new types of robots, have created numerous knowledge gaps about how robotics technologies can best benefit workers while ensuring that their use does not contribute to worker harm. Technology advancement and robot adoption are not waiting for these research questions to be answered. While researchers work to fill the knowledge gaps, it is encouraged that 1) robotics manufacturers and integrators follow Prevention through Design principles; 2) consensus standards groups and others develop best practices; and 3) employers and workers follow these best practices.

Robots are not new to the workplace and have been used for decades in manufacturing. However, the last decade has seen tremendous advances in robotics technology and its application in work settings that are less controlled than manufacturing, such as agriculture, construction, and mining, as well as in settings where robots can come into contact with the public, such as healthcare, retail, and transportation. Control technologies for traditional industrial robots, which keep workers physically away from operating robots using cages and other engineering controls, are obsolete for the new types of robots that are designed to work with, around, and even be worn by workers. Risk assessment methods and control strategies must be developed and tested for these new types of robots. On Wednesday, June 22, 2022 (2-3 pm ET), NIOSH will host a free webinar: The Role of Robotics in the Future of Work (more details below).

NIOSH established the Center for Occupational Robotics Research (Robotics Center) to provide scientific leadership to guide the development and use of occupational robots that enhance worker safety, health, and well-being. The Robotics Center defines robots broadly and addresses traditional industrial robots, collaborative robots, co-existing or mobile robots, wearable robots or powered exoskeletons, remotely controlled or autonomous vehicles, and future robots that will have increased autonomy using advanced artificial intelligence.

Increased use of robotics is a two-sided coin for worker safety, health, and well-being. On one side, the potential to improve worker safety results from robots—rather than humans—doing dangerous work, such as work at heights, in confined spaces, with infectious patients and disinfection, and work that stresses the human body. On the other side are concerns that robots could 1) physically injure workers through unanticipated contact; 2) distract workers from hazards; and 3) mentally stress workers due to their lack of understanding and trust in the robot’s capabilities or concerns about job displacement.

NIOSH identified broad occupational robotics research needs in its Strategic Plan that guides its scientists and extramural (external) researchers who apply for research funding from NIOSH and partners. Robotics related research needs are included in: Goal 3, Reduce immune, infectious and dermal disease; Goal 4, Reduce occupational musculoskeletal disorders; Goal 6, Improve workplace safety to reduce traumatic injuries; and Goal 7, Promote safe and healthy workplace design and well-being. Research goals are included for each of the four broad types of research conducted by NIOSH:

  • Surveillance: Methods and techniques for systematic collection, analysis, and interpretation of data on injuries associated with robots.
  • Basic/etiologic: Risk factors contributing to robot-related injuries, such as the human-robot interface.
  • Intervention: Evaluating robotics technologies as interventions to improve worker safety, and evaluating control technologies to improve worker safety around robots.
  • Translation: Evaluating aids and barriers to translating occupational robotics research findings into practice.

More detailed research needs support these research goals and can be found on the Robotics Center Research Webpage and in The NIOSH Future of Work Initiative Research Agenda.

NIOSH is building a research portfolio to address occupational robotics research needs. This includes research conducted by NIOSH scientists and funding of research by external scientists:

  • State Fatality Assessment and Control Evaluation (FACE) programs that have investigated worker deaths and serious injuries associated with robotics technology and recommended prevention practices
  • Researcher Initiated Grants
  • Research at, and supported by, Education & Research Centers, National Construction Center (CPWR) , and Centers for Agricultural Safety and Health
  • NIOSH Mining Program Contracts
  • Partnering with the National Science Foundation to fund extramural occupational robotics safety and health research.

Descriptions of NIOSH-supported research conducted by NIOSH and external scientists are on the Robotics Center Research Webpage. The research addresses multiple robotics technologies (i.e., collaborative robots, mobile robots, exoskeletons, autonomous ground vehicles and machines, and drones). Further, the portfolio includes robotics applications in multiple industries (i.e., agriculture, construction, mining, and healthcare), with activities ranging from pilot projects and small studies to larger-scale projects. Much of this research is in progress, and some has been delayed by the COVID-19 pandemic. Related scientific articles and reports are available on the Robotics Center Publication Page.

In addition to working to fill occupational robotics research needs, NIOSH is working with partners to develop best practices. Through an Occupational Safety and Health Administration (OSHA) Alliance that includes the Association for Advancing Automation and NIOSH, the OSHA Technical Manual chapter titled Industrial Robot Systems and Industrial Robot System Safety was recently updated with new guidance for assessing and working safely with robotic systems. NIOSH researchers participate on several robotics-related consensus standards committees, including ISO/TC 299 Robotics, ANSI/RIA R15.06 Industrial Robots and Robot System Safety, ANSI/R15.08 Industrial Mobile Robot Safety, and ASTM F48 Exoskeletons and Exosuits. NIOSH researchers also contributed to consensus documents that might lead to future standards: ANSI/ASSP/NSC Z15.3 Safety Management of Partially and Fully Automated Vehicles (Technical Report) and the ANSI Unmanned Aircraft Systems Standardization Collaborative (UASSC). Occupational safety researchers and practitioners are similarly encouraged to participate in standards activities to bring their expertise to bear on these documents, which are often used as, or lead to, best practices.

In summary, research and work to develop best practices for working safely with robots are needed to position the occupational safety and health community to proactively address the proliferation of robotics technologies that are a significant component of the future of work.

Would you like to learn more about the occupational safety and health implications of robotics? Join us on Wednesday, June 22, from 2:00-3:00 pm ET for a free webinar, The Role of Robotics in the Future of Work, featuring Ms. Dawn Castillo, Director of the Division of Safety Research at CDC/NIOSH, and Dr. Chukwuma (Chuma) Nnaji, Assistant Professor in the Department of Civil, Construction, and Environmental Engineering at The University of Alabama. Register here to attend this webinar presented by the NIOSH Future of Work Initiative.

Missed previous webinars in the series? You can watch them here.

As NIOSH works to build an occupational robotics research portfolio and contribute to best practices that will improve worker safety now and in the future, we are interested in your experiences and perspectives. What trends are you seeing with new robotics technologies that NIOSH research should aim to address? What aspects of occupational robotics do you think are especially important to address through best practices and occupational safety and health guidance?

Dawn N. Castillo, MPH is the Director of the NIOSH Division of Safety Research and Manager of the NIOSH Center for Occupational Robotics Research.

Jacob L. Carr, PhD is the Team Leader for the Mining Technologies Team in the NIOSH Pittsburgh Mining Research Center and Coordinator of the NIOSH Center for Occupational Robotics Research.

W. Allen Robison, PhD is the Director of the NIOSH Office of Extramural Programs.

Related NIOSH Science Blogs

A Robot May Not Injure a Worker: Working safely with robots 

NIOSH Presents: An Occupational Safety and Health Perspective on Robotics Applications in the Workplace

FACE Investigations Make Recommendations to Improve the Safety of New Types of Robots

Wearable Exoskeletons to Reduce Physical Load at Work 

Industrial Exoskeletons 

Exoskeletons in Construction: Will they reduce or create hazards?

Exoskeletons and Occupational Health Equity 

Can Exoskeletons Reduce Musculoskeletal Disorders in Healthcare Workers? 

Can Drones Make Construction Safer?

Semi-Autonomous Motor Vehicles: What Are the Implications for Work-related Road Safety?

Preparing Your Fleet for Automated Vehicles

The Role of Technological Job Displacement in the Future of Work

3 comments on “The Role of Robotics in the Future of Work”

It's no secret that robots are increasingly becoming a part of our lives. From manufacturing and logistics to healthcare and retail, they are slowly but surely taking on more and more tasks that were once done by human beings. But what does this mean for the future of work? There is no doubt that robotics will have a major impact on the way we live and work in the future. For many people, this change cannot come soon enough. The repetitive and physically demanding nature of many jobs makes them perfect candidates for automation. In fact, it is estimated that up to 50% of all jobs could be automated in the next 20 years. This would free up humans to focus on more creative and higher-level tasks, leading to a more productive and efficient workforce. Of course, there will be some challenges that need to be addressed, such as job loss and the displacement of workers. But if we embrace the changes that are coming, there is no doubt that robotics will have a positive impact on the future of work.

Best, Isaac Robertson

Robotics stands poised to profoundly impact the future of work. On the one hand, it automates repetitive and dangerous tasks, increasing productivity and saving lives. Robots will handle assembly lines, operate in hazardous environments, and assist in intricate surgeries, freeing humans for more creative and strategic roles. This shift will require reskilling and upskilling initiatives to ensure a smooth transition for workers whose jobs become obsolete.

On the other hand, the rise of automation also raises concerns about job displacement. As robots become increasingly sophisticated, they may take over tasks currently performed by humans, leading to unemployment and economic hardship. To mitigate this, policymakers and businesses need to invest in education and training programs that equip individuals with the skills necessary to thrive in a robotized workplace. Additionally, social safety nets must be strengthened to support those who lose their jobs due to automation. The future of work will necessitate a collaborative approach, embracing the benefits of robotics while ensuring a just and equitable transition for all.

Thanks for sharing your insights and perspectives.

Mobile robotics in smart farming: current trends and applications

Darío Fernando Yépez-Ponce

1 Instituto Universitario de Automática e Informática Industrial, Universitat Politècnica de València, Valencia, Spain

2 Facultad de Ingeniería en Ciencias Aplicadas, Universidad Técnica del Norte, Ibarra, Ecuador

José Vicente Salcedo

Paúl D. Rosero-Montalvo

3 Computer Science Department, IT University of Copenhagen, Copenhagen, Denmark

Javier Sanchis

In recent years, precision agriculture and smart farming have advanced by leaps and bounds as arable land has become increasingly scarce. According to the Food and Agriculture Organization (FAO), by 2050 world food production will need to grow by about one-third above current levels. Farmers have therefore used fertilizers intensively to promote crop growth and yields, which has adversely affected the nutritional quality of foodstuffs. To address challenges related to productivity, environmental impact, food safety, crop losses, and sustainability, mobile robots in agriculture have proliferated, integrating mainly path planning and crop information gathering. Current agricultural robotic systems are large and costly because they use a computer as a server and mobile robots as clients. This article reviews the use of mobile robotics in farming to reduce costs, reduce environmental impact, and optimize harvests. It establishes the current status of mobile robotics, the technologies employed, the algorithms applied, and the relevant results obtained in smart farming. Finally, the challenges facing new smart farming techniques are presented: environmental conditions, implementation costs, technical requirements, process automation, connectivity, and processing power. Among the contributions of this article, it was possible to conclude that the leading technologies for implementing smart farming are the Internet of Things (IoT), mobile robotics, artificial intelligence, artificial vision, multi-objective control, and big data. One technological solution that could be implemented is a fully autonomous, low-cost agricultural mobile robotic system that does not depend on a server.

1. Introduction

In recent years, the global population has increased unprecedentedly, leading to significant changes in food demand (Dhumale and Bhaskar, 2021). Demand for food is expected to continue to rise, driven by factors such as population growth, urbanization, and changing dietary preferences, while the effects of climate change create new challenges for the food industry (Dutta et al., 2021). Springmann et al. (2018) mention that by 2050 the food chain might need to increase production by 50%. The FAO, moreover, projects that the world population will reach approximately 10 billion by that year (Ahmed et al., 2018). This population increase strains environmental conditions and changes the harvesting process, forcing farmers to use fertilizers and pesticides (Shafi et al., 2019), whose residues contaminate water (Rajeshwari et al., 2021). Another concern is the nutritional value of food: as environmental conditions worsen, creating floods and droughts, people who rely on processed food no longer receive enough nutrients to stay healthy and come to depend on pills and supplements (Mostari et al., 2021). The Intergovernmental Panel on Climate Change (IPCC) warns that global warming reduces the nutritional value of crops due to the intensive use of fertilizers to boost yields; it also predicts that in the coming years people may suffer from zinc deficiency, even leading to psychological and cognitive disorders (Ryan et al., 2021).

The limited adoption of technology in the food production industry is a significant challenge that impedes progress and innovation in this critical sector. With a rapidly growing global population and increasing demand for food, adopting technological advancements to improve food production and distribution has become imperative (Ferrag et al., 2021). However, in many parts of the world, particularly in developing countries, technology in food production remains inadequate, resulting in low productivity, high food losses, and reduced efficiency. Given that a large share of food production comes from developing countries, the lack of advanced agricultural technologies is acute (Khan et al., 2021). These countries face significant financial constraints and limited access to modern technologies, which can impede their ability to improve food production processes. This concern extends to the education and training of the workforce, who may not have the knowledge and skills to operate and maintain technological tools and equipment effectively (Xuan, 2021).

To mitigate these concerns about food supply, the FAO proposes four action points to guarantee food quality in the coming years, all closely related to the use of technology, since information plays a fundamental role in ensuring the economic and sustainability impacts of new cutting-edge techniques in the food production process (Mooney, 2020).

Implementing emerging technologies in agriculture is often called smart farming, which aims to improve productivity, efficiency, and sustainability (Raj et al., 2021). Belhadi et al. (2021) mention that smart farming might use trending technologies such as robotics, artificial intelligence, and the IoT. These devices can gather data from crops to extract intrinsic knowledge from plants, improve agricultural decision-making, and reduce environmental impact (Megeto et al., 2021). However, fully exploiting the potential of smart farming presents several technical, socio-economic, and administrative challenges and constraints (Mengoli et al., 2021). Works such as Ahmed et al. (2016), Jawad et al. (2017), Bermeo-Almeida et al. (2018), Kamilaris and Prenafeta-Boldu (2018), and Rahmadian and Widyartono (2020) present broad approaches to smart farming and trending technologies without focusing only on robots. These studies do not include a detailed discussion of the tools and techniques used to develop the different mobile systems or their level of maturity. It is therefore relevant to discuss the use of mobile robotics in smart farming from different perspectives and describe the corresponding nuances.

This article stands out from others of a similar nature because it offers a broad overview of the challenges and opportunities presented by precision agriculture and robotic farming. The article focuses on the use of robotics and precision agriculture in agriculture 4.0 and provides a detailed description of the many types of agricultural robots used, as well as the techniques and hardware used for their operation and monitoring. Additionally, the article highlights the areas where literature is least developed and suggests potential solutions to address these challenges. Future trends in precision agriculture and robotics are also discussed, including the use of multi-objective control algorithms and artificial intelligence in low-cost mobile robots for planning the best path while accounting for energy efficiency, soil type, and obstacles, as well as for evaluating and managing pests and diseases that affect crops.

This work aims to present an overview of mobile robotics implemented for agricultural production related to smart farming techniques. The main contribution of this work is to show the existing frameworks, tools, and applications where robots are currently used. It also presents shortcomings in smart farming applications, which may indicate future trends in robotics. The rest of the manuscript is structured as follows: Section 2 describes the research methodology. Section 3 gives the smart farming background and provides a detailed overview of the leading mobile robots and existing technologies. Section 4 presents the discussion, highlighting the technical and socio-economic obstacles to successfully integrating mobile robotics in agriculture. Section 5 presents future trends related to mobile robotics in agriculture. Finally, Section 6 presents the conclusions.

2. Research methodology

A systematic literature review (SLR) was performed to manage the diverse knowledge and identify research related to the topic (Ahmed et al., 2016), especially to investigate the status of mobile robotics in precision agriculture. In particular, we searched for papers containing "mobile robotics" together with the term "agriculture 4.0" in the title, abstract, or keywords. Prior to the SLR, a review protocol was defined to ensure a transparent, high-quality, and comprehensive research process (Page et al., 2021), including three steps: formulating the research questions, defining the search strategy, and specifying the inclusion and exclusion criteria. The Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) approach was used to conduct the SLR.

2.1. Review protocol

Before starting the bibliographic analysis, a review protocol was defined to identify, evaluate, and interpret the relevant results of the research topic (see Table 1). The first step was to formulate research questions to identify the studies published on the subject of interest from different approaches. The appropriate keywords were then identified in order to formulate search strings for four databases: IEEE Xplore, Web of Science, Scopus, and ScienceDirect. To refine the search results, inclusion and exclusion criteria were defined to evaluate the content of the publications, used as a preliminary filter on the metadata sources and to limit the scope of the research.

Table 1. Review protocol for the SLR.

After performing the SLR, 69 research articles were obtained on the proposed topic. After the PRISMA selection and eligibility steps, duplicate records were identified and eliminated with the help of the Mendeley bibliographic reference manager, leaving a total of 65 research papers, as can be seen in Figure 1.
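
As an illustration of this deduplication step, the sketch below merges records that share a normalized title key; the record fields and sample entries are hypothetical, and this merely mimics what a reference manager such as Mendeley automates.

    import re

    # Hypothetical records exported from the four databases; only 'title' matters here.
    records = [
        {"title": "Mobile Robotics in Smart Farming", "source": "Scopus"},
        {"title": "mobile robotics in smart farming.", "source": "IEEE Xplore"},
        {"title": "A review of agricultural UGVs", "source": "Web of Science"},
    ]

    def title_key(title):
        # Normalize case, punctuation, and whitespace so near-identical titles collide.
        return re.sub(r"[^a-z0-9 ]", "", title.lower()).strip()

    unique = {}
    for rec in records:
        unique.setdefault(title_key(rec["title"]), rec)   # keep the first occurrence

    print(len(records), "->", len(unique))   # 3 -> 2 after removing the duplicate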

Figure 1. Three-step evaluation of the literature search process (PRISMA).

2.2. Trends in agriculture

Regarding the distribution of the 65 articles by year, about 38% were published in 2021, reflecting the considerable recent progress of agriculture in the context of mobile robotics, although the pace can still be considered slow compared to other domains such as healthcare, manufacturing, mining, automation, and energy (Araújo et al., 2021).

Figure 2 gives the breakdown of publications on the five most common activities carried out in agriculture 4.0 and the type of mobile robot employed. The multiple-tasks-in-the-field category includes activities such as row recognition and tracking, obstacle detection and avoidance, and information gathering and reporting in both outdoor and greenhouse agriculture.

Figure 2. Mobile robotics activities in agriculture.

According to the International Federation of Robotics (IFR), the top five service robot applications for professional use sold during 2019 and 2020 are: transportation and logistics, professional cleaning, medical robotics, hospitality, and agriculture (International Federation of Robotics, 2021 ). Figure 3 gives the percentage of robots employed in each of these areas.

Figure 3. Percentage of the top five applications of service robots in 2020.

3. Background and related works

Smart farming is a technique that uses advanced technology to optimize yield and efficiency in agricultural production. Lohchab et al. (2018) explored the application of IoT technologies in smart agriculture. Subsequently, Sharma et al. (2020) focused on the use of artificial intelligence and machine learning in smart agriculture, and Ratnaparkhi et al. (2020) discussed the implementation of sensor technologies and Geographic Information Systems (GIS) for smart agriculture. Finally, in a recent review, Botta et al. (2022) examined the integration of robotics and automation in smart agriculture. Topics that have received very little attention in smart agriculture include:

  • the integration of smart agriculture with the circular economy and environmental sustainability;
  • artificial intelligence and machine learning technologies for pest and disease identification and management;
  • optimizing water use and irrigation management in response to climate change and limited water availability;
  • improved connectivity and interoperability of systems to facilitate large-scale adoption and implementation; and
  • low-cost solutions tailored to small farms and rural communities in developing countries, to improve food security and reduce rural poverty.

3.1. Smart farming

Smart farming is based on information provided by sensors placed in an agricultural field (Ahmed et al., 2016); Machine Learning (ML) models can then learn patterns to support farmers' decision-making (Mammarella et al., 2020; Shorewala et al., 2021). These sensors, joined with a microcontroller that sends data constantly, are considered part of the IoT; the data may then be processed on large servers hosted in the cloud (cloud computing). However, IoT devices are often a rigid solution, since they are fixed in a single location. Autonomous Robotic Systems (ARS) can instead move through the crops, taking data from the whole farm and providing accurate information (Ozdogan et al., 2017; Kamilaris and Prenafeta-Boldu, 2018). This combination of sensors, data analysis, and robots gives farmers a smart farming application with diverse tools to address challenges related to productivity, environmental impact, food safety, crop losses, and sustainability. The objectives of smart farming are to increase crop yields, minimize costs, and improve product quality through a modern system (Araújo et al., 2021). In recent years, technological evolution has produced sensors that can collect data in almost any location, allowing real-time monitoring of agricultural fields without wiring. The three leading technologies that contribute significantly to this field are as follows (a minimal sensor-node sketch follows the list):

  • Drones: These are small flying robots commonly used for crop monitoring, food infrastructure inspection, supply chain monitoring, and food safety surveillance (Costa et al., 2021 ).
  • Autonomous tractors: These are generally Unmanned Ground Vehicles (UGV) incorporating sensors and actuators that enable crop monitoring, irrigation, harvesting, and disease control (Lisbinski et al., 2020 ).
  • Software for decision making: These are platforms where data acquired by drones and/or UGV sensors are visualized and analyzed. They generally provide information on weather, soil, crop yields, and other factors relevant to agricultural production to improve decision-making (Ojeda-Beltran, 2022 ).
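
To ground the sensor-plus-microcontroller picture above, here is a minimal duty-cycled sensor-node loop in Python. The node identifier, field names, and the simulated moisture reading are illustrative assumptions; a real node would publish each record over MQTT, LoRaWAN, or a similar link instead of printing it.

    import json, random, time

    def read_soil_moisture():
        # Placeholder for an actual ADC read on the node's moisture probe.
        return round(random.uniform(25.0, 40.0), 1)

    NODE_ID = "field-a-node-07"   # illustrative identifier

    for _ in range(3):            # a real node loops indefinitely
        record = {
            "node": NODE_ID,
            "ts": time.time(),
            "soil_moisture_pct": read_soil_moisture(),
        }
        print(json.dumps(record))   # stand-in for publishing to the farm gateway
        time.sleep(1)               # in deployment: deep-sleep for minutes to save power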

3.2. Mobile robotics in agriculture

The emerging field of agricultural mobile robotics centers on UGV and UAV (Prakash et al., 2020). The main applications of mobile robotics in farming are:

  • Identification of the crop's state and the corresponding application of chemical products, fumigation, or harvesting, as required by the fruit or plant.
  • Mobile manipulation through collaborative arms (harvesting, fruit handling).
  • Collection and conversion of information helpful to the farmer.
  • Selective application of pesticides and avoidance of food waste.

UGV and UAV have limited onboard power, so optimizing their design and control is paramount for their application in smart farming. Research on cooperation between UGV and UAV is therefore being carried out to cover large agricultural areas. These autonomous robots are intelligent machines capable of performing tasks, making decisions, and acting in real time with a high degree of autonomy (Rahmadian and Widyartono, 2020). Interest in mobile robotics in agriculture has grown considerably in recent years due to its ability to automate tasks such as planting, irrigation, fertilization, spraying, environmental monitoring, disease detection, harvesting, and weed and pest control (Araújo et al., 2021). Furthermore, mobile robotics in smart farming uses a combination of emerging technologies to improve the productivity and quality of agricultural products (Bechar and Vigneault, 2016).

UGV are robots whose control can be remote (a human operator acting through an interface) or fully autonomous (operating without a human controller, based on AI technologies) (Araújo et al., 2021). The main components of a UGV are the locomotion, manipulation, and supervisory control systems, sensors for navigation, and communication links for information exchange between devices. The main locomotion systems used are wheels, tracks, or legs. To operate properly in the field, UGV must meet size, maneuverability, efficiency, human-friendly interface, and safety requirements. Table 2 summarizes the diverse range of UGV designed for agricultural operations.
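
As a concrete instance of the wheeled locomotion mentioned above, the following sketch applies the standard differential-drive kinematic model, updating a pose from left/right wheel speeds; the wheel radius, track width, and speeds are arbitrary example values rather than parameters of any UGV in Table 2.

    import math

    def dd_update(x, y, theta, wl, wr, r=0.15, track=0.5, dt=0.1):
        """One odometry step for a differential-drive base.
        wl, wr: left/right wheel angular speeds (rad/s); r: wheel radius (m);
        track: distance between wheels (m); dt: time step (s)."""
        v = r * (wl + wr) / 2.0          # forward speed
        w = r * (wr - wl) / track        # yaw rate
        return (x + v * math.cos(theta) * dt,
                y + v * math.sin(theta) * dt,
                theta + w * dt)

    pose = (0.0, 0.0, 0.0)
    for _ in range(50):                  # gentle left arc: right wheel slightly faster
        pose = dd_update(*pose, wl=4.0, wr=4.5)
    print(tuple(round(p, 2) for p in pose))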

Table 2. Different types of UGV in agriculture 4.0.

The main challenge for mobile robotics in agricultural fields is to perform multiple tasks (obstacle avoidance, tracking, path planning, crop data collection, disease detection, among others) autonomously, with reduced hardware, so that low-cost robots can be acquired and deployed by farmers. Most UGV presented above have a wheeled locomotion system, which offers easy construction and control. Some incorporate low-cost computer vision systems, i.e., conventional cameras, and may employ heuristic algorithms still in the conceptual or prototyping phase. Due to the limitations of UGV, and to cover larger areas in less time, UGV-UAV collaboration has been developed in recent years (Khanna et al., 2015). The UGV operates in the areas selected by the UAV, which also cooperates in generating 3D maps of the environment with centimeter accuracy. However, merging the maps generated by UAV and UGV in an agricultural setting is a complex task, since the generated maps present inaccuracies and scale errors due to local inconsistencies, missing data, occlusions, and global deformations (Gawel et al., 2017; Potena et al., 2019); a small alignment sketch is given after Table 3. Table 3 reviews some collaborations between UGV and UAV in smart farming.

Table 3. Collaborations between UGV and UAV in smart farming.

Most collaborative systems between UAV and UGV are in the conceptual (simulation) phase.
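
To illustrate the scale problem noted above, this sketch estimates a 2D similarity transform (scale, rotation, translation) between matched map points with the Umeyama method; the point correspondences are synthetic, and a full aerial-ground merging pipeline such as AgriColMap involves much more (feature matching, outlier rejection, non-rigid corrections).

    import numpy as np

    def umeyama_2d(src, dst):
        """Least-squares similarity transform mapping src points onto dst.
        src, dst: (N, 2) arrays of matched points. Returns (scale, R, t)."""
        mu_s, mu_d = src.mean(axis=0), dst.mean(axis=0)
        sc, dc = src - mu_s, dst - mu_d
        cov = dc.T @ sc / len(src)           # cross-covariance of the point sets
        U, S, Vt = np.linalg.svd(cov)
        sign = np.eye(2)
        if np.linalg.det(U) * np.linalg.det(Vt) < 0:
            sign[1, 1] = -1.0                # guard against reflections
        R = U @ sign @ Vt
        var_s = (sc ** 2).sum() / len(src)   # variance of the source points
        scale = (S * np.diag(sign)).sum() / var_s
        t = mu_d - scale * R @ mu_s
        return scale, R, t

    # Synthetic check: a UGV map that is a scaled, rotated, shifted copy of the UAV map.
    rng = np.random.default_rng(0)
    uav = rng.uniform(0, 50, size=(30, 2))
    ang = np.radians(12)
    R_true = np.array([[np.cos(ang), -np.sin(ang)], [np.sin(ang), np.cos(ang)]])
    ugv = 0.8 * uav @ R_true.T + np.array([5.0, -3.0])
    scale, R, t = umeyama_2d(uav, ugv)
    print(round(scale, 3))                   # recovers the 0.8 scale factor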

3.3. Multi-objective control in smart farming

Agricultural systems use and produce energy in the form of bioenergy and play a vital role in the global economy and food security. Modern agricultural systems should therefore consider economic, energy, and environmental factors simultaneously (Banasik et al., 2017). Multi-objective control is an important tool in smart farming for optimizing multiple objectives simultaneously, such as productivity, water-use efficiency, product quality, and economic profitability. Some cases of multi-objective control in smart farming are presented in Table 4, which shows their primary function, control techniques, and the hardware deployed. However, studies are still scarce, since this topic is new in smart farming applications with mobile robots. Furthermore, path planning is an essential application of smart agriculture that focuses on optimizing the routes and movements of agricultural machinery to improve efficiency and reduce production costs (Nazarahari et al., 2019).

Table 4. Multi-objective control in agriculture 4.0.

Another application of multi-objective control in path planning is the optimization of fertilization and pesticide application in crops. According to a study by Zhao et al. (2023), multi-objective control can optimize the routing of pesticide and fertilizer application machinery to reduce the quantity of inputs used and improve application efficiency. In addition, multi-objective control can also improve product quality and reduce environmental pollution by applying crop inputs accurately.
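
To make the multi-objective idea concrete, the sketch below plans a grid path that trades travel distance against a per-cell energy (terrain) cost through a weighted sum, a common scalarization approach; the grid values and weights are illustrative assumptions, not data from the cited studies.

    import heapq

    def plan(grid_energy, start, goal, w_dist=1.0, w_energy=0.5):
        # grid_energy[r][c]: per-cell energy cost (e.g., soft soil > firm track).
        rows, cols = len(grid_energy), len(grid_energy[0])
        best = {start: 0.0}
        frontier = [(0.0, start, [start])]
        while frontier:
            cost, node, path = heapq.heappop(frontier)
            if node == goal:
                return cost, path
            if cost > best.get(node, float("inf")):
                continue  # stale queue entry
            for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                nr, nc = node[0] + dr, node[1] + dc
                if 0 <= nr < rows and 0 <= nc < cols:
                    new = cost + w_dist * 1.0 + w_energy * grid_energy[nr][nc]
                    if new < best.get((nr, nc), float("inf")):
                        best[(nr, nc)] = new
                        heapq.heappush(frontier, (new, (nr, nc), path + [(nr, nc)]))
        return float("inf"), []

    # 0 = firm track, 3 = soft soil worth avoiding once energy is weighted in.
    energy = [[0, 3, 3, 0],
              [0, 3, 0, 0],
              [0, 0, 0, 3]]
    cost, path = plan(energy, (0, 0), (2, 3))
    print(round(cost, 1), path)

Raising w_energy shifts the solution toward longer but cheaper routes, which is exactly the productivity-versus-energy trade-off the cited studies formalize.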

Finally, a study proposes a Residual-like Soft Actor-Critic (R-SAC) algorithm for agricultural scenarios to achieve safe obstacle avoidance and intelligent path planning for robots. The study also proposes an offline expert-experience pre-training method to improve the training efficiency of reinforcement learning. Experiments verify that the method performs stably in static and dynamic obstacle environments and outperforms other reinforcement learning algorithms (Yang et al., 2022).
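
R-SAC itself relies on neural actor and critic networks and is beyond a short example, but the minimal tabular Q-learning sketch below illustrates the same underlying idea: a policy learned under a collision penalty and a goal reward. The grid, rewards, and hyperparameters are toy assumptions.

    import random

    SIZE, OBSTACLE, GOAL = 4, (1, 1), (3, 3)
    ACTIONS = ((1, 0), (-1, 0), (0, 1), (0, -1))
    Q = {((r, c), a): 0.0 for r in range(SIZE) for c in range(SIZE) for a in ACTIONS}

    def step(s, a):
        nxt = (min(SIZE - 1, max(0, s[0] + a[0])),
               min(SIZE - 1, max(0, s[1] + a[1])))
        if nxt == OBSTACLE:
            return s, -10.0, False      # collision: penalized, robot stays put
        if nxt == GOAL:
            return nxt, 10.0, True
        return nxt, -1.0, False         # small step cost favors short, safe paths

    alpha, gamma, eps = 0.5, 0.95, 0.2
    random.seed(0)
    for _ in range(2000):               # training episodes
        s, done, steps = (0, 0), False, 0
        while not done and steps < 100:
            if random.random() < eps:
                a = random.choice(ACTIONS)                     # explore
            else:
                a = max(ACTIONS, key=lambda act: Q[(s, act)])  # exploit
            s2, r, done = step(s, a)
            Q[(s, a)] += alpha * (r + gamma * max(Q[(s2, b)] for b in ACTIONS) - Q[(s, a)])
            s, steps = s2, steps + 1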

4. Discussion

With the information presented above about mobile robots in smart farming, this section aims to show the next steps in this research field and its challenges. Given the UGV and UAV trends in Table 2, multi-objective control has yet to be widely explored in smart farming applications, likely due to its complex setup and the expensive computational resources needed. However, multi-objective applications may become feasible in upcoming robots thanks to ongoing advances in microcontrollers and microprocessors. Conversely, IoT devices that collect data from farms are already extensively deployed in several applications. However, there are new concerns about confidentiality and the risk that data are exposed while traveling over communication channels (Pylianidis et al., 2021).

Smart farming needs final devices with robust systems that can work in harsh outdoor conditions; however, several works have only shown prototypes with tentative functionalities. Building robots may require several debugging rounds to solve hardware and software issues. Consequently, since the robot links people and plants, farmers, the domain experts in smart farming, must work closely with the robot's developers. Even so, the variety of plant and crop species makes it challenging to develop a multi-task robot (Selmani et al., 2019).

This section presents the main challenges and future research directions for deploying smart farming. The present study sought to articulate mobile robotics with smart farming. Looking at Table 4, it can be seen that multi-objective control has not been significantly explored in smart farming. One reason could be that applying advanced technologies with complex operations can be costly; hence, the development of these technologies in smart farming should increase in the coming years. The IoT, by contrast, is widely deployed in agriculture for crop monitoring and tracking, and can be considered a research trend within smart farming. However, only a few studies have considered data security and reliability, scalability, and interoperability when developing a smart farming system (Pylianidis et al., 2021).

The results also show that most use cases are in the prototype phase. One possible reason is that smart farming links people, animals, and plants, making it more difficult than creating systems for non-living things. Another is the transdisciplinarity of the field: developing intelligent systems requires farmers to be familiar with these technologies. Finally, the variety of plant and crop species makes implementing technology in agricultural fields complex (Selmani et al., 2019). The results also show that most systems developed are for free-range farms, and that research is largely limited to soil management, fruit detection, and crop quality management. This corroborates that work must be done on the research and development of systems that guarantee the deployment of smart farming at affordable costs. The natural complexity of agricultural fields presents a number of obstacles that prevent the full integration of mobile robotics in smart farming. From the analysis, these roadblocks have been identified and classified at the technical and socio-economic levels.

4.1. Technical roadblocks

  • Interoperability. To establish effective communication between heterogeneous devices, they need to be interconnected and interoperable (Aydin and Aydin, 2020).
  • Data quality. The lack of decentralized systems impedes the deployment of smart farming (Liu et al., 2022).
  • Hardware. A suitable casing must be constructed that is robust and durable enough to withstand actual field conditions (Villa-Henriksen et al., 2020).
  • Power sources. A proper energy-saving scheme is necessary, as instant battery replacement is complicated. A possible solution to optimize power consumption is using low-power hardware and proper communications management (Jawad et al., 2017); a back-of-the-envelope duty-cycling estimate follows this list.
  • Wireless architectures. Wireless communication networks and technologies offer several advantages in terms of low cost, wide area coverage, network flexibility, and high scalability (Brinis and Saidane, 2016).
  • Security. The nature of agricultural fields leads to risks to data privacy, integrity, and availability (Chen et al., 2017).
  • User interface. Most graphical user interfaces are designed so that only experts can use them (Del Cerro et al., 2021).
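
As a back-of-the-envelope illustration of the power-sources point above, the following estimates battery life for a duty-cycled node; all currents, timings, and the battery capacity are assumed example values.

    # A node that wakes briefly to sample/transmit and sleeps the rest of the time.
    active_ma, sleep_ma = 120.0, 0.05      # current draw in each state (assumed)
    active_s, period_s = 2.0, 600.0        # 2 s of work every 10 minutes (assumed)
    battery_mah = 2000.0                   # typical single-cell Li-ion capacity

    avg_ma = (active_ma * active_s + sleep_ma * (period_s - active_s)) / period_s
    print(f"average draw: {avg_ma:.3f} mA")              # ~0.45 mA
    print(f"battery life: {battery_mah / avg_ma / 24:.0f} days")   # roughly half a year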

4.2. Socio-economic roadblocks

  • Costs. The costs associated with adopting robotic technologies and systems are the biggest drawback to deploying smart farming (Sinha and Dhanalakshmi, 2022).
  • Return on investment. When implementing new technologies, farmers are concerned about the payback time and the difficulty of assessing the benefits (Miranda et al., 2019).
  • Gap between farmers and researchers. Farmer involvement is paramount to the success of smart farming; farmers face many problems during the production process that technology could solve (Bacco et al., 2019).

Finally, Charatsari et al. (2022) discuss the importance of responsibility in the process of technological innovation in the agrifood industry, highlighting the need to consider not only technical aspects but also social implications and societal values when introducing innovative technologies. The authors argue that the perception of responsible innovation is limited in various industrial sectors, making responsible innovation approaches challenging to implement. The complexity of responsible innovation in the agrifood industry requires addressing the multiple scales and levels of interaction between actors and the constant evolution of agrifood systems. The article therefore emphasizes the need to adopt responsible innovation practices that consider the social, ethical, and environmental implications of technological innovations in the industry.

5. Future trends

The upcoming initiatives related to the use of robots represent significant improvements in smart farming. Government initiatives, public-private partnerships, and research in this field might help establish the right conditions for adding new hardware to crops. However, several challenges must be considered when developing mobile robots for agriculture: navigation on uneven terrain (loose soil and unpredictable obstacles) without damaging plants or compromising the robot's own safety; energy efficiency, so robots can operate for long periods without constant human intervention; crop manipulation; integration with farm management systems; and adaptability to different crops and conditions.

For instance, a robotic system for smart farming can be developed starting from a basic architecture with few components and simple functionality, gradually adding features to create a complex system. Future trends in smart farming involve using multi-objective control algorithms and artificial intelligence in low-cost mobile robots to plan the best trajectory considering energy efficiency, soil type, and obstacles, while monitoring crop growth and assessing and controlling crop pests and diseases. To ensure good connectivity and live transmission of crop data, 5G technology needs to be widely explored: it reduces connectivity costs and improves information management by enabling accurate remote inspection of agricultural fields (Abbasi et al., 2021). Finally, blockchain, combined with IoT and other technologies, should be applied to address the challenge of information privacy and security (Bermeo-Almeida et al., 2018).
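
As a minimal illustration of the integrity idea behind such blockchain proposals (not any specific system from the cited works), the sketch below chains sensor records with SHA-256 hashes so that tampering with any reading invalidates the chain; a real deployment would add signatures and distributed consensus.

    import hashlib, json, time

    def add_record(chain, reading):
        # Each record stores the hash of its predecessor, linking the history.
        prev_hash = chain[-1]["hash"] if chain else "0" * 64
        body = {"t": time.time(), "reading": reading, "prev": prev_hash}
        digest = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
        chain.append({**body, "hash": digest})

    def verify(chain):
        for i, rec in enumerate(chain):
            body = {k: rec[k] for k in ("t", "reading", "prev")}
            if hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest() != rec["hash"]:
                return False                       # record was altered
            if i and rec["prev"] != chain[i - 1]["hash"]:
                return False                       # link to predecessor broken
        return True

    chain = []
    for moisture in (31.2, 30.8, 29.9):
        add_record(chain, {"soil_moisture_pct": moisture})
    print(verify(chain))   # True; edit any stored reading and it becomes False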

As seen in the tables in the previous sections, most UGV carry an onboard computer, which increases the cost of this type of robot. Table 5 shows several state-of-the-art boards that could deploy smart farming at prices affordable for farmers.

Table 5. Boards for agriculture 4.0.

Finally, Figure 4 sketches the future of agriculture, for which correct 5G network deployment and path planning/tracking are essential. Artificial intelligence, machine learning, machine vision, IoT, and cloud computing are needed in each of the activities carried out in agricultural fields.

Figure 4. Future of agriculture.

6. Conclusions

Growing concerns about global food security have accelerated the need to incorporate mobile robots in agriculture. The scientific community is integrating disruptive technologies into conventional agricultural systems to increase crop quality and yields, minimize costs, and reduce waste generation. This article analyzes the current state and challenges of smart farming. Considering the impact of farming on climate change and healthy food production, it is vital to provide the agricultural sector with low-cost, functional mobile robots. Research questions were posed and answered regarding the use of mobile robotics in agriculture, the technologies, methods, and tools used in agricultural fields, and the main challenges of multi-objective control in this area. Several conclusions were drawn, among them the need for scalable mobile robots incorporating efficient systems. It should be noted that most cases address a specific problem and are in the prototype phase.

From the SLR conducted, it was identified that research on the following topics is limited:

  • The implementation of digital twins for robot-based production lines.
  • Ingenious software project management while narrowing the impact aspect.
  • Blockchain in agriculture.
  • Context-aware wireless sensor network suitable for precision agriculture.
  • Internet of Things (IoT) for smart precision agriculture and farming in rural areas.
  • Semantic and syntactic interoperability for agricultural open-data platforms in the context of IoT using crop-specific trait.
  • Multi-objective path planner for an agricultural mobile robot in a virtual and real greenhouse environment.
  • Closing loops in agricultural supply chains using multi-objective optimization.
  • New control approaches for trajectory tracking and motion planning of unmanned tracked robots.

These areas require further research to improve the efficiency and effectiveness of precision agriculture. Likewise, the information gathered in this article makes it clear that the emerging fields of research are:

  • Autonomous navigation. Path planning, trajectory tracking, and task planning should be considered in this area.
  • Energy efficiency. Good navigation autonomy is not the only consideration; the design and all components that make up the mobile robot also matter, since its size and cost directly influence the deployment of smart farming.
  • Communication. Given the number of devices involved in smart farming, middleware that improves communication between field devices and the station is important to ensure the reliability and security of information.

The interdependence of these challenges means that a practical solution must be sought, with a suitable compromise between the theoretically optimal path that facilitates information exchange and overall system energy optimization. Moreover, the following factors must be considered: the kinematic and dynamic design of the mobile robot, terrain traversability, the computational complexity of the various algorithms to ensure real-time performance, the use of sensors and low-energy control boards, and the sending and receiving of information. This article also identifies the leading technical and socio-economic obstacles that must be overcome to deploy smart farming successfully. Leaps and bounds are being made in this area, but there is still a long way to go to mitigate the impact of farming on the environment in the coming years. Finally, one of the areas to be investigated is multi-objective heuristic optimization for autonomous navigation, communication, and energy efficiency of mobile robots.

Finally, numerous international political organizations play a crucial role in spreading awareness of the technologies involved in precision agriculture and advocating for their successful implementation. These organizations include the following:

  • The FAO promotes the use of advanced agricultural technologies through programs and projects, providing technical assistance, training, and resource access for farmers.
  • The European Union (EU) supports agricultural modernization and the adoption of innovative technologies in the industry through its Common Agricultural Policy (CAP). Additionally, the EU funds research and development projects in precision agriculture, agricultural robotics, and digital solutions to increase efficiency and sustainability.
  • The United States Department of Agriculture (USDA) emphasizes the adoption of cutting-edge agricultural technologies. The USDA supports the implementation of precision agriculture systems, the integration of sensors and IoT devices into agricultural operations, and the promotion of digitalization in the industry through its funding and grant programs.
  • The Alliance for a Green Revolution in Africa (AGRA) focuses on encouraging the use of contemporary agricultural technologies across the African continent. AGRA works in close partnership with governments, regional organizations, and the private sector to increase access to and availability of improved seeds, fertilizers, and digital farming technologies that boost agricultural productivity and sustainability.
  • The World Economic Forum (WEF) has established initiatives and projects to advance precision agriculture. The WEF brings together many actors (including political leaders, business executives, and members of civil society) through its platform “Shaping the Future of Food Security and Agriculture” to develop innovative and collaborative solutions that foster the digital transformation of agriculture.

These political organizations play a crucial role in the spread of advanced agricultural technologies, and they are actively working to promote the adoption of “agriculture 4.0” on a global scale with the aim of enhancing the efficiency, productivity, and sustainability of the agricultural sector.

Author contributions

JVS supervised this project. PR-M and JS contributed to this project. DY-P wrote the first version of the article under the guidance of the other authors; he likewise addressed the reviewers' observations and shared the revisions with the other authors for review and approval. All authors contributed to the article and approved the submitted version.

Funding Statement

This work was supported by Generalitat Valenciana regional government through project CIAICO/2021/064.

Conflict of interest

The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

Publisher's note

All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article, or claim that may be made by its manufacturer, is not guaranteed or endorsed by the publisher.

  • Abbasi R., Yanes A. R., Villanuera E. M., Ahmad R. (2021). “Real-time implementation of digital twin for robot based production line,” in Proceedings of the Conference on Learning Factories (CLF) (Elsevier), 55–60.
  • Ahmed M. A., Ahsan I., Abbas M. (2016). “Systematic literature review: ingenious software project management while narrowing the impact aspect,” in Proceedings of the International Conference on Research in Adaptive and Convergent Systems, RACS '16 (New York, NY: Association for Computing Machinery), 165–168.
  • Ahmed N., De D., Hussain I. (2018). Internet of things (IoT) for smart precision agriculture and farming in rural areas. IEEE Internet Things J. 5, 4890–4899. 10.1109/JIOT.2018.2879579
  • Araújo S. O., Peres R. S., Barata J., Lidon F., Ramalho J. C. (2021). Characterising the agriculture 4.0 landscape—emerging trends, challenges and opportunities. Agronomy 11, 1–37. 10.3390/agronomy11040667
  • Arindam S., Anjan K. R., Arun-Baran S. (2018). “Grid-based UGV navigation in a dynamic environment using neural network,” in 2018 International Conference on Inventive Research in Computing Applications (ICIRCA), 509–514. 10.1109/ICIRCA.2018.8597389
  • Aydin S., Aydin M. N. (2020). Semantic and syntactic interoperability for agricultural open-data platforms in the context of IoT using crop-specific trait ontologies. Appl. Sci. 10, 1–27. 10.3390/app10134460
  • Azimi-Mahmud M. S., Zainal-Abidin M. S., Mohamed Z., Iida M. (2019). Multi-objective path planner for an agricultural mobile robot in a virtual greenhouse environment. Comput. Electron. Agric. 157, 488–499. 10.1016/j.compag.2019.01.016
  • Bacco M., Barsocchi P., Ferro E., Gotta A., Ruggeri M. (2019). The digitisation of agriculture: a survey of research activities on smart farming. Array 3, 100009. 10.1016/j.array.2019.100009
  • Bacheti V. P., Brandao A. S., Sarcinelli-Filho M. (2021). “Path-following by a UGV-UAV formation based on null space,” in 14th IEEE International Conference on Industry Applications (INDUSCON), 1266–1273. 10.1109/INDUSCON51756.2021.9529472
  • Banasik A., Kanellopoulos A., Claassen G., Bloemhof-Ruwaard J. (2017). Closing loops in agricultural supply chains using multi-objective optimization: a case study of an industrial mushroom supply chain. Int. J. Product. Econ. 183, 409–420. 10.1016/j.ijpe.2016.08.012
  • Banihani S., Hayajneh M., Al-Jarrah A., Mutawe S. (2021). New control approaches for trajectory tracking and motion planning of unmanned tracked robot. Adv. Electric. Electron. Eng. 19, 42–56. 10.15598/aeee.v19i1.4006
  • Bawden O., Kulk J., Russell R., McCool C., English A., Dayoub F., et al. (2017). Robot for weed species plant-specific management. J. Field Robot. 34, 1179–1199. 10.1002/rob.21727
  • Bechar A., Vigneault C. (2016). Agricultural robots for field operations: concepts and components. Biosyst. Eng. 149, 94–111. 10.1016/j.biosystemseng.2016.06.014
  • Belhadi A., Kamble S. S., Mani V., Benkhati I., Touriki F. E. (2021). An ensemble machine learning approach for forecasting credit risk of agricultural SMEs' investments in agriculture 4.0 through supply chain finance. Ann. Operat. Res. 1–29. 10.1007/s10479-021-04366-9
  • Berenstein R., Edan Y. (2018). Automatic adjustable spraying device for site-specific agricultural application. IEEE Trans. Automat. Sci. Eng. 15, 641–650. 10.1109/TASE.2017.2656143
  • Bermeo-Almeida O., Cardenas-Rodriguez M., Samaniego-Cobo T., Ferruzola-Gómez E., Cabezas-Cabezas R., Bazán-Vera W. (2018). “Blockchain in agriculture: a systematic literature review,” in International Conference on Technologies and Innovation (Cham: Springer International Publishing), 44–56.
  • Birrell S., Hughes J., Cai J. Y., Lida F. (2020). A field-tested robotic harvesting system for iceberg lettuce. J. Field Robot. 37, 225–245. 10.1002/rob.21888
  • Botta A., Cavallone P., Baglieri L., Colucci G., Tagliavini L., Quaglia G. (2022). A review of robots, perception, and tasks in precision agriculture. Appl. Mech. 3, 830–854. 10.3390/applmech3030049
  • Brinis N., Saidane L. A. (2016). Context aware wireless sensor network suitable for precision agriculture. Wireless Sensor Netw. 8, 1–12. 10.4236/wsn.2016.81001
  • Changho Y., Hak-Jin K., Chan-Woo J., Minseok G., Won S. L., Jong G. H. (2021). Stereovision-based ridge-furrow detection and tracking for auto-guided cultivator. Comput. Electron. Agric. 191, 106490. 10.1016/j.compag.2021.106490
  • Charatsari C., Lioutas E. D., De Rosa M., Vecchio Y. (2022). Technological innovation and agrifood systems resilience: the potential and perils of three different strategies. Front. Sustain. Food Syst. 6, 872706. 10.3389/fsufs.2022.872706
  • Chen L., Thombre S., Jarvinen K., Lohan E. S., Alén-Savikko A., Leppakoski H., Bhuiyan M. Z. H., et al. (2017). Robustness, security and privacy in location-based services for future IoT: a survey. IEEE Access 5, 8956–8977. 10.1109/ACCESS.2017.2695525
  • Chirala V. S., Venkatachalam S., Smereka J. M., Kassoumeh S. (2021). A multi-objective optimization approach for multi-vehicle path planning problems considering human–robot interactions. J. Auton. Vehicles Syst. 1, 041002. 10.1115/1.4053426
  • Costa E., Martins M. B., Vendruscolo E. P., Silva A. G., Zoz T., Binotti F. F. S., et al. (2021). Greenhouses within the agricultura 4.0 interface. Revista Ciência Agronômica 51, e20207703. 10.5935/1806-6690.20200089
  • Del Cerro J., Cruz-Ulloa C., Barrientos A., De León-Rivas J. (2021). Unmanned aerial vehicles in agriculture: a survey. Agronomy 11, 1–19. 10.3390/agronomy11020203
  • Dhumale N. R., Bhaskar P. C. (2021). “Smart agricultural robot for spraying pesticide with image processing based disease classification technique,” in 2021 International Conference on Emerging Smart Computing and Informatics (ESCI 2021), 604–609. 10.1109/ESCI50559.2021.9396959
  • Dutta A., Roy S., Kreidl O. P., Bölöni L. (2021). Multi-robot information gathering for precision agriculture: current state, scope, and challenges. IEEE Access 9, 161416–161430. 10.1109/ACCESS.2021.3130900
  • Edmonds M., Yigit T., Yi J. (2021). “Resolution-optimal, energy-constrained mission planning for unmanned aerial/ground crop inspections,” in IEEE 17th International Conference on Automation Science and Engineering (CASE), 2235–2240.
  • Ferrag M. A., Shu L., Djallel H., Choo K.-K. R. (2021). Deep learning-based intrusion detection for distributed denial of service attack in agriculture 4.0. Electronics 10, 1257. 10.3390/electronics10111257
  • Gai J., Tang L., Steward B. L. (2020). Automated crop plant detection based on the fusion of color and depth images for robotic weed control. J. Field Robot. 37, 35–52. 10.1002/rob.21897
  • Galán-Martín A., Vaskan P., Antón A., Jiménez-Esteller L., Guillén-Gosálbez G. (2017). Multi-objective optimization of rainfed and irrigated agricultural areas considering production and environmental criteria: a case study of wheat production in Spain. J. Clean. Product. 140, 816–830. 10.1016/j.jclepro.2016.06.099
  • Gawel A., Dubé R., Surmann H., Nieto J. I., Siegwart R. Y., Cadena C. (2017). “3D registration of aerial and ground robots for disaster response: an evaluation of features, descriptors, and transformation estimation,” in IEEE International Symposium on Safety, Security and Rescue Robotics (SSRR), 27–34. 10.1109/SSRR.2017.8088136
  • Gentilini L., Rossi S., Mengoli D., Eusebi A., Marconi L. (2021). “Trajectory planning ROS service for an autonomous agricultural robot,” in 2021 IEEE International Workshop on Metrology for Agriculture and Forestry (MetroAgriFor), 384–389. 10.1109/MetroAgriFor52389.2021.9628620
  • International Federation of Robotics (2021). World Robotics 2021.
  • Jain S., Ramesh D., Bhattacharya D. (2021). A multi-objective algorithm for crop pattern optimization in agriculture. Appl. Soft Comput. 112, 107772. 10.1016/j.asoc.2021.107772
  • Jawad H. M., Nordin R., Gharghan S. K., Jawad A. M., Ismail M. (2017). Energy-efficient wireless sensor networks for precision agriculture: a review. Sensors 17, 1–45. 10.3390/s17081781
  • Kamilaris A., Prenafeta-Boldu F. X. (2018). A review of the use of convolutional neural networks in agriculture. J. Agric. Sci. 156, 312–322. 10.1017/S0021859618000436
  • Khan N., Ray R. L., Sargani G. R., Ihtisham M., Khayyam M., Ismail S. (2021). Current progress and future prospects of agriculture technology: gateway to sustainable agriculture. Sustainability 13, 1–31. 10.3390/su13094883
  • Khan S., Guivant J., Li X. (2022). Design and experimental validation of a robust model predictive control for the optimal trajectory tracking of a small-scale autonomous bulldozer. Robot. Auton. Syst. 147, 103903. 10.1016/j.robot.2021.103903
  • Khanna R., Möller M., Pfeifer J., Liebisch F., Walter A., Siegwart R. (2015). “Beyond point clouds - 3D mapping and field parameter measurements using UAVs,” in IEEE 20th Conference on Emerging Technologies Factory Automation (ETFA), 1–4.
  • Kim J., Son H. I. (2020). A Voronoi diagram-based workspace partition for weak cooperation of multi-robot system in orchard. IEEE Access 8, 20676–20686. 10.1109/ACCESS.2020.2969449
  • Li M., Fu Q., Singh V. P., Liu D., Li T. (2019). Stochastic multi-objective modeling for optimization of water-food-energy nexus of irrigated agriculture. Adv. Water Resour. 127, 209–224. 10.1016/j.advwatres.2019.03.015
  • Li Y., Yu J., Guo X., Sun J. (2020). “Path tracking method of unmanned agricultural vehicle based on compound fuzzy control,” in 2020 IEEE 9th Joint International Information Technology and Artificial Intelligence Conference (ITAIC), 1301–1305. 10.1109/ITAIC49862.2020.9338981
  • Liang X., Zhao S., Chen G., Meng G., Wang Y. (2021). Design and development of ground station for UAV/UGV heterogeneous collaborative system. Ain Shams Eng. J. 12, 3879–3889. 10.1016/j.asej.2021.04.025
  • Lisbinski F. C., Mühl D. D., Oliveira L. d., Coronel D. A. (2020). Perspectivas e desafios da agricultura 4.0 para o setor agrícola. Anais [do] VIII Simpósio da Ciência do Agronegócio, 422–433.
  • Liu J., Anavatti S., Garratt M., Abbass H. A. (2022). Modified continuous ant colony optimisation for multiple unmanned ground vehicle path planning. Expert Syst. Appl. 196, 116605. 10.1016/j.eswa.2022.116605
  • Lohchab V., Kumar M., Suryan G., Gautam V., Das R. K. (2018). “A review of IoT based smart farm monitoring,” in 2018 Second International Conference on Inventive Communication and Computational Technologies (ICICCT), 1620–1625.
  • Luo Y., Li J., Yu C., Xu B., Li Y., Hsu L.-T., et al. (2019). Research on time-correlated errors using Allan variance in a Kalman filter applicable to vector-tracking-based GNSS software-defined receiver for autonomous ground vehicle navigation. Remote Sens. 11, 1–39. 10.3390/rs11091026
  • Mac T. T., Copot C., Tran D. T., De Keyser R. (2017). A hierarchical global path planning approach for mobile robots based on multi-objective particle swarm optimization. Appl. Soft Comput. 59, 68–76. 10.1016/j.asoc.2017.05.012
  • Mammarella M., Comba L., Biglia A., Dabbene F., Gay P. (2020). “Cooperative agricultural operations of aerial and ground unmanned vehicles,” in 2020 IEEE International Workshop on Metrology for Agriculture and Forestry (MetroAgriFor), 224–229.
  • Megeto G. A. S., Silva A. G., Bulgarelli R. F., Bublitz C. F., Valente A. C., Costa D. A. G. (2021). Artificial intelligence applications in the agriculture 4.0. Revista Ciência Agronômica 51, e20207701. 10.5935/1806-6690.20200084
  • Mengoli D., Eusebi A., Rossi S., Tazzari R., Marconi L. (2021). “Robust autonomous row-change maneuvers for agricultural robotic platform,” in 2021 IEEE International Workshop on Metrology for Agriculture and Forestry (MetroAgriFor), 390–395.
  • Miranda J., Ponce P., Molina A., Wright P. (2019). Sensing, smart and sustainable technologies for Agri-Food 4.0. Comput. Industry 108, 21–36. 10.1016/j.compind.2019.02.002
  • Mooney P. (2020). La insostenible agricultura 4.0: digitalización y poder corporativo en la cadena alimentaria.
  • Mostari A., Benabdeli K., Ferah T. (2021). Assessment of the impact of urbanisation on agricultural and forest areas in the coastal zone of Mostaganem (Western Algeria). Ekologia 40, 230–239. 10.2478/eko-2021-0025
  • Nazarahari M., Khanmirza E., Doostie S. (2019). Multi-objective multi-robot path planning in continuous environment using an enhanced genetic algorithm. Expert Syst. Appl. 115, 106–120. 10.1016/j.eswa.2018.08.008
  • Nerlekar V., Mamtura T., Parihar S. (2022). “Implementation of A* algorithm for optimal path planning for mobile robots,” in 2022 4th International Conference on Smart Systems and Inventive Technology (ICSSIT), 382–390. 10.1109/ICSSIT53264.2022.971649
  • Ojeda-Beltran A. (2022). Plataformas tecnologicas en la agricultura 4.0: una mirada al desarrollo en Colombia. Comput. Electron. Sci. Theory Appl. 3, 9–18. 10.17981/cesta.03.01.2022.02
  • Ozdogan B., Gacar A., Aktas H. (2017). Digital agriculture practices in the context of agriculture 4.0. JEFA 4, 184–191. 10.17261/Pressacademia.2017.448
  • Page M. J., McKenzie J. E., Bossuyt P. M., Boutron I., Hoffmann T. C., Mulrow C. D., et al. (2021). Declaración PRISMA 2020: una guía actualizada para la publicación de revisiones sistemáticas. Revista Española de Cardiología 74, 790–799. 10.1016/j.rec.2021.07.010
  • Pak J., Kim J., Park Y., Son H. I. (2022). Field evaluation of path-planning algorithms for autonomous mobile robot in smart farms. IEEE Access 10, 60253–60266. 10.1109/ACCESS.2022.3181131
  • Potena C., Khanna R., Nieto J., Siegwart R., Nardi D., Pretto A. (2019). AgriColMap: aerial-ground collaborative 3D mapping for precision farming. IEEE Robot. Automat. Lett. 4, 1085–1092. 10.1109/LRA.2019.2894468
  • Prakash R., Dheer D. K., Kumar M. (2020). “Path planning of UGV using sampling-based method and PSO in 2D map configuration: a comparative analysis,” in 2020 International Conference on Emerging Frontiers in Electrical and Electronic Technologies (ICEFEET), 1–6.
  • Pylianidis C., Osinga S., Athanasiadis L. N. (2021). Introducing digital twins to agriculture. Comput. Electron. Agric. 184, 105942. 10.1016/j.compag.2020.105942
  • Quaglia G., Cavallone P., Visconte C. (2018). “Agri_q: agriculture UGV for monitoring and drone landing,” in IFToMM Symposium on Mechanism Design for Robotics (Cham: Springer International Publishing), 413–423.
  • Radmanesh M., Sharma B., Kumar M., French D. (2021). PDE solution to UAV/UGV trajectory planning problem by spatio-temporal estimation during wildfires. Chin. J. Aeronaut. 34, 601–616. 10.1016/j.cja.2020.11.002
  • Rahmadian R., Widyartono M. (2020). “Autonomous robotic in agriculture: a review,” in 2020 Third International Conference on Vocational Education and Electrical Engineering (ICVEE) (IEEE), 1–6. 10.1109/ICVEE50212.2020.9243253
  • Raj M., Gupta S., Chamola V., Elhence A., Garg T., Atiquzzaman M., et al. (2021). A survey on the role of internet of things for adopting and promoting agriculture 4.0. J. Netw. Comput. Appl. 187, 103107. 10.1016/j.jnca.2021.103107
  • Rajeshwari T., Vardhini P. H., Reddy K. M. K., Priya K. K., Sreeja K. (2021). “Smart agriculture implementation using IoT and leaf disease detection using logistic regression,” in 2021 4th International Conference on Recent Developments in Control, Automation & Power Engineering (RDCAPE) (IEEE), 619–623. 10.1109/RDCAPE52977.2021.9633608
  • Ratnaparkhi S., Khan S., Arya C., Khapre S., Singh P., Diwakar M., et al. (2020). “Withdrawn: smart agriculture sensors in IoT: a review,” in Materials Today: Proceedings.
  • Romeo L., Petitti A., Colella R., Valecce G., Boccadoro P., Milella A., et al. (2020). “Automated deployment of IoT networks in outdoor scenarios using an unmanned ground vehicle,” in 2020 IEEE International Conference on Industrial Technology (ICIT) (IEEE), 369–374. 10.1109/ICIT45562.2020.9067099
  • Rucco A., Sujit P., Aguiar A. P., De Sousa J. B., Pereira F. L. (2017). Optimal rendezvous trajectory for unmanned aerial-ground vehicles. IEEE Trans. Aerospace Electron. Syst. 54, 834–847. 10.1109/TAES.2017.2767958
  • Ryan S. J., Carlson C. J., Tesla B., Bonds M. H., Ngonghala C. N., Mordecai E. A., et al. (2021). Warming temperatures could expose more than 1.3 billion new people to Zika virus risk by 2050. Glob. Change Biol. 27, 84–93. 10.1111/gcb.15384
  • Selmani A., Oubehar H., Outanoute M., Ed-Dahhak A., Guerbaoui M., Lachhab A., et al. (2019). Agricultural cyber-physical system enabled for remote management of solar-powered precision irrigation. Biosyst. Eng. 177, 18–30. 10.1016/j.biosystemseng.2018.06.007
  • Shafi U., Mumtaz R., García-Nieto J., Hassan S. A., Zaidi S. A. R., Iqbal N. (2019). Precision agriculture techniques and practices: from considerations to applications. Sensors 19, 1–25. 10.3390/s19173796
  • Shamshirband S., Khoshnevisan B., Yousefi M., Bolandnazar E., Anuar N. B., Wahid A., et al. (2015). A multi-objective evolutionary algorithm for energy management of agricultural systems—A case study in Iran. Renew. Sustain. Energy Rev. 44, 457–465. 10.1016/j.rser.2014.12.038
  • Sharma A., Jain A., Gupta P., Chowdary V. (2020). Machine learning applications for precision agriculture: a comprehensive review. IEEE Access 9, 4843–4873. 10.1109/ACCESS.2020.3048415
  • Shorewala S., Ashfaque A., Sidharth R., Verma U. (2021). Weed density and distribution estimation for precision agriculture using semi-supervised learning. IEEE Access 9, 27971–27986. 10.1109/ACCESS.2021.3057912
  • Sinha B. B., Dhanalakshmi R. (2022). Recent advancements and challenges of Internet of Things in smart agriculture: a survey. Future Gen. Comput. Syst. 126, 169–184. 10.1016/j.future.2021.08.006
  • Springmann M., Clark M., Mason-D'croz D., Wiebe K., Bodirsky B. L., Lassaletta L., et al. (2018). Options for keeping the food system within environmental limits. Nature 562, 519–525. 10.1038/s41586-018-0594-0
  • Srinivas A., Sangeetha J. (2021). Smart irrigation and precision farming of paddy field using unmanned ground vehicle and internet of things system. Int. J. Adv. Comput. Sci. Appl. 12, 407–414. 10.14569/IJACSA.2021.0121254
  • Sun Y. P., Liang Y. C. (2022). “Vector field path-following control for a small unmanned ground vehicle with Kalman filter estimation,” in Proceedings of the Institution of Mechanical Engineers, Part B: Journal of Engineering Manufacture 236, 1885–1899. 10.1177/0954405420977347
  • Tazzari R., Mengoli D., Marconi L. (2020). “Design concept and modelling of a tracked UGV for orchard precision agriculture,” in 2020 IEEE International Workshop on Metrology for Agriculture and Forestry (MetroAgriFor), 207–212.
  • Tsiogkas N., Lane D. M. (2018). An evolutionary algorithm for online, resource-constrained, multivehicle sensing mission planning . IEEE Robot. Automat. Lett. 3 , 1199–1206. 10.1109/LRA.2018.2794578 [ CrossRef ] [ Google Scholar ]
  • Villa-Henriksen A., Edwards G. T., Pesonen L. A., Green O., Sórensen C. A. G. (2020). Internet of Things in arable farming: implementation, applications, challenges and potential . Biosyst. Eng. 191 , 60–84. 10.1016/j.biosystemseng.2019.12.013 [ CrossRef ] [ Google Scholar ]
  • Wang T., Huang P., Dong G. (2021). Modeling and path planning for persistent surveillance by unmanned ground vehicle . IEEE Trans. Automat. Sci. Eng. 18 , 1615–1625. 10.1109/TASE.2020.3013288 [ CrossRef ] [ Google Scholar ]
  • Xie J., Chen J. (2020). “Multi-regional coverage path planning for robots with energy constraint,” in 2020 IEEE 16th International Conference on Control & Automation (ICCA) , 1372–1377. 10.1109/ICCA51439.2020.9264472 [ CrossRef ] [ Google Scholar ]
  • Xuan B. B. (2021). Consumer preference for eco-labelled aquaculture products in vietnam . Aquaculture 532 , 736111. 10.1016/j.aquaculture.2020.736111 [ CrossRef ] [ Google Scholar ]
  • Yang J., Ni J., Li Y., Wen J., Chen D. (2022). The intelligent path planning system of agricultural robot via reinforcement learning . Sensors 22 , 1–19. 10.3390/s22124316 [ PMC free article ] [ PubMed ] [ CrossRef ] [ Google Scholar ]
  • Zhai Z., Martínez-Ortega J.-F., Lucas-Martínez N., Rodríguez-Molina J. (2018). A mission planning approach fors precision farming systems based on multi-objective optimization . Sensors 18 , 1–32. 10.3390/s18061795 [ PMC free article ] [ PubMed ] [ CrossRef ] [ Google Scholar ]
  • Zhang K., Yang Y., Fu M., Wang M. (2019). Traversability assessment and trajectory planning of unmanned ground vehicles with suspension systems on rough terrain . Sensors 19 , 1–28. 10.3390/s19204372 [ PMC free article ] [ PubMed ] [ CrossRef ] [ Google Scholar ]
  • Zhao J., Yu Y., Lei J., Liu J. (2023). Multi-objective lower irrigation limit simulation and optimization model for Lycium barbarum based on NSGA-III and ANN . Water 15 , 1–16. 10.3390/w15040783 [ CrossRef ] [ Google Scholar ]
  • Zhao Y., Ding F., Li J., Guo L., Qi W. (2019). The intelligent obstacle sensing and recognizing method based on D–S evidence theory for UGV . Future Gen. Comput. Syst. 97 , 21–29. 10.1016/j.future.2019.02.003 [ CrossRef ] [ Google Scholar ]

The Robot Report

Collaborative Robotics raises $100M in Series B for mysterious mobile manipulator

By Mike Oitzman | April 10, 2024

Collaborative Robotics has been developing a system for trustworthy operations. | Source: Adobe Stock, Photoshopped by The Robot Report

Collaborative Robotics today closed a $100 million Series B round on the road to commercializing its autonomous mobile manipulator. The Santa Clara, Calif.-based company said it is developing robots that can safely and affordably work alongside people in varied manufacturing, supply chain, and healthcare workflows. In many cases, this is the same work that humanoid robots are jockeying for.

Brad Porter, a former distinguished engineer and vice president of robotics at Amazon, founded Collaborative Robotics in 2022. The Cobot team includes robotics and artificial intelligence experts from Amazon, Apple, Meta, Google, Microsoft, NASA, Waymo, and more.

“Getting our first robots in the field earlier this year, coupled with today’s investment, are major milestones as we bring cobots with human-level capability into the industries of today,” stated Porter. “We see a virtuous cycle, where more robots in the field lead to improved AI and a more cost-effective supply chain. This funding will help us accelerate getting more robots into the real world.”

The Robot Report caught up with Porter to learn more about the company and its product since our last conversation in July 2023, when Cobot raised its $30 million Series A.

Nothing to see here

Collaborative Robotics has been secretive about the design of its robot. You won’t find any photos of the cobot on the company’s site or anywhere else on the Web yet.

However, Porter told The Robot Report that the robot is already in trials with several pilot customers, including a global logistics company. He described the machine as a mobile manipulator with roughly the stature of a human. It is not a humanoid, however, nor does it have a six-degree-of-freedom arm or a hand with fingers.

“When talking about general-purpose robots versus special-purpose robots, we know what humanoids look like, but with a new morphology, we want to protect it for a while,” he said. “We’ve been looking at humanoids for a long time, but in manufacturing, secondary material flow is designed around humans and carts. Hospitals, airports, and stadiums are usually designed around people flow. A huge amount of people is still moving boxes, totes, and carts around the world.”

The new cobot’s base is capable of omnidirectional motion, using four wheels in a swerve-drive design, along with a central structure that can acquire, carry, and place totes and boxes around the warehouse. It is just under 6 ft. (about 1.8 m) tall and can carry up to 75 lb. (34 kg), said Porter.
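For context, a swerve drive steers and drives each wheel module independently, which is what lets a base translate in any direction while simultaneously rotating. The sketch below is a minimal, illustrative kinematics calculation under assumed parameters; the module layout and velocities are invented numbers, and this is not Cobot's implementation.

```python
import math

def swerve_module_states(vx, vy, omega, module_positions):
    """Per-module wheel speed and steering angle for a swerve-drive base.

    vx, vy : desired chassis velocity (m/s, robot frame)
    omega  : desired rotation rate (rad/s, counterclockwise positive)
    module_positions : (x, y) offsets of each module from the chassis center (m)
    Returns a list of (speed_m_per_s, steering_angle_rad) tuples.
    """
    states = []
    for px, py in module_positions:
        # Rigid-body velocity at the module location: v + omega x r
        wx = vx - omega * py
        wy = vy + omega * px
        states.append((math.hypot(wx, wy), math.atan2(wy, wx)))
    return states

if __name__ == "__main__":
    # Four modules at the corners of a 0.6 m x 0.6 m base (made-up geometry)
    modules = [(0.3, 0.3), (0.3, -0.3), (-0.3, -0.3), (-0.3, 0.3)]
    # Translate diagonally while turning slowly
    for speed, angle in swerve_module_states(0.5, 0.2, 0.3, modules):
        print(f"speed={speed:.2f} m/s, angle={math.degrees(angle):.1f} deg")
```

A real controller would also renormalize the module speeds whenever any wheel's command exceeds its maximum speed.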

The robot can also engage and move existing carts with payloads weighing up to 1,500 lb. (680 kg) around the warehouse. How the robot engages carts remains part of the mystery. But by automating long-distance moves and using existing cart infrastructure, Porter said he believes that the Collaborative Robotics system is differentiated from both mobile robot platforms and humanoid competitors.

“We looked at use cases for humanoids at Amazon, but you don’t actually want the complexity of a humanoid; you want something that’s stable and could move faster than people,” Porter added. “There are orders of magnitude more mobile robots than humanoids in day-to-day use, and at $300,000 to $600,000 per robot, the capital to build the first 10 humanoids is very high. We want to get robots into the field faster.”

Collaborative Robotics has kept its actual robot out of public view. | Source: Adobe Stock image, Photoshopped by The Robot Report

Robots must be trustworthy

Porter said he believes that robots need to be trustworthy, in addition to being safe. This philosophy is driving the design and user-interface decisions the company has made so far: users should be able to tell what the robot is going to do just by looking at it, unlike some mobile robot designs currently on the market.

In addition to a human-centered design approach, Collaborative Robotics is using off-the-shelf parts to reduce the robot’s bill-of-materials cost and simplify the supply chain as it begins commercialization. It is also taking a “building-block” approach to hardware, and it plans to adapt software and machine learning for navigation and for learning new tasks.

“The robot we’ve designed is 70% off-the-shelf parts, and we can design around existing motors, while every humanoid company is hand-winding its own motors to find advanced actuation capabilities,” Porter noted. “We designed the system digitally, so we don’t have to hand-tweak a bunch of things. By using 3D lidar, we know the state of the art of the technology, and it’s easier to safety-qualify.”

With large language models (LLMs), Porter said he sees the day when someone in a hospital or another facility can just tell a robot to go away. “It’s about user interaction rather than just safety, which is table stakes,” he said. “We think a lot about trustworthiness.”
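As a minimal sketch of that idea, the snippet below shows one common pattern for letting natural language steer a robot without letting model output act directly: the LLM only selects from a whitelist of pre-approved behaviors. Everything here is assumed for illustration; `llm_complete`, the action names, and the prompt format are hypothetical and are not Cobot's API.

```python
import json

# Hypothetical stand-in for whatever chat-completion service is used;
# this function, the action names, and the prompt are illustrative only.
def llm_complete(prompt: str) -> str:
    raise NotImplementedError("wire this to a real LLM endpoint")

ALLOWED_ACTIONS = {"yield_to_human", "pause", "resume", "return_to_dock"}

def command_to_action(utterance: str) -> str:
    """Map a spoken request such as 'please go away' to one whitelisted action."""
    prompt = (
        "Map the user's request to exactly one action from "
        f"{sorted(ALLOWED_ACTIONS)}. Reply as JSON: {{\"action\": \"...\"}}\n"
        f"Request: {utterance!r}"
    )
    try:
        action = json.loads(llm_complete(prompt)).get("action")
    except ValueError:  # malformed JSON from the model
        action = None
    # Free-form model output never drives the robot directly:
    # anything outside the whitelist degrades to a safe default.
    return action if action in ALLOWED_ACTIONS else "pause"
```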


Collaborative Robotics preps for commercialization

General Catalyst led Collaborative Robotics’ Series B round, with participation from Bison Ventures, Lux Capital, and Industry Ventures. Existing investors Sequoia Capital, Khosla Ventures, Mayo Clinic, Neo, 1984 Ventures, MVP Ventures, and Calibrate Ventures also participated.

Since its founding in 2022, Cobot said it has raised more than $140 million. The company plans to grow its headcount from 35, adding production, sales, and support staffers.

In addition, Collaborative Robotics announced that Teresa Carlson will be joining it as an advisor on go-to-market at scale and industry transformation. She has held leadership roles at Amazon Web Services, Microsoft, Splunk, and Flexport.

“I’m super-excited to be working with Teresa,” said Porter. “We’ve kept up since Amazon, and she thinks a lot about digital transformation at a very large scale — federal government and industry. She brings a wealth of knowledge about economics that will elevate the scope of what we’re doing.”

Paul Kwan, managing director at General Catalyst, is joining Alfred Lin from Sequoia on Collaborative Robotics’ board of directors.

“In our view, Brad and Cobot are spearheading the future of human-robot interaction,” said Kwan. “We believe the Cobot team is world-class at building the necessary hardware, software, and institutional trust to achieve their vision.”

Editor’s note:  Eugene Demaitre contributed to this article.

About The Author


Mike Oitzman

Mike Oitzman is Senior Editor of WTWH's Robotics Group and founder of the Mobile Robot Guide. Oitzman is a robotics industry veteran with 25-plus years of experience at various high-tech companies in the roles of marketing, sales and product management. He can be reached at [email protected].



Collaborative Robotics raises $100 mln amid robots funding boom


Reporting by Krystal Hu in New York; Editing by Shounak Dasgupta


Krystal reports on venture capital and startups for Reuters. She covers Silicon Valley and beyond through the lens of money and characters, with a focus on growth-stage startups, tech investments and AI. She has previously covered M&A for Reuters, breaking stories on Trump's SPAC and Elon Musk's Twitter financing. Previously, she reported on Amazon for Yahoo Finance, and her investigation of the company's retail practice was cited by lawmakers in Congress. Krystal started a career in journalism by writing about tech and politics in China. She has a master's degree from New York University, and enjoys a scoop of Matcha ice cream as much as getting a scoop at work.


Recent advances in human–robot interaction: robophobia or synergy

  • Published: 09 April 2024


  • Andrius Dzedzickis (ORCID: 0000-0002-0665-8829)
  • Gediminas Vaičiūnas (ORCID: 0000-0002-5584-9779)
  • Karolina Lapkauskaitė (ORCID: 0000-0002-2649-4024)
  • Darius Viržonis (ORCID: 0000-0002-9416-3526)
  • Vytautas Bučinskas (ORCID: 0000-0002-2458-7243)


Recent developments in robotics and the growing presence of robots in society generate a wide range of feelings, opinions, and reactions. This creates a need to analyze the area, since multiple reported facts indicate that the actual situation differs from public opinion. This paper provides a detailed analysis of the broad field of human–robot interaction (HRI). It delivers an original classification of HRI with respect to human emotion, technical means, human-reaction prediction, and the general cooperation–collaboration field. The analysis was performed by sorting and grouping reference outcomes into separate tables, and it culminates in a big-picture view of the field that outlines its strong points and general tendencies. The paper concludes that HRI still lacks methodology and training techniques for the initial stage of human–robot cooperation. The instrumentation used for HRI is also analyzed; the main bottlenecks are found to be the lack of an intuitive interface and of formulated HRI rules, both of which are suggested as future work.




This project received financial support from the Research Council of Lithuania (LMTLT), grant Nr. P-LLT-21-6, the State Education Development Agency of Latvia, and the Ministry of Science and Technology (MOST) of Taiwan.

Author information

Authors and affiliations.

Department of Mechatronics, Robotics, and Digital Manufacturing, Vilnius Gediminas Technical University, Plytinės Str. 25, 03224, Vilnius, Lithuania

Andrius Dzedzickis, Gediminas Vaičiūnas, Karolina Lapkauskaitė, Darius Viržonis & Vytautas Bučinskas


Contributions

Conceptualization: AD and VB; methodology: VB and DV; validation: AD, GV, and VB; formal analysis: DV and VB; investigation: GV and KL; resources: VB; data curation: GV and KL; writing (original draft preparation): DV, AD, and KL; writing (review and editing): DV, VB, and AD; visualization: AD; supervision: VB; funding acquisition: VB. All authors have read and agreed to the published version of the manuscript.

Corresponding authors

Correspondence to Andrius Dzedzickis or Vytautas Bučinskas.

Ethics declarations

Conflict of interest.

The authors have no conflicts of interest to declare that are relevant to the content of this article.

Additional information

Publisher's note.

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.


About this article

Dzedzickis, A., Vaičiūnas, G., Lapkauskaitė, K. et al. Recent advances in human–robot interaction: robophobia or synergy. J Intell Manuf (2024). https://doi.org/10.1007/s10845-024-02362-x


Received: 28 February 2023

Accepted: 28 February 2024

Published: 09 April 2024

DOI: https://doi.org/10.1007/s10845-024-02362-x


  • Human–robot collaboration
  • Human emotions
  • Instrumental methods
  • Human safety
  • Psychological comfort

COMMENTS

  1. Robotics

    MIT engineers design flexible "skeletons" for soft, muscle-powered robots. New modular, spring-like devices maximize the work of live muscle fibers so they can be harnessed to power biohybrid bots. April 8, 2024. Read full story.

  2. Robotics Research News -- ScienceDaily

Robotic-Assisted Surgery for Gallbladder Cancer as Effective as Traditional Surgery. Mar. 5, 2024 — Approximately 2,000 people die of gallbladder cancer (GBC) annually in the U.S ...

  3. Shaping the future of advanced robotics

    These tasks, straightforward for humans, require a high-level understanding of the world for robots. Today we're announcing a suite of advances in robotics research that bring us a step closer to this future. AutoRT, SARA-RT, and RT-Trajectory build on our historic Robotics Transformers work to help robots make decisions faster, and better ...

  4. Growth in AI and robotics research accelerates

    Growth in AI and robotics research accelerates. By filtering across the disciplinary spectrum, the fields continue to hit new heights. It may not be unusual for burgeoning areas of science ...

  5. Three Trends in Stanford Robotics Research

    Following are three trends in robotics research at Stanford. Read the full report. More adaptive robots: New robotic learning techniques - some involving learning from human demonstration, adaptive learning, optimization, and more - are leading to more useful robotics. Robot capabilities have grown to become more adaptable to dynamically ...

  6. Towards next generation digital twin in robotics: Trends, scopes

The most recent research trends of DT in soft robotics aim towards augmented and extended reality. Apart from establishing effective HRI systems, which will be discussed in the next subsection, virtual and extended reality have exhibited promising results in the construction, development, and functionality of soft robots in recent years [49], [50].

  7. Advancements in Humanoid Robots: A Comprehensive Review and Future

    This paper provides a comprehensive review of the current status, advancements, and future prospects of humanoid robots, highlighting their significance in driving the evolution of next-generation industries. By analyzing various research endeavors and key technologies, encompassing ontology structure, control and decision-making, and perception and interaction, a holistic overview of the ...

  8. Robotics

    Robotics. Having a machine learning agent interact with its environment requires true unsupervised learning, skill acquisition, active learning, exploration and reinforcement, all ingredients of human learning that are still not well understood or exploited through the supervised approaches that dominate deep learning today.

  9. A Review of NASA Human-Robot Interaction in Space

Purpose of Review: This review provides an overview of the motivation, challenges, state-of-the-art, and recent research for human-robot interaction (HRI) in space. For context, we focus on NASA space missions, use cases, and systems (both flight and research). However, the discussion is broadly applicable to all activities in space that require or make use of human-robot teams. Recent Findings: ...

  10. Recent Advances in Perceptive Intelligence for Soft Robotics

    Over the past decade, soft robot research has expanded to diverse fields, including biomedicine, bionics, service robots, human-robot interaction, and artificial intelligence. Much work has been done in modeling the kinematics and dynamics of soft robots, but closed-loop control is still in its early stages due to limited sensory feedback.

  11. Advances and perspectives in collaborative robotics: a ...

    This review paper provides a literature survey of collaborative robots, or cobots, and their use in various industries. Cobots have gained popularity due to their ability to work with humans in a safe manner. The paper covers different aspects of cobots, including their design, control strategies, safety features, and human-robot interaction. The paper starts with a brief history and ...

  12. What is robotics made of? The interdisciplinary politics of robotics

    This work intersects with and builds on several of BRL's stated research themes including Assisted Living, Safe Human-Robot Interaction, Swarm Robotics and Verification and Validation for Safety.

  13. Deep Learning in Robotics: A Review of Recent Research

    Applying deep learning to robotics is an active research area, with at least thirty papers published on the subject from 2014 through the time of this writing. This review presents a summary of this recent research with particular emphasis on the benefits and challenges vis-à-vis robotics. A primer on deep learning is followed by a discussion of 2

  14. Human-centered AI and robotics

If humans and robots work together in such a close way, then humans must have a certain trust in the technology and also an impression of understanding what the robot is doing and why. ... Sheridan TB (2020) A review of recent research in social robotics. Curr Opin Psychol 36:7-12.

  15. Engineers design soft and flexible 'skeletons' for muscle-powered robots

    Engineers designed modular, spring-like devices to maximize the work of live muscle fibers so they can be harnessed to power biohybrid robots. Our muscles are nature's perfect actuators -- devices ...

  16. Contemporary research trends in response robotics

    The multidisciplinary nature of response robotics has brought about a diversified research community with extended expertise. Motivated by the recent accelerated rate of publications in the field, this paper analyzes the research trends, statistics, and implications of the literature from bibliometric standpoints. The aim is to study the global progress of response robotics research and ...

  17. 500 research papers and projects in robotics

    The recent history of robotics is full of fascinating moments that accelerated the rapid technological advances in artificial intelligence, automation, engineering, energy storage, and machine learning. The result transformed the capabilities of robots and their ability to take over tasks once carried out by humans at factories, hospitals, farms, etc.

  18. The International Journal of Robotics Research: Sage Journals

    International Journal of Robotics Research (IJRR) was the first scholarly publication on robotics research; it continues to supply scientists and students in robotics and related fields - artificial intelligence, applied mathematics, computer science, electrical and mechanical engineering - with timely, multidisciplinary material... This journal is peer-reviewed and is a member of the ...

  19. The evolution of robotics: research and application progress ...

    The use of robots to augment human capabilities and assist in work has long been an aspiration. Robotics has been developing since the 1960s when the first industrial robot was introduced. As ...

  20. Robotics

    This literature review presents a comprehensive analysis of the use and potential application scenarios of collaborative robots in the industrial working world, focusing on their impact on human work, safety, and health in the context of Industry 4.0. The aim is to provide a holistic evaluation of the employment of collaborative robots in the current and future working world, which is being ...

  21. (PDF) Recent advancements of robotics in construction

    In the past two decades, robotics in construction (RiC) has become an interdisciplinary research field that integrates a large number of emerging technologies (e.g., additive manufacturing, deep ...

  22. The Role of Robotics in the Future of Work

    On Wednesday, June 22, 2022 (2-3pm ET), NIOSH will host a webinar: The Role of Robotics in the Future of Work (more details below). NIOSH established the Center for Occupational Robotics Research (Robotics Center) to provide scientific leadership to guide the development and use of occupational robots that enhance worker safety, health, and well-being. The Robotics Center defines robots ...

  23. Mobile robotics in smart farming: current trends and applications

    Research methodology: A systematic literature review (SLR) was performed to manage the diverse knowledge and identify research related to the raised topic (Ahmed et al., 2016), especially to investigate the status of mobile robotics in precision agriculture. In particular, we searched for papers on "mobile robotics" with the term "agriculture 4.0" in the title, abstract, or keywords.
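    For illustration only, that stated search scope could be written as a bibliographic-database query (Python string below). The Scopus-style TITLE-ABS-KEY syntax is an assumption for the sketch, not the authors' published search string.

      # Hypothetical reconstruction of the review's search scope as a
      # Scopus-style query; the syntax and field codes are assumptions.
      query = 'TITLE-ABS-KEY("mobile robotics" AND "agriculture 4.0")'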

  24. Quantitative analysis of transfer and incremental learning for image

    Incremental and transfer learning are becoming increasingly popular and important because of their advantages in data-scarce scenarios. This work entails a quantitative analysis of the incremental learning approach along with various transfer learning methods, using the task of image classification.
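    As a rough illustration of the transfer-learning setup such comparisons typically include, the Python sketch below freezes a pretrained backbone and retrains only a new classification head. PyTorch/torchvision are assumed, and the 10-class head, optimizer, and learning rate are placeholders rather than the paper's configuration.

      # Minimal transfer-learning sketch: freeze pretrained features,
      # train a fresh classifier head on the new task.
      import torch
      import torch.nn as nn
      from torchvision import models

      model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
      for p in model.parameters():
          p.requires_grad = False                     # keep pretrained weights fixed
      model.fc = nn.Linear(model.fc.in_features, 10)  # placeholder: 10 target classes

      optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
      criterion = nn.CrossEntropyLoss()

      def train_step(images, labels):
          # One optimization step on a mini-batch from the target dataset.
          optimizer.zero_grad()
          loss = criterion(model(images), labels)
          loss.backward()
          optimizer.step()
          return loss.item()

    Full fine-tuning (unfreezing all layers) and incremental variants differ mainly in which parameters the optimizer is allowed to update.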

  25. Recent Trends in Robotic Patrolling

    Recent Trends. Today's research on robotic patrolling follows approaches that stand in steady continuity with the challenges and methods introduced by the first seminal works. One of the papers that introduced the patrolling problem to the multi-agent community is [49].

  26. Collaborative Robotics raises $100M in Series B for mysterious mobile

    In many cases, this is the same work that humanoid robots are jockeying for. Brad Porter, a former distinguished engineer and vice president of robotics at Amazon, founded Collaborative Robotics in 2022. The Cobot team includes robotics and artificial intelligence experts from Amazon, Apple, Meta, Google, Microsoft, NASA, Waymo, and more.

  27. Polymers

    The flexibility and adaptability of soft robots enable them to perform various tasks in changing environments, such as flower picking, fruit harvesting, in vivo targeted treatment, and information feedback. However, the functions fulfilled differ according to the working environment, driving method, and materials. To further understand the working principle and research ...

  28. Collaborative Robotics raises $100 mln amid robots funding boom

    U.S. startup Collaborative Robotics (Cobot) said on Wednesday it has raised $100 million in a Series B funding round, as investors bet on a new generation of robots incorporated with artificial ...

  29. Recent advances in human-robot interaction: robophobia or synergy

    Concerning Table 7, the most recent research on human-robot motion synchronization indicates that it is more applicable in home appliances and general service robots than in industrial ones. This could be explained by the low popularity of collaborative robots in industry and by strict work safety regulations. Nevertheless ...