Review

A Comprehensive Study of Mobile Robot: History, Developments, Applications, and Future Research Perspectives

Faculty of Computer Science, Electronics and Telecommunications, AGH University of Science and Technology, Aleja Adama Mickiewicza 30, 30-059 Kraków, Poland
*
Author to whom correspondence should be addressed.
Appl. Sci. 2022, 12(14), 6951; https://doi.org/10.3390/app12146951
Submission received: 25 May 2022 / Revised: 3 July 2022 / Accepted: 7 July 2022 / Published: 8 July 2022

Abstract
Intelligent mobile robots that can move independently first appeared in the real world roughly 100 years ago and advanced rapidly around the Second World War with progress in computer science. Since then, mobile robot research has transformed robotics and information engineering. For example, robots were crucial in military applications, especially teleoperation, when they emerged during the Second World War era. Furthermore, after the introduction of artificial intelligence (AI) into robotics, robots became autonomous and more intelligent. Today, mobile robots are deployed in many applications, such as defense, security, freight, pattern recognition, medical treatment, mail delivery, infrastructure inspection and development, and passenger transport, because AI technology has made them far more capable. To study the development of mobile robots, we conducted an extensive literature survey of the last 50 years. In this article, we discuss a full century of mobile robotics research, the major sensors used in robotics, some major applications of mobile robots, and their impact on our lives and on applied engineering.

1. Introduction

The word ‘robot’ was first introduced to the world in 1920 through the play ‘Rossum’s Universal Robots’, written by the Czech author Karel Capek [1]. Robots became intelligent and autonomous after computer software and cybernetics were applied to robotics science [2]. An intelligent, robust robot is a machine that operates in a relatively unknown and unpredictable environment; such a machine is also known as an autonomous mobile robot. A robot is considered intelligent or autonomous if it can navigate freely and has the intelligence to circumvent any obstacle encountered along its path. The most famous definition of a robot was given by the Robot Institute of America, which states: ‘A robot is a multifunctional, reprogrammable manipulator designed to move materials, tools, specialized devices or parts, through motions integrated with variable programming for the performance of different tasks’ [3]. A mobile robot performs a predefined task with the help of artificial intelligence algorithms. It is a software-controlled machine that uses sensors and other technologies to recognize its environment and carry out its predefined task. An autonomous robot usually follows three steps, perception (sense), planning and interpretation (process), and movement (action), to complete a predefined task. Although a mobile robot performs tasks that are usually associated with humans, and a human and a mobile robot share some common functional components when performing a similar task, a mobile robot does not need to look or act like a human [4]. Perhaps the most important and interesting aspect of mobile robots is the spectrum of underlying problems, which range from complex mathematical problems to purely logical ones [5]. In the last five decades, research in the field of robotics science has been carried out extensively. This paper presents the development of mobile robots and their applications in the real world. The first humanoid robot, named “George”, was built in 1950 by Tony Sale in the United Kingdom. George was 6 feet tall and able to walk and talk [6]. It was built for just £15 using metal collected from a crashed Wellington bomber, and Tony Sale was only 19 years old when he built it. Figure 1 illustrates the internal structure of the robot ‘George’.
The history of mobile robots cannot be told without discussing Shakey the robot, developed in the late 1960s. It was the first mobile robot with the capacity to perceive and reason about its surroundings. It was developed by a group of engineers at the Stanford Research Institute (SRI) supervised by Charles Rosen, and the work was funded by the Defense Advanced Research Projects Agency (DARPA) [7]. Figure 2 shows the robot Shakey with Charles Rosen in 1983.
The American mathematician Norbert Wiener made an important contribution to the development of mobile robots by founding the science of cybernetics, which plays an essential role in the development of intelligent robots. Claude Elwood Shannon, an American electrical engineer who developed information theory, created Shannon’s mouse in 1950. It was a mechanical mouse that could move around a labyrinth under the control of an electromechanical relay circuit, and it is considered the first artificial learning device. Albert Ducrocq invented Miso, a cybernetic animal in the form of a remote-controlled vehicle, in 1953. The cybernetic animal Miso was developed in five successive versions, known as M1, M2, M3, M4, and M5 [8]. Figure 3 shows the robot Miso-1 with its developer Albert Ducrocq, captured in the 1950s.
Modern robotics industries are producing increasingly advanced mobile robots thanks to advances and research in electronic devices, software, artificial intelligence (AI), computing, science, and technology. Autonomous robots have become more common in industrial and commercial settings. The most advanced and intelligent humanoid robot, ‘Ameca’, was developed by Engineered Arts and unveiled to the public in January 2022 at the Consumer Electronics Show (CES) [9]. This robot has face detection technology and cameras in its eyes; it can understand language and speak back. Its most interesting feature is that it can mimic human facial expressions and supports human-robot interaction. Figure 4 shows an image of the robot Ameca. This article provides a comprehensive account of the historical development, functionality, and future research perspectives of mobile robots.
AI was established as an academic discipline in 1956. Standard objectives of AI research include knowledge, perception, reasoning, learning, representation, planning, natural language processing, and the ability to move independently and manipulate objects [10]. Researchers and technologists in AI have developed and integrated a broad range of problem-solving techniques, including mathematical and search optimization, artificial neural networks, formal logic, and methods based on probability, economics, and statistics. AI techniques are also applied in other fields of study and research, such as psychology, philosophy, computer science, linguistics, and robotic science. In recent years, the integration of AI into almost every field of study and research has increased rapidly, which is why researchers across disciplines are eager to work with this technology.
The application of AI techniques in robotic science plays a crucial role in developing smart robots. Robotics and AI form a powerful combination for automating work inside and outside industrial settings. In recent decades, AI has achieved a rapidly growing presence in robotic solutions, introducing learning and flexibility into previously rigid applications [11]. Modern combat drones are a prominent example of robots that use AI technology in their operations. AI has two major subcategories, machine learning (ML) and deep learning (DL), which can be implemented in robots to obtain more precise and better outcomes. These technologies are also used in robotics for a wide range of military applications. Figure 5 shows the relationship between AI, ML, and DL.
The word ‘robotics’ was coined by the writer Isaac Asimov in his 1942 short story ‘Runaround’. Asimov described the role of robotics in human society and formulated three laws of robotics that remain meaningful today [12]:
  • First law: A robot should not harm a human being or, through inaction, allow a human being to come to harm.
  • Second law: A robot should follow the tasks specified by a human being except where doing so would conflict with the first law.
  • Third law: A robot should protect its own existence except where doing so would conflict with the first or second law.
In the last 100 years, a great deal of research has been conducted in the field of mobile robotics. Figure 6 illustrates some of the major research milestones and events of the last 100 years in robotics engineering.

2. Literature Survey

A large body of research has been produced in the last five decades. We surveyed the most influential work carried out over this period across almost every area of mobile robotics, based on the IEEE database.

2.1. From 1970 to 1980

Kirk et al. (1970) [13] proposed a dual-mode algorithm for sending an intelligent mobile rover to other planets to explore uncertain terrain; Gaussian probability density functions were used to find the optimal path for the rover in uncertain terrain. Cahn et al. (1975) [14] developed an algorithm for robot navigation and obstacle avoidance using range information about the environment; the system required very little memory on the minicomputers of the time. McGhee et al. (1979) [15] presented an extension of limb coordination for mobile robots to terrain containing regions unfit for weight-bearing; their computer simulation study formulated a heuristic algorithm to avoid such regions.

2.2. From 1981 to 1990

Blidberg (1981) [16] studied the effect of microprocessor type on the intelligent mobile robot and reviewed increasing memory size and distributed processing and their consequences for communication, mission capabilities, navigation, and control. Thorpe (1983) [17] presented a study on the CMU Rover, developed at the Robotics Institute of Carnegie Mellon University at that time, explaining the FIDO vision and navigation system of the rover as well as its working principle. Harmon (1984) [18] provided one person’s perspective on the problem of route planning in unknown natural terrain for autonomous mobile robots, briefly explaining obstacle sensing problems and reasoning about dynamic situations, non-geometric conditions, and geometric conditions. Meystel et al. (1984) [19] presented a computer-aided design approach based on the database relationships provided at the initialization of the design process; this approach was applied to mobile robots and multi-link manipulators. Keirsey et al. (1985) [20] presented work to develop intelligent vehicle control technology, focusing on the implementation of artificial intelligence techniques in an autonomous system, with the main goal of developing a comprehensive capability that could be applied to DoD applications.
Harmon et al. (1987) [21] proposed a ground surveillance robot that could move freely in unknown terrain by fusing information from acoustic and vision ranging sensors into avoidance points and local goals. A strategy for avoiding obstacles with ultrasonic sensors was described by Borenstein et al. (1988) [22], who explained in detail the ultrasonic sensors and the limitations they impose on obstacle avoidance algorithms. Fujimura et al. (1989) [23] presented a method for path planning among moving obstacles; a set of polygonal obstacles was considered, the focus was on creating a navigation path for autonomous robots in a two-dimensional plane, and time was the main factor in the methodology. Luo et al. (1989) [24] presented a methodology for integrating multisensor technology into an intelligent system to enhance its overall capabilities, surveyed the growing body of work on multisensor integration and fusion, and focused on minimizing modeling errors and uncertainty in the fusion and integration process. Griswold et al. (1990) [25] proposed an optimal control approach capable of decelerating and accelerating a mobile robot; it was applied to avoid collisions between the mobile robot and moving objects, collision probabilities were taken as constraints of the objective function, and a collision-free environment was demonstrated through simulation results.

2.3. From 1991 to 2000

Shiller et al. (1991) [26] presented a method for the motion planning of autonomous robots moving on general terrain. The proposed methodology computed the robot’s speeds and the geometric path that minimize motion time, considering terrain topography, surface mobility, obstacles, and vehicle dynamics. The avoidance of dynamic obstacles by an intelligent mobile robot using a hidden Markov model was proposed by Zhu (1991) [27], who developed a hidden Markov model-based algorithm for stochastic motion control and discussed the characteristics that differentiate motion control and visual computation requirements in dynamic domains from those in static domains. Manigel et al. (1992) [28] proposed a method for controlling a mobile robot by computer vision, guiding the robot along roadways based on visual cues; the algorithms used a Kalman filter and geometric coordinate transformations to locate the relative position of the robot on the road and to identify the curvature of the road ahead. Yuh et al. (1993) [29] presented a neural network-based controller for remotely operated vehicles; three learning algorithms for online implementation were described together with a critic equation, and the performance of the learning algorithms was compared and analyzed through computer simulation. Gruver et al. (1994) [30] provided a comprehensive study of developments in autonomous robotics for manufacturing systems, services, and robotic aids for disabled people; the references highlighted advances in robot control, sensor integration, walking machines, mechanical hands, powered prostheses, and manufacturing automation. Guldner et al. (1995) [31] introduced a sliding-mode control strategy to track the gradient of an artificial potential field; the control methodology applies to fully actuated holonomic robotic systems with n degrees of freedom, and the study introduced the equilibrium point placement method for designing harmonic potential-field planners with circular obstacle security zones.
Campion et al. (1996) [32] analyzed the structure of the kinematic and dynamic models of autonomous robots; for each model, mobility, controllability, reducibility, holonomy, feedback equivalence, and motorization configuration were discussed. Hall et al. (1997) [33] described multisensor data fusion, providing a tutorial on data fusion, process models, an introduction to data fusion applications, and an identification of applicable techniques. Divelbiss et al. (1997) [34] presented experimental results on the tracking control of a car-trailer system; the experiment had three steps: generating an offline path with a path-space iterative algorithm, linearizing the kinematic model about the trajectory created from that path, and finally applying a linear time-varying quadratic regulator to track the trajectory. Dias et al. (1998) [35] described a pursuit simulation, providing a solution to the difficulty of pursuing moving objects on a plane with a mobile robot and an active vision system; the work addressed visual smooth pursuit and visual fixation, navigation with compensation for system movements, and visual feedback, and the control and visual processing algorithms were described. Suzumori et al. (1999) [36] developed a micro inspection robot for 1-inch pipes; the robot was 23 mm in diameter and 110 mm long, was equipped with a high-quality micro CCD (charge-coupled device) camera and a dual hand for handling compact objects in pipes, and could travel through both curved and vertical pipe sections; the performance and design of these microrobots were described in detail. Gaspar et al. (2000) [37] proposed a methodology for vision-based navigation of an autonomous robot using a single omnidirectional camera in indoor environments; the mobile robot was controlled to follow a predefined path with high accuracy by locating visual landmarks on the ground plane using bird’s-eye views from the camera.

2.4. From 2001 to 2010

Martinez et al. (2001) [38] compared principal component analysis (PCA) and linear discriminant analysis (LDA). Previous research had suggested that LDA-based algorithms are superior to PCA-based algorithms for object recognition; however, this work showed that this is not always true, and that with small training data sets PCA can outperform LDA. DeSouza et al. (2002) [39] surveyed two decades of developments in vision for mobile robot navigation, covering both indoor and outdoor navigation and discussing navigation using object recognition, optical flow, and appearance-based paradigms. Lee et al. (2003) [40] proposed a novel approach for locating an autonomous robot using images of a moving object; the approach integrated the observed location data with the estimated position from dead reckoning sensors, using images of moving objects captured by a stationary camera, to locate the mobile robot, and it was applied to a moving object image on a wall to demonstrate the reduction of uncertainty in localizing the robot. Human-robot interaction in rescue robotics was discussed by Murphy (2004) [41], who gave a short description of the major human-robot interaction issues in reducing the number of humans required, including mobile robot control, encouraging acceptance within the existing social structure, and performing maintenance with geographically dispersed teams under intermittent communications. Burgard et al. (2005) [42] proposed a technique for coordinated multi-robot exploration that simultaneously considers the cost of reaching a target point and its utility; the results demonstrated that this approach effectively and quickly distributed the mobile robots across the environment and allowed them to complete their mission quickly.
Rentschler et al. (2006) [43] presented a theoretical and practical analysis of wheeled miniature in vivo mobile robots for laparoscopy support. The main objective was to develop a wireless mobile imaging robot that could be placed in the abdominal cavity during surgery, providing the surgeon with views of the surgical environment from multiple angles. Davison et al. (2007) [44] presented a real-time algorithm that could recover the 3D trajectory of a monocular camera moving quickly through a previously unknown scene; the key factor was the online creation of a sparse but persistent map of environmental landmarks within a probabilistic framework. This approach expanded the range of mobile vehicle systems in which SLAM could usefully be applied and opened new areas; the work implemented MonoSLAM for real-time mapping and 3D localization of a humanoid robot. Wood (2008) [45] proposed a technique for developing a small insect-like micro air robot by exploring biological principles; the proposed principles show how to create sufficient thrust to sustain flight at the micro scale, and the approach demonstrated how new manufacturing paradigms enable the creation of the mechanical and aeromechanical subsystems of a micro robotic device capable of Diptera-like wing trajectories. Choi et al. (2009) [46] proposed two decentralized algorithms to address task allocation for coordinating several autonomous robots: the consensus-based auction algorithm (CBAA) and the consensus-based bundle algorithm (CBBA). These algorithms use a market-based decision strategy as the decentralized task selection mechanism and a local communication-based consensus routine as the conflict resolution mechanism to agree on the winning bid values. Glaser et al. (2010) [47] proposed an algorithm designed to run in an embedded failsafe environment with very low computational power, such as an engine control unit, so that it could be implemented in future commercial vehicles; by optimizing trajectories, the algorithm can also improve performance indicators such as compliance with traffic rules, comfort, travel time, and fuel consumption.

2.5. From 2011 to 2021

Song et al. (2011) [48] presented the design and characterization of a home security surveillance robot with automatic docking and recharging capabilities. The system consisted of a docking station and a palm-sized surveillance robot with three wheels in a triangular arrangement; the docking success rate was 90% over 60 docking attempts. Stephan et al. (2012) [49] reviewed the contributions of the IEEE Society on Social Implications of Technology (SSIT) since its founding in 1982; in addition to surveys, they examined key technologies with significant future social impact, with particular attention to security and military technologies. Broggi et al. (2013) [50] presented the Artificial Vision and Intelligent Systems Laboratory (VisLab) vision of future automated vehicles, ranging from sensor selection to extensive testing; VisLab’s design choices were discussed using the BRAiVE intelligent vehicle prototype as an example, and final remarks were given on VisLab’s perspective on future vehicles. Endres et al. (2014) [51] proposed a new mapping system that creates highly accurate 3D maps with an RGB-D camera, requiring no additional sensors or odometry; the impact of various parameters was analyzed, such as the choice of visual features and descriptors and the validation methods.
Dong et al. (2015) [52] investigated formation control design and analysis problems for unmanned aerial vehicle (UAV) swarm systems to achieve time-varying formations. Formation protocols were given for UAV swarm systems to obtain predefined time-varying formations, and the theoretical results were verified on a quadrotor formation platform. Zeng et al. (2016) [53] considered a new mobile relaying technique in which the relay nodes are mounted on UAVs and can therefore move at high speed; the authors studied throughput maximization for mobile relaying systems by optimizing the source/relay transmit power together with the relay trajectory, subject to practical mobility constraints and the information-causality constraint at the relay, and an iterative algorithm was proposed that alternately optimizes the power allocation and the relay trajectory. Rasekhipour et al. (2017) [54] presented a model predictive path planning controller in which the vehicle dynamics and potential functions are part of the objective function; by planning the optimal path with the vehicle dynamics, the path planning system can treat different obstacles and road structures distinctly. The path planning controller was modeled and simulated on a CarSim vehicle model for several complicated test scenarios, and the results showed that the autonomous vehicle could avoid obstacles and observe road regulations with accurate vehicle dynamics. Qin et al. (2018) [55] proposed a robust and versatile monocular visual-inertial state estimator (VINS-Mono). The approach starts with a robust initialization procedure; a nonlinear optimization-based method is then used to obtain highly accurate visual-inertial odometry by fusing pre-integrated inertial measurement unit measurements and feature observations, and a loop detection module in a tightly coupled formulation enables re-localization with minimal computation. The result is a complete, reliable, and versatile system applicable to many applications requiring high-accuracy localization. Nicholson et al. (2019) [56] presented a technique that uses 2D (two-dimensional) object detections from multiple views to simultaneously estimate a quadric 3D (three-dimensional) surface for each object and the camera positions; the paper also developed a sensor model for object detection that addresses partially visible objects and showed how to jointly estimate camera poses and constrained dual quadric parameters in a factor-graph-based SLAM formulation with a general perspective camera.
Yurtsever et al. (2020) [57] discussed common practices and emerging technologies in autonomous driving, describing unsolved problems and technical challenges, and thoroughly reviewing emerging methodologies, present challenges, high-level system architectures, and core functions such as mapping, planning, perception, localization, and human-machine interfaces. Zhu et al. (2021) [58] reviewed deep reinforcement learning (DRL) methods and DRL-based navigation frameworks, systematically compared and analyzed the relationships and differences between four major application scenarios, local obstacle avoidance, indoor navigation, multi-robot navigation, and social navigation, and described the development of DRL-based navigation.

3. Autonomous Navigation of a Mobile Robot

Autonomous mobile robots are equipped with sensors and cameras; if they observe an unexpected obstacle while maneuvering through their environment, such as a crowd of people, a tree, or a fallen box, they use a navigation technique such as collision avoidance to stop, slow down, or divert their path around the object and then continue with their predefined mission [59,60]. With the integration of artificial intelligence into mobile robotics, robots become more advanced and efficient in all applications. Computer vision and image processing play a vital role in the movement of autonomous robots with highly accurate path planning; among the most desirable visual skills in computer vision is determining the range of objects in the predefined environment [61]. Accurate and collision-free path planning is mandatory for mobile robots to perform their assigned tasks with high accuracy [62]. The navigation of mobile robots under challenging real-world conditions is the most important topic for researchers working toward the practical deployment of robots. Figure 7 illustrates the control system of a mobile robot. This system is divided into three parts: reasoning, process, and context. Reasoning is the part where the robot makes decisions and localizes the appropriate path to perform its task. Process is the part where the motion of the robot is controlled using the sensors, covering all dynamic steps performed between two different states, and it includes haptic feedback. Lastly, context is the part of the control system where the state is represented and motion commands are tailored to maintain the context of movement.
In recent years, researchers have been focusing on AI-based control strategies, such as sliding mode control, fuzzy logic-based control, agent-based control, and ANN-based control, for the navigation of mobile robots. One of the major uses of AI in mobile robotics is to develop better control systems for autonomous navigation. A genetic algorithm combined with an artificial neural network is an evolutionary technique for estimating optimal parameters for the gimbal joints of a mobile robot in order to optimize the robotic model [64]. One navigation approach combines a velocity-based obstacle avoidance algorithm with a guidance-based tracking algorithm; a fuzzy logic-based decision-maker can be developed to integrate the two algorithms intelligently [65]. Various methods have been developed for optimizing trajectory planning for robot manipulators. To find an optimal trajectory, position estimation can be done with the 3RUU robot; an objective function is then defined with two terms to be minimized, where the first term is associated with the total operation time and the second with the sum of the squared jerk along the trajectory [66].
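As a hedged illustration of the kind of objective described above (not the exact formulation of [66]), the following sketch evaluates a weighted sum of total operation time and integrated squared jerk for a uniformly sampled joint trajectory; the weights w_time and w_jerk are illustrative assumptions.

```python
import numpy as np

def trajectory_cost(q, dt, w_time=1.0, w_jerk=1e-3):
    """Toy trajectory objective: weighted sum of total operation time and
    the summed squared jerk along the trajectory.

    q  : (N, dof) array of joint positions sampled every dt seconds
    dt : sampling interval in seconds
    """
    n_samples = q.shape[0]
    total_time = (n_samples - 1) * dt           # first term: operation time
    jerk = np.diff(q, n=3, axis=0) / dt**3      # third finite difference
    squared_jerk = np.sum(jerk**2) * dt         # second term: integrated squared jerk
    return w_time * total_time + w_jerk * squared_jerk

# Example: a 2-DOF trajectory sampled at 100 Hz.
t = np.linspace(0.0, 2.0, 201)
q = np.stack([np.sin(t), np.cos(t)], axis=1)
print(trajectory_cost(q, dt=t[1] - t[0]))
```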
Navigation is a basic and vital problem for mobile robots. Deep reinforcement learning plays an important role in mobile robot navigation because of its powerful representation and experience learning abilities. It can be applied to indoor navigation, social navigation, local obstacle avoidance, and multi-robot navigation. Nowadays, solutions for autonomous driving are based on machine learning and artificial vision for perceiving the environment and accelerating decision-making tasks, and the same techniques can be used for robot navigation in indoor environments. The navigation of a mobile robot can be commanded through the robot operating system, which follows the classification output of a convolutional neural network accelerator on a field-programmable gate array integrated with a neuromorphic dynamic vision sensor [67]. Figure 8 shows one of the best examples of a modern mobile robot, known as the Spot dog.
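As a hedged sketch of how a classifier output could drive motion commands over the robot operating system (the /cmd_vel topic name and the label-to-motion mapping are assumptions for illustration, not details from [67]):

```python
import rospy
from geometry_msgs.msg import Twist

# Hypothetical mapping from a classifier label to a simple motion primitive.
LABEL_TO_MOTION = {
    "free":     (0.3, 0.0),   # go straight
    "obstacle": (0.0, 0.5),   # turn in place
    "stop":     (0.0, 0.0),   # halt
}

def publish_motion(label, publisher):
    """Translate a classification result into a velocity command."""
    linear, angular = LABEL_TO_MOTION.get(label, (0.0, 0.0))
    cmd = Twist()
    cmd.linear.x = linear
    cmd.angular.z = angular
    publisher.publish(cmd)

if __name__ == "__main__":
    rospy.init_node("cnn_navigation_sketch")
    pub = rospy.Publisher("/cmd_vel", Twist, queue_size=1)
    rate = rospy.Rate(10)          # 10 Hz control loop
    while not rospy.is_shutdown():
        label = "free"             # placeholder for the CNN accelerator's output
        publish_motion(label, pub)
        rate.sleep()
```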
Autonomous navigation is the most important requirement for any intelligent mobile robotics system. To navigate autonomously in an environment, an autonomous mobile vehicle must perform SPLAM (simultaneous planning, localization, and mapping); that is, it should perform localization, mapping, and planning concurrently to operate successfully. If any of these three activities is missing, a robot cannot operate autonomously in real-life deployment scenarios. Figure 9 illustrates the complete scenario for the autonomous navigation of an intelligent mobile robotics system and describes all the steps and functions a robot uses to perform its task freely without obstruction. If a mobile robot follows these navigation steps with proper guidance, it performs its defined tasks with higher accuracy and in much less time compared with other robots.

3.1. Mapping

Mapping is the procedure of building a map of the predefined environment so that robots can move and perform their tasks with greater accuracy [69]. Mapping enables a mobile robot to be aware of changes in the environment during navigation, either in a pre-existing map or in a map generated online. Environment changes may include the availability of a new path, the unavailability of a predefined path, changes in environmental conditions, or other changes.
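As a minimal sketch of one common map representation, the following occupancy grid uses a log-odds update; the sensor-model constants are assumptions for illustration, not values from the cited work.

```python
import numpy as np

class OccupancyGrid:
    """Minimal log-odds occupancy grid for a planar environment."""

    def __init__(self, width, height, resolution=0.1):
        self.resolution = resolution                  # meters per cell
        self.log_odds = np.zeros((height, width))     # 0 => unknown (p = 0.5)
        self.l_occ, self.l_free = 0.85, -0.4          # assumed sensor model

    def update_cell(self, x, y, occupied):
        """Fuse one observation of the cell containing world point (x, y)."""
        i = int(y / self.resolution)
        j = int(x / self.resolution)
        if 0 <= i < self.log_odds.shape[0] and 0 <= j < self.log_odds.shape[1]:
            self.log_odds[i, j] += self.l_occ if occupied else self.l_free

    def probability(self):
        """Convert log-odds back to occupancy probabilities."""
        return 1.0 - 1.0 / (1.0 + np.exp(self.log_odds))

grid = OccupancyGrid(width=50, height=50)
grid.update_cell(1.2, 0.8, occupied=True)    # e.g., a LiDAR return
grid.update_cell(0.6, 0.4, occupied=False)   # a cell the beam passed through
print(grid.probability()[8, 12], grid.probability()[4, 6])
```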

3.2. Localization

Localization is the procedure of finding where an autonomous robot is situated in the environment; it is very important to know the place from which a robot starts its mission [70]. Various methods have been derived for the localization process. It can be achieved with a GPS sensor; however, low-cost GPS sensors suffer from errors in practice. Localization algorithms have also been built around exteroceptive sensors such as stereo and mono vision sensors and LiDAR (2D and 3D), which give higher accuracy. Bayes filters, such as Kalman filters, extended Kalman filters, unscented Kalman filters, and particle filters, are applied to estimate the localization by fusing information gathered from different sources [71].
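As a minimal hedged sketch of such sensor fusion, the following one-dimensional Kalman filter (with assumed noise variances) fuses odometry predictions with absolute position measurements, for example from a beacon:

```python
import numpy as np

def kalman_1d(z_measurements, u_controls, q=0.05, r=0.2):
    """1D Kalman filter: the state x is the robot position along a corridor.
    u_controls    : odometry displacements between steps (prediction input)
    z_measurements: absolute position measurements (e.g., from a beacon)
    q, r          : assumed process and measurement noise variances
    """
    x, p = 0.0, 1.0                      # initial estimate and its variance
    estimates = []
    for u, z in zip(u_controls, z_measurements):
        # Prediction step (motion model: x <- x + u)
        x, p = x + u, p + q
        # Correction step with the measurement z
        k = p / (p + r)                  # Kalman gain
        x = x + k * (z - x)
        p = (1.0 - k) * p
        estimates.append(x)
    return estimates

rng = np.random.default_rng(0)
true_positions = np.cumsum(np.full(20, 0.5))                # robot moves 0.5 m/step
odometry = 0.5 + rng.normal(0, np.sqrt(0.05), 20)           # noisy odometry
beacon = true_positions + rng.normal(0, np.sqrt(0.2), 20)   # noisy beacon readings
print(kalman_1d(beacon, odometry)[-1], true_positions[-1])
```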

3.3. SLAM (Simultaneous Localization and Mapping)

Mapping and localization are tightly coupled: if one contains an error, the other automatically inherits it. Therefore, these two activities should be estimated jointly, which is what SLAM algorithms do [72]. SLAM is the approach in which mapping and localization are performed at the same time. SLAM can be performed with different sensors such as monocular vision sensors, 2D and 3D LiDAR, and stereo vision sensors [73]. SLAM algorithms are generally based either on Bayes filtering techniques or on graph-based approaches. When a mobile robot revisits an area of its environment, loop closure algorithms detect similarities in the sensor data and use them to correct drift errors in localization and mapping.
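A minimal sketch of the graph-based idea, assuming a robot that drives out and back along a one-dimensional corridor and closes the loop with a single loop-closure constraint; all numbers are illustrative:

```python
import numpy as np

# Four odometry steps that should return the robot to the start, plus one
# loop-closure constraint saying the last pose equals the first.
odometry = [1.02, 0.97, -1.05, -0.90]     # noisy relative motions x[i+1] - x[i]
n_poses = len(odometry) + 1

rows, rhs = [], []

# Prior: anchor the first pose at the origin.
prior = np.zeros(n_poses); prior[0] = 1.0
rows.append(prior); rhs.append(0.0)

# Odometry constraints x[i+1] - x[i] = u_i.
for i, u in enumerate(odometry):
    row = np.zeros(n_poses)
    row[i + 1], row[i] = 1.0, -1.0
    rows.append(row); rhs.append(u)

# Loop closure: the robot recognizes its start location, so x[4] - x[0] = 0.
loop = np.zeros(n_poses); loop[-1], loop[0] = 1.0, -1.0
rows.append(loop); rhs.append(0.0)

A, b = np.vstack(rows), np.array(rhs)
poses, *_ = np.linalg.lstsq(A, b, rcond=None)   # least-squares graph optimization
print("raw odometry poses:", np.concatenate(([0.0], np.cumsum(odometry))))
print("optimized poses:   ", poses)
```

The loop-closure constraint redistributes the accumulated drift across all poses instead of leaving it concentrated at the end of the trajectory, which is the essence of graph-based SLAM.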

3.4. Path Planning

An autonomous robot needs algorithms that provide deliberative route planning over an available map, as well as the capability for reactive obstacle avoidance as the robot approaches its goal [74]. Deliberative path planning is called global path planning, and reactive path planning is called local path planning [75]. The map used for route planning is generally created online and continuously updated with SLAM algorithms. Path planning algorithms are based on efficient state-space search and sampling-based methods [76]. The kinematics of the mobile vehicle are considered during path planning so that the planned path is easier for the mobile robot to execute. Planning a path for a robot is also known as navigation; it is the most important part of autonomous vehicles. During path planning for the navigation of a mobile robot, the designed path must be free of obstacles; thus, collision avoidance is an important task for a mobile robot [77].

3.5. SPLAM (Simultaneous Planning, Localization, and Mapping)

SPLAM is the process in which mapping, localization, and route planning are carried out simultaneously. This is the best way to obtain the highest accuracy during the navigation of an autonomous vehicle [78]. All of these activities, localization, mapping, and route planning, are interdependent; thus, if any one of them contains errors, the remaining ones will too. This is also known as the integrated approach, which encompasses every activity of the autonomous system.
Figure 10 shows nearly every type of sensor used in robotics science, together with their applications, which enable a mobile robot to move freely without external support. Sensors are among the most important parts of any robot and are what allow a robot to become autonomous.

4. Some Major Applications of Mobile Robots

Mobile robots are used in various fields, including personal assistance, military surveillance and security, space and ocean exploration, healthcare, distribution, warehouse applications, and many more. Autonomous robots are used in manufacturing industries, where the robot operates in a highly controlled environment. However, intelligent robots cannot always be preprogrammed to perform predefined tasks, because no one can determine in advance what kind of sensorimotor transformations will be required by the various circumstances the system may encounter [79]. Figure 11 shows some major and important applications of mobile robots in our daily lives. These applications include cleaning services (such as vacuum cleaning), transportation (such as autonomous vehicles and robots used in transportation), search and rescue (such as rovers used in space exploration), security and surveillance (such as border surveillance robots and autonomous military robots), education and research (such as robots for teaching and playing with kids), and customer support provided by companies through 24/7 chatbots. In 2023, NASA (National Aeronautics and Space Administration) plans to send its first rover (mobile robot) to the Moon as part of the Artemis program. The mobile robot, named VIPER (Volatiles Investigating Polar Exploration Rover), will explore and map the resources available on and beneath the lunar surface, especially water ice. This research will help investigate the possibility of a sustained human presence on the Moon [80].

5. Architecture and Components of a Typical Modern Autonomous Mobile Robot

The concept of autonomy is a route toward the Internet of Things vision, ensuring better integration of intelligent systems and services and minimizing human intervention. Autonomous systems consist of agents and coordinated objects in a common environment whose combined behavior achieves a collection of global goals. A mobile robot has four main components: controllers, actuators, sensors, and a power system. Each agent acts as a controller of its given environment and works toward individual goals, so that the collective behavior achieves the global goals of the system. Controllers are usually microprocessors, personal computers, or embedded microcontrollers [81]. Actuators are motors used to move the robot’s arms, legs, or wheels. Sensors are used to sense the environment for obstacle avoidance, dead reckoning, position location, and so on, depending on the requirements of the mobile robot [82]. The power system supplies the energy the robot needs to function and is typically a DC source, i.e., a battery. Autonomous vehicles use an advanced set of sensors together with ML and AI technology to plan paths, interpret and navigate their environment, and operate untethered from wired power. Figure 12 illustrates the complete modern architecture and each component of an advanced intelligent robot. The architecture of an intelligent system works according to the functionality of the mobile robot, which performs three continuous functions: perceiving the properties and nature of the environment, i.e., identifying dynamic conditions; carrying out actions that affect the dynamic conditions in the predefined environment; and reasoning to interpret perceptions, solve problems, and determine the actions needed in that environment [83]. An intelligent robot provides an ideal and challenging field for demonstrating intelligent behavior. Rational behavior is evaluated in terms of autonomy, the effectiveness of the robot, and its ability to solve tasks in multiple, partially known environments [84], which places important requirements on the architecture of the control system. In recent decades, modern architectures for autonomous systems have been widely adopted by automation companies and different types of industries. Modern intelligent mobile robots are now used by several industries for purposes such as remote fleet management, factory automation, networking communications, facilities optimization, remote AI or inference, data analytics, and simulation of multiple tasks.
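A minimal hedged sketch of the perceive-reason-act cycle described above; the interfaces and thresholds are placeholders, not a specific robot's API.

```python
from dataclasses import dataclass

@dataclass
class Observation:
    obstacle_distance: float   # meters to the nearest obstacle, from the sensors

def perceive(sensor_reading: float) -> Observation:
    """Sensing: turn a raw range reading into a structured observation."""
    return Observation(obstacle_distance=sensor_reading)

def reason(obs: Observation) -> str:
    """Reasoning: decide on an action from the perceived state."""
    return "turn" if obs.obstacle_distance < 0.5 else "forward"

def act(action: str) -> None:
    """Acting: send the chosen command to the actuators (stubbed here)."""
    print(f"actuator command: {action}")

# A few passes of the continuous perceive -> reason -> act loop.
for reading in [2.0, 1.1, 0.4, 0.3]:
    act(reason(perceive(reading)))
```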

6. The Mechanism of Mobile Robots

Generally, a mobile robot uses wheels for locomotion on the ground. An edge-AI-based intelligent mobile robot is capable of object and voice recognition, which can be applied, for example, to searching for an object indicated by a user’s voice; object recognition, voice recognition, and motor control can all be implemented on such a robot [86]. Manipulator arms are normally attached to a surface and consist of a serial chain of links, which can change shape, and an end-effector or hand for object manipulation. To show the complete structure of a typical mobile robot, we use a maxi-sized mobile robot as an example; Figure 13 shows the complete mechanism of a typical Karo mobile robot for rescue operations.
Mobile robots can be categorized into different types based on their mode of locomotion: they may be stepping, tracked, wheeled, or ball-balancing robots [88]. Wheeled robots are the most thoroughly investigated, best recognized, and most frequently used class of mobile robots. Wheeled mobile robots are a special type of dynamic system that requires applying optimal torques to the wheels to obtain the required motion of the platform; thus, motion control algorithms for these robots must consider the dynamic properties of the system. This type of mobile robot can achieve high speeds on flat surfaces, but speed optimization on rough terrain still needs the attention of researchers. Wheeled mobile robots are generally categorized by the number of wheels attached, such as two-wheeled, three-wheeled, four-wheeled, five-wheeled, and six-wheeled robots, and so on. In recent years, researchers have also been focusing on omnidirectional wheels, which allow a mobile robot to move in any direction without changing the orientation of its body [89]. The control of wheeled mobile robots is a difficult subject of both practical and theoretical value. Omnidirectional and nonholonomic mobile robots are mostly nonlinear, and nonholonomic constraints in particular have motivated the development of better nonlinear control techniques.
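As a hedged sketch of the kinematics underlying a common wheeled platform, the following unicycle-model integration of a differential-drive robot shows how wheel speeds map to platform motion; the wheel radius, track width, and time step are illustrative assumptions.

```python
import math

def differential_drive_step(x, y, theta, w_left, w_right,
                            wheel_radius=0.05, track_width=0.30, dt=0.02):
    """Integrate one time step of a differential-drive (unicycle) model.

    w_left, w_right : wheel angular velocities in rad/s
    Returns the updated pose (x, y, theta).
    """
    v = wheel_radius * (w_right + w_left) / 2.0               # forward speed
    omega = wheel_radius * (w_right - w_left) / track_width   # yaw rate
    x += v * math.cos(theta) * dt
    y += v * math.sin(theta) * dt
    theta += omega * dt
    return x, y, theta

# Drive with a slight speed difference: the robot traces a gentle arc.
pose = (0.0, 0.0, 0.0)
for _ in range(500):
    pose = differential_drive_step(*pose, w_left=9.5, w_right=10.0)
print("final pose:", pose)
```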
This type of robot can estimate the internal state of the system using proprioceptive sensors and the external state of the environment using exteroceptive sensors. Exteroceptive measurements include data collected by microphones, compasses, cameras, and other sensors; proprioceptive measurements include the position and orientation of the robot, wheel angular velocity, motor current, and motor temperature. Controlling the motion of wheeled mobile robots in an obstacle-free environment can be carried out by tracking a reference trajectory; however, if the environment contains obstacles, motion control for this type of robot cannot rely on reference trajectory tracking alone [90]. The hardware system of a typical wheeled mobile robot includes a sensor module, control module, driver module, and power module for autonomous navigation. The sensor module includes an encoder (for real-time speed estimation), an inertial measurement unit (for integrated posture estimation using a gyroscope, accelerometer, and magnetometer), and LiDAR (for real-time navigation with obstacle avoidance). The control module includes an STM32 embedded board (which may use a proportional-integral-derivative (PID) algorithm to control motor rotation through the motor driver and realize accurate robot movement) and an industrial computer (for all computation required for autonomous navigation). The driver module includes a motor driver (for operating the motors under the embedded system’s control) and the motors themselves (for locomotion). The power module includes the power sources needed to run the robotic system [91]. Figure 14 shows the hardware structure for the navigation of wheeled mobile robots.
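A minimal hedged sketch of the PID idea mentioned for the embedded board, written here as a discrete wheel-speed loop; the gains and the crude first-order motor model are illustrative assumptions, not parameters from [91].

```python
class PID:
    """Discrete PID controller for a wheel speed loop."""

    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, setpoint, measurement):
        error = setpoint - measurement
        self.integral += error * self.dt
        derivative = (error - self.prev_error) / self.dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

# Crude first-order motor model: the speed gradually approaches the command.
pid = PID(kp=0.8, ki=2.0, kd=0.02, dt=0.01)
speed, target = 0.0, 5.0            # rad/s
for step in range(300):
    command = pid.update(target, speed)
    speed += (command - speed) * 0.05   # toy motor dynamics per 10 ms step
print(f"wheel speed after 3 s: {speed:.2f} rad/s (target {target})")
```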

7. Intelligent Control System of Mobile Robots

An intelligent control system is required to plan efficient collaborative action between mobile robots. Intelligent control is a computationally rigorous process of directing a complex system, with an inadequate and incomplete representation and under insufficient specification of how to do so, toward a predefined task in an unknown environment. Significantly, intelligent control integrates planning with online error compensation, which requires learning about both the environment and the system as part of the control procedure. Most significantly, intelligent control generally employs focusing of attention, generalization, and combinatorial search as elementary operators, which leads to a multiscale structure [92]. Algorithms such as A*, rapidly exploring random trees (RRT), and probabilistic localization algorithms are used by autonomous cars, mobile robots, and unmanned aerial vehicles to identify safe, collision-free, least-cost travel paths from an origin to a destination. Selecting an appropriate path planning algorithm for a mobile robot helps ensure safe and effective navigation, and the best algorithm depends on the robot’s geometry as well as the computing constraints, including dynamic/nonholonomic and static/holonomic constrained systems [93].

7.1. The A* Algorithms

This algorithm is computationally simple compared with other path planning algorithms. A* is well suited to automotive applications because it can be adjusted for vehicle size, vehicle kinematics, and steering angles. It is the most popular traversal algorithm for path planning. The original algorithm selects the least expensive path using the function given in Equation (1) [93]:
f(n) = g(n) + h(n)
where g(n) is the actual cost from the start node to node n, and h(n) is the estimated cost of the optimal path from node n to the target node.
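A minimal hedged sketch of A* on a 4-connected grid using the f(n) = g(n) + h(n) cost above; the grid, unit step costs, and Manhattan-distance heuristic are illustrative assumptions.

```python
import heapq

def a_star(grid, start, goal):
    """A* on a 4-connected grid; cells with value 1 are obstacles.
    Uses f(n) = g(n) + h(n) with a Manhattan-distance heuristic h."""
    def h(n):
        return abs(n[0] - goal[0]) + abs(n[1] - goal[1])

    open_set = [(h(start), 0, start, [start])]   # (f, g, node, path)
    visited = set()
    while open_set:
        f, g, node, path = heapq.heappop(open_set)
        if node == goal:
            return path
        if node in visited:
            continue
        visited.add(node)
        r, c = node
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if (0 <= nr < len(grid) and 0 <= nc < len(grid[0])
                    and grid[nr][nc] == 0 and (nr, nc) not in visited):
                g_new = g + 1                    # unit step cost
                heapq.heappush(open_set, (g_new + h((nr, nc)), g_new,
                                          (nr, nc), path + [(nr, nc)]))
    return None                                  # no path found

grid = [[0, 0, 0, 0],
        [1, 1, 0, 1],
        [0, 0, 0, 0],
        [0, 1, 1, 0]]
print(a_star(grid, start=(0, 0), goal=(3, 3)))
```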

7.2. Probabilistic Algorithms

Collaborative localization and motion planning are used for the path planning of multiple robotic systems. One of the major classes of algorithms developed for localization is the family of probabilistic localization algorithms. In this approach, the robot estimates a posterior distribution over the space of its poses, conditioned on the available data. If we denote the robot’s pose at time t by st, the map of the environment by m, and the data accumulated up to time t by d0:t, then the belief over the robot’s pose is written as in Equation (2) [94]:
p(st | d0:t, m)
where d0:t = {o0, a0, o1, a1, …, at−1, ot}; here, ot denotes an observation and at the action of the mobile robot at time t.
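A minimal hedged sketch of how such a posterior can be approximated with a particle filter (Monte Carlo localization) in one dimension; the motion and measurement noise values are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

def particle_filter_step(particles, weights, action, observation,
                         motion_noise=0.1, meas_noise=0.3):
    """One update of p(st | d0:t, m) approximated by weighted particles.
    action      : commanded displacement a(t-1)
    observation : measured absolute position o(t) (e.g., from a range beacon)
    """
    # Motion update: propagate each particle through the noisy motion model.
    particles = particles + action + rng.normal(0, motion_noise, particles.size)
    # Measurement update: reweight by the likelihood of the observation.
    weights = weights * np.exp(-0.5 * ((observation - particles) / meas_noise) ** 2)
    weights /= weights.sum()
    # Resampling: draw a new particle set proportionally to the weights.
    idx = rng.choice(particles.size, size=particles.size, p=weights)
    return particles[idx], np.full(particles.size, 1.0 / particles.size)

particles = rng.uniform(0.0, 10.0, 500)            # initially unknown pose
weights = np.full(500, 1.0 / 500)
true_pose = 2.0
for _ in range(10):                                # robot repeatedly moves +0.5 m
    true_pose += 0.5
    obs = true_pose + rng.normal(0, 0.3)
    particles, weights = particle_filter_step(particles, weights, 0.5, obs)
print("estimated pose:", particles.mean(), "true pose:", true_pose)
```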

7.3. The RRT Algorithms

This algorithm builds a tree by creating random nodes in the free space. It starts from the initial node and ends at the target node. At every iteration, the tree is extended by generating a new random node, so the algorithm is a stochastic procedure. The RRT algorithm plays an important role in finding a feasible path for the navigation of mobile robots [95] and is very useful for obstacle avoidance during navigation. Figure 15 represents the RRT algorithm for the path planning of mobile robots.
In this algorithm, qinit is the initial node, qrand is the random sample, qnear is the existing tree node nearest to that sample, and qnew is the new node created by stepping from qnear toward qrand.
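A minimal hedged sketch of RRT in a 2D square world with a single circular obstacle; the world bounds, step size, and obstacle are illustrative assumptions.

```python
import math
import random

random.seed(0)

def rrt(q_init, q_goal, step=0.5, max_iters=5000, goal_tol=0.5):
    """Grow a tree from q_init toward q_goal, avoiding a circular obstacle."""
    def collision_free(q):
        return math.dist(q, (5.0, 5.0)) > 1.5        # obstacle: circle r=1.5 at (5,5)

    nodes, parent = [q_init], {0: None}
    for _ in range(max_iters):
        q_rand = (random.uniform(0, 10), random.uniform(0, 10))
        # q_near: existing node closest to the random sample.
        i_near = min(range(len(nodes)), key=lambda i: math.dist(nodes[i], q_rand))
        q_near = nodes[i_near]
        # q_new: step from q_near toward q_rand.
        d = math.dist(q_near, q_rand)
        if d == 0:
            continue
        q_new = (q_near[0] + step * (q_rand[0] - q_near[0]) / d,
                 q_near[1] + step * (q_rand[1] - q_near[1]) / d)
        if not collision_free(q_new):
            continue
        parent[len(nodes)] = i_near
        nodes.append(q_new)
        if math.dist(q_new, q_goal) < goal_tol:      # close enough: extract the path
            path, i = [], len(nodes) - 1
            while i is not None:
                path.append(nodes[i]); i = parent[i]
            return path[::-1]
    return None

print(rrt(q_init=(1.0, 1.0), q_goal=(9.0, 9.0)))
```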

8. Some Major Impacts of Mobile Robots and Artificial Intelligence

Intelligent systems have brought about major changes in human life in recent years following the integration of AI technology into robotic systems. The purpose of robotics in our society is to carry out everyday tasks conveniently and easily, reducing human error and saving human effort. Autonomous robots are a special type of robot that can make their own decisions, unlike traditional robots, which depend mainly on a human expert. Most current intelligent systems are still quite simple, mostly concerned with where or how to move, but as systems grow more complex, the use of AI techniques has increased rapidly in the real world [96]. In our daily lives, we already use autonomous robots such as vacuum cleaners, warehouse pallet robots, driver assistance aids, and surgical tools.

8.1. Impacts on the Workplace

Robots can work continuously, day and night, without interruption; they need only energy in the form of electricity. Humans cannot work like robots because they need rest and food. Thus, by using robots, we can enhance production capacity in the workplace, and when AI technology is implemented in robotics, robots become even more effective. The main reasons for introducing intelligent systems into the workplace are to reduce labor costs, improve efficiency, increase production, improve outcomes, and provide many other benefits [97]. A major drawback of deploying intelligent robots in the workplace is that it eliminates jobs and can therefore increase unemployment; the positive counterpart is that it generates new types of technological jobs that help create employment.

8.2. Impacts on the Industries

The deployment of intelligent robots will be an essential factor in productivity growth and can change global supply chains. The development of automation technology is expected to create many automated manufacturing jobs in developed countries [98]. Intelligent systems can reach accuracy rates approaching 100%, whereas human beings inevitably make some errors; therefore, the implementation of intelligent machines increases customer satisfaction and improves business. In recent decades, pharmaceutical companies have adopted autonomous production systems to produce large quantities of medical equipment, medicine, vaccines, and other medical items, because medical products are very sensitive and require high accuracy. Today, the highest-paid jobs in software companies are for machine learning and AI engineers, and robotics technology is in high demand for military surveillance and firefighting. Some major disadvantages of robots in industry are high initial expense, the reshaping of the labor market, the burden on the educational system to train better technologists, and changes to corporate culture at every stage.

8.3. Impacts on Human Lives

Intelligent programs and robots are involved in our everyday lives as assistants. People already use various home appliances based on intelligent technology and AI, such as automatic coffee machines, automatic washing machines, robot vacuum cleaners, and smart TVs. Siri and Alexa are used as personal assistants for searching for information, controlling devices, and placing orders in smart homes. The integration of robotics and AI gives researchers, scientists, and medical practitioners a better way to do their work with greater efficiency, and virtual technology can provide greater precision in health diagnosis [99]. This technology helps protect human lives through the early prediction of life-threatening diseases.

8.4. Impacts on Human-Computer Interaction

The application of intelligent systems to the interaction between users and user assistance systems is a major step toward making communication systems more convenient and reliable. As intelligent robots become more capable, there is a need to understand how human beings interact with and perceive such technology. Various factors are associated with human attitudes toward, acceptance of, trust in, and anxiety about intelligent robots [100]. A better understanding of human attitudes toward robots should inform the future development, research, and deployment of intelligent systems in several domains of individual and public life. Figure 16 shows a complete communication system between the user and the user assistance system. The user side mainly comprises human cognition (perception, attention, cognitive effort, and mental load), human behavior (activity, decisions, and task performance), and human affect (emotion, satisfaction, stress, and physiology). The user assistance system is divided into two parts: a user interface providing interactivity (biofeedback, communication, multichannel access, natural language, and social cues) and a backend interface providing intelligence (machine learning, machine translation, natural language processing, speech recognition, affect recognition, and biosignal processing).
The best way to support human beings in using smart systems is to provide user assistance, which can take several forms, such as guidance systems, robot advisors, virtual assistants, conversational agents, and recommendation agents [101]. User assistance is the capability of an intelligent system to assist users while they perform their tasks. One of the most important applications of AI is AI-enabled digital assistants, which are now becoming available in a wide variety of usage scenarios; with recent developments in AI, they are becoming an important part of our daily lives. AI-based digital assistants provide remarkable opportunities, but they can also become a threat, for instance through data breach incidents that may occur when input data are processed, or through internet fraud [102].

9. Future Research Perspectives

Agriculture businesses, warehouses, military operations, healthcare institutions, and logistics companies are all searching for new ways to improve operational efficiency, increase safety, ensure precision, and improve speed; hence, all of them will require autonomous vehicle support in the future. Taking the long-term perspective, much research work is still needed to create more analytical systems and achieve higher grades of autonomy [103]. Intelligent robots will become more sophisticated in an increasingly intellectual, human-like way. The localization and navigation of an autonomous vehicle in an arbitrary environment remains a problem due to the diversity and complexity of the environments, sensors, and methods involved, and the literature surveyed here points to several directions for future research. Future autonomous robotic systems will make use of diverse robots that work collaboratively to fulfill a mission goal; this potential for heterogeneous collaboration and combination can contribute strongly to a multitude of applications. Integrating ML and DL techniques with mobile robotics can broaden the range of robotics applications and improve their accuracy [104]. Figure 17 presents a brief outline of different fields of study and future research directions for robotics science across almost every field of science and engineering [105].
Technological innovations such as autonomous vehicles, delivery drones, and intelligent factories with robots as co-workers have a significant impact on the way factories operate. The application of mobile robots in underground mining can accelerate mine mapping, assist workers, create virtual models, and improve safety [106]. Many research opportunities, such as obstacle avoidance, motion planning in dynamic situations, and collaborative path planning for multiple robots, remain open to mobile robotics researchers.
Intelligent robots are helping to drive the machine economy, and the Internet of Things will become more fully realized. Deeper industrial applications will emerge with the growth of AI, 5G, cloud-native technologies, and automation [107]. The market for autonomous mobile robots is growing very fast in the service sector and in software and robotics applications. The most important growth factor is the rising demand for, and adoption of, automation products across various end uses, which has led several companies operating in the market to increase their research and development efforts. Market growth is also supported by government encouragement in several countries: central governments provide grants and funds for the creation of advanced mobile robots for industrial and defense applications. The biggest challenge for the development of this technology, however, is the shortage of experts and research capacity. In research institutes around the world, technologists are busy building the next generation of autonomous systems.
Several issues related to the optimization and safety of mobile robotic systems still need the attention of researchers. It is difficult to achieve optimal motion control, minimize path-search time, and improve path-search accuracy for mobile robots. Path planning consists of finding an appropriate path from an initial position to a final position. Reinforcement learning algorithms can play a vital role in path optimization for single- and multi-agent environments and in solving more realistic and complex problems [108]. Two classes of problems are particularly popular among researchers: single- and multi-objective path planning. Single-objective path planning requires more efficient algorithms for real-time, large-scale problems, whereas for multi-objective path planning in dynamic environments, integrating novel evolutionary algorithms with traditional algorithms may be a better solution [109]. Hybrid algorithms are therefore likely to be a more effective approach to these safety and optimization problems.
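To make the reinforcement learning idea concrete, the following is a minimal sketch, not the specific method of [108], of how tabular Q-learning can learn a collision-free path on a small discrete grid. The 5 × 5 grid size, obstacle positions, reward values, and hyperparameters are illustrative assumptions only.
```python
# Minimal Q-learning sketch for single-agent grid path planning.
# Hypothetical setup: a 5x5 grid with a few obstacle cells and one goal cell.
import random

ROWS, COLS = 5, 5
OBSTACLES = {(1, 1), (2, 3), (3, 1)}          # assumed obstacle cells
START, GOAL = (0, 0), (4, 4)
ACTIONS = [(-1, 0), (1, 0), (0, -1), (0, 1)]  # up, down, left, right

ALPHA, GAMMA, EPSILON, EPISODES = 0.1, 0.9, 0.2, 2000
Q = {((r, c), a): 0.0 for r in range(ROWS) for c in range(COLS)
     for a in range(len(ACTIONS))}

def step(state, action):
    """Apply an action; invalid moves keep the robot in place and are penalized."""
    r, c = state[0] + ACTIONS[action][0], state[1] + ACTIONS[action][1]
    if not (0 <= r < ROWS and 0 <= c < COLS) or (r, c) in OBSTACLES:
        return state, -5.0, False             # boundary/collision penalty
    if (r, c) == GOAL:
        return (r, c), 100.0, True            # goal reward
    return (r, c), -1.0, False                # small step cost favors short paths

for _ in range(EPISODES):
    state = START
    for _ in range(200):                      # cap episode length
        if random.random() < EPSILON:         # epsilon-greedy exploration
            action = random.randrange(len(ACTIONS))
        else:
            action = max(range(len(ACTIONS)), key=lambda a: Q[(state, a)])
        nxt, reward, done = step(state, action)
        best_next = max(Q[(nxt, a)] for a in range(len(ACTIONS)))
        Q[(state, action)] += ALPHA * (reward + GAMMA * best_next - Q[(state, action)])
        state = nxt
        if done:
            break

# Extract the greedy path after training.
path, state = [START], START
while state != GOAL and len(path) < ROWS * COLS:
    action = max(range(len(ACTIONS)), key=lambda a: Q[(state, a)])
    state, _, _ = step(state, action)
    path.append(state)
print(path)
```
In practice, such tabular methods only scale to small discrete workspaces; the deep reinforcement learning approaches surveyed in [58] replace the Q-table with a neural network so that continuous sensor inputs and larger maps can be handled.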

10. Conclusions

The intelligent mobile robot is an innovative autonomous vehicle system developed in response to the unprecedented growth in demand for mobile robot systems in fields such as industry, medicine, firefighting, military operations, and many more. With the deployment of intelligent mobile robots, companies are rapidly expanding their businesses and diversifying their applications and flexibility. These novel technologies have eased human lifestyles by reducing exposure to environmental hazards and dangers to a minimum. Interest in robotics engineering research has grown rapidly in recent years. In this paper, we presented a wide-ranging overview and research outlook on intelligent robots over the last 100 years, focusing on the research developments, working principles, and applications of autonomous vehicles. To the best of our knowledge, this is the first comprehensive survey of intelligent mobile robotics covering a century of research, from 1920, when the word 'robot' first entered the world, to 2021, when robots have become more intelligent and reliable. It is therefore an excellent starting point for new researchers and a useful reference for established researchers, and we believe that more studies of this type will be carried out and published in the coming years. With its wide-ranging literature survey and theoretical treatment of mobile robots, this article can help shape future research directions in the field.

Author Contributions

R.R. completed this research work under the supervision of A.K. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Acknowledgments

The authors wish to express their thanks for financial support from subvention No. 16.16.230.434 paid for by the AGH University of Science and Technology, Krakow, Poland.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Britannica. The Editors of Encyclopedia. “R.U.R.”. Encyclopedia Britannica, 20 November 2014. Available online: https://www.britannica.com/topic/RUR (accessed on 17 June 2022).
  2. Mennie, D. Systems and Cybernetics: Tools of the Discipline are progressing from the inspirational to the practical. IEEE Spectr. 1974, 11, 85. [Google Scholar] [CrossRef]
  3. Considine, D.M.; Considine, G.D. Robot Technology Fundamentals. In Standard Handbook of Industrial Automation—Chapman and Hall Advanced Industrial Technology Series; Considine, D.M., Considine, G.D., Eds.; Springer: Boston, MA, USA, 1986; pp. 262–320. [Google Scholar] [CrossRef]
  4. Nitzan, D. Development of Intelligent Robots: Achievements and Issues. IEEE J. Robot. Autom. 1985, 1, 3–13. [Google Scholar] [CrossRef]
  5. Oommen, B.; Iyengar, S.; Rao, N.; Kashyap, R. Robot navigation in unknown terrains using learned visibility graphs. Part I: The disjoint convex obstacle case. IEEE J. Robot. Autom. 1987, 3, 672–681. [Google Scholar] [CrossRef]
  6. Daily Mail Reporter. Built to Last: Robot Made from Crashed Bomber Comes Back to Life after 45 Years Stored in His Inventor’s Garage. 2010. Available online: https://www.dailymail.co.uk/sciencetech/article-1331949/George-foot-robot-comes-life-45-years-stored-inventors-garage (accessed on 17 January 2022).
  7. History Computer Staff. Shakey the Robot Explained: Everything You Need to Know. January 2022. Available online: https://history-computer.com/shakey-the-robot (accessed on 11 May 2022).
  8. Philidog, Miso and More Vehicles. Available online: http://www.joostrekveld.net/?p=321 (accessed on 11 May 2022).
  9. British Broadcasting Corporation (BBC). CES 2022: The Humanoid Robot, Ameca, Revealed at CES Show. Available online: https://www.bbc.co.uk/newsround/59909789 (accessed on 19 January 2022).
  10. Russell, S.; Norvig, P. Artificial Intelligence: A Modern Approach, 3rd ed.; Pearson: London, UK, 2009; p. 1. ISBN 0-13-461099-7. Available online: https://zoo.cs.yale.edu/classes/cs470/materials/aima2010.pdf (accessed on 19 January 2022).
  11. Robotics Online Marketing Team. How Artificial Intelligence Is Used in Today’s Robot. Association for Advancing Automation. 2018. Available online: https://www.automate.org/blogs/how-artificial-intelligence-is-used-in-today-s-robots (accessed on 19 January 2022).
  12. Robotics: A Brief History. Available online: https://cs.stanford.edu/people/eroberts/courses/soco/projects/1998-99/robotics/history.html (accessed on 25 January 2022).
  13. Kirk, D.E.; Lim, L.Y. A Dual-Mode Routing Algorithm for an Autonomous Roving Vehicle. IEEE Trans. Aerosp. Electron. Syst. 1970, 6, 290–294. [Google Scholar] [CrossRef] [Green Version]
  14. Cahn, D.F.; Phillips, S.R. ROBNOV: A Range-Based Robot Navigation and Obstacle Avoidance Algorithm. IEEE Trans. Syst. Man Cybern. 1975, 5, 544–551. [Google Scholar] [CrossRef]
  15. McGhee, R.B.; Iswandhi, G.I. Adaptive Locomotion of a Multilegged Robot over Rough Terrain. IEEE Trans. Syst. Man Cybern. 1979, 9, 176–182. [Google Scholar] [CrossRef]
  16. Blidberg, D.R. Computer Systems for Autonomous Vehicles. In Proceedings of the OCEANS-81, IEEE Conference, Boston, MA, USA, 16–18 September 1981; pp. 83–87. [Google Scholar] [CrossRef]
  17. Thorpe, C. The CMU Rover and the FIDO Vision Navigation System. In Proceedings of the 1983 3rd International Symposium on Unmanned Untethered Submersible Technology, IEEE Conference, Durham, NH, USA, 25 June 1983; pp. 103–115. [Google Scholar] [CrossRef]
  18. Harmon, S.Y. Comments on automated route planning in unknown natural terrain. In Proceedings of the IEEE International Conference on Robotics and Automation, Atlanta, GA, USA, 13–15 March 1984; pp. 571–573. [Google Scholar] [CrossRef] [Green Version]
  19. Meystel, A.; Thomas, M. Computer-aided conceptual design in robotics. In Proceedings of the IEEE International Conference on Robotics and Automation, Atlanta, GA, USA, 13–15 March 1984; pp. 220–229. [Google Scholar] [CrossRef]
  20. Keirsey, D.; Mitchell, J.; Bullock, B.; Nussmeir, T.; Tseng, D.Y. Autonomous Vehicle Control Using AI Techniques. IEEE Trans. Softw. Eng. 1985, 11, 986–992. [Google Scholar] [CrossRef]
  21. Harmon, S.Y. The ground surveillance robot (GSR): An autonomous vehicle designed to transit unknown terrain. IEEE J. Robot. Autom. 1987, 3, 266–279. [Google Scholar] [CrossRef] [Green Version]
  22. Borenstein, J.; Koren, Y. Obstacle Avoidance with Ultrasonic Sensors. IEEE J. Robot. Autom. 1988, 4, 213–218. [Google Scholar] [CrossRef]
  23. Fujimura, K.; Samet, H. A hierarchical strategy for path planning among moving obstacles (mobile robots). IEEE Trans. Robot. Autom. 1989, 5, 61–69. [Google Scholar] [CrossRef]
  24. Luo, R.C.; Kay, M.G. Multisensor integration and fusion in intelligent systems. IEEE Trans. Syst. Man Cybern. 1989, 19, 901–931. [Google Scholar] [CrossRef] [Green Version]
  25. Griswold, N.C.; Eem, J. Control for mobile robots in the presence of moving objects. IEEE Trans. Robot. Autom. 1990, 6, 263–268. [Google Scholar] [CrossRef]
  26. Shiller, Z.; Gwo, Y.-R. Dynamic motion planning of autonomous vehicles. IEEE Trans. Robot. Autom. 1991, 7, 241–249. [Google Scholar] [CrossRef]
  27. Zhu, Q. Hidden Markov Model for dynamic obstacle avoidance of mobile robot navigation. IEEE Trans. Robot. Autom. 1991, 7, 390–397. [Google Scholar] [CrossRef]
  28. Manigel, J.; Leonhard, W. Vehicle control by computer vision. IEEE Trans. Ind. Electron. 1992, 39, 181–188. [Google Scholar] [CrossRef]
  29. Yuh, J.; Lakshmi, R. An Intelligent Control System for Remotely operated Vehicles. IEEE J. Ocean. Eng. 1993, 18, 55–62. [Google Scholar] [CrossRef]
  30. Gruver, W.A. Intelligent Robotics in Manufacturing, Service, and Rehabilitation: An Overview. IEEE Trans. Ind. Electron. 1994, 41, 4–11. [Google Scholar] [CrossRef]
  31. Guldner, J.; Utkin, V.I. Sliding Mode Control for Gradient Tracking and Robot Navigation using Artificial Potential fields. IEEE Trans. Robot. Autom. 1995, 11, 247–254. [Google Scholar] [CrossRef]
  32. Campion, G.; Bastin, G.; Novel, B.D. Structural Properties and Classification of Kinematic and Dynamic Models of Wheeled Mobile Robots. IEEE Trans. Robot. Autom. 1996, 12, 47–62. [Google Scholar] [CrossRef]
  33. Hall, D.L.; Llinas, J. An Introduction to Multisensor Data Fusion. Proc. IEEE 1997, 85, 6–23. [Google Scholar] [CrossRef] [Green Version]
  34. Divelbiss, A.W.; Wen, J.T. Trajectory Tracking Control of a Car-trailer System. IEEE Trans. Control. Syst. Technol. 1997, 5, 269–278. [Google Scholar] [CrossRef] [Green Version]
  35. Dias, J.; Paredes, C.; Fonseca, I.; Araujo, H.; Batista, J.; Almeida, A.T. Simulating Pursuit with Machine Experiments with Robots and Artificial Vision. IEEE Trans. Robot. Autom. 1998, 14, 1–18. [Google Scholar] [CrossRef] [Green Version]
  36. Suzumori, K.; Miyagawa, T.; Kimura, M.; Hasegawa, Y. Micro Inspection Robot for 1-in pipes. IEEE/ASME Trans. Mechatron. 1999, 4, 286–292. [Google Scholar] [CrossRef]
  37. Gaspar, J.; Winters, N.; Santos-Victor, J. Vision-based Navigation and Environmental Representations with an Omnidirectional Camera. IEEE Trans. Robot. Autom. 2000, 16, 890–898. [Google Scholar] [CrossRef]
  38. Martinez, A.M.; Kak, A.C. PCA versus LDA. IEEE Trans. Pattern Anal. Mach. Intell. 2001, 23, 228–233. [Google Scholar] [CrossRef] [Green Version]
  39. DeSouza, G.N.; Kak, A.C. Vision for Mobile Robot Navigation: A Survey. IEEE Trans. Pattern Anal. Mach. Intell. 2002, 24, 237–267. [Google Scholar] [CrossRef] [Green Version]
  40. Lee, J.M.; Son, K.; Lee, M.C.; Choi, J.W.; Han, S.H.; Lee, M.H. Localization of a Mobile Robot Using the Image of a Moving Object. IEEE Trans. Ind. Electron. 2003, 50, 612–619. [Google Scholar] [CrossRef]
  41. Murphy, R.R. Human-robot Interaction in Rescue Robotics. IEEE Trans. Syst. Man Cybern. C Appl. Rev. 2004, 34, 138–153. [Google Scholar] [CrossRef]
  42. Burgard, W.; Moors, M.; Stachniss, C.; Schneider, F.E. Coordinated Multi-Robot Exploration. IEEE Trans. Robot. 2005, 21, 376–386. [Google Scholar] [CrossRef] [Green Version]
  43. Rentschler, M.E.; Dumpert, J.; Platt, S.R.; Lagnemma, K.; Oleynikov, D.; Farritor, S.M. Modeling, Analysis, and Experimental Study of In Vivo Wheeled Robotic Mobility. IEEE Trans. Robot. 2006, 22, 308–321. [Google Scholar] [CrossRef] [Green Version]
  44. Davison, A.J.; Reid, I.D.; Molton, N.D.; Stasse, O. MonoSLAM: Real-Time Single Camera SLAM. IEEE Trans. Pattern Anal. Mach. Intell. 2007, 29, 1052–1067. [Google Scholar] [CrossRef] [Green Version]
  45. Wood, R.J. The First Takeoff of a Biologically Inspired At-Scale Robotic Insect. IEEE Trans. Robot. 2008, 24, 341–347. [Google Scholar] [CrossRef]
  46. Choi, H.L.; Brunet, L.; How, J.P. Consensus-Based Decentralized Auctions for Robust Task Allocation. IEEE Trans. Robot. 2009, 25, 912–926. [Google Scholar] [CrossRef] [Green Version]
  47. Glaser, S.; Vanholme, B.; Mammar, S.; Gruyer, D.; Nouvelière, L. Maneuver-Based Trajectory Planning for Highly Autonomous Vehicles on Real Road with Traffic and Driver Interaction. IEEE Trans. Intell. Transp. Syst. 2010, 11, 589–606. [Google Scholar] [CrossRef]
  48. Song, G.; Wang, H.; Zhang, J.; Meng, T. Automatic docking system for recharging home surveillance robots. IEEE Trans. Consum. Electron. 2011, 57, 428–435. [Google Scholar] [CrossRef]
  49. Stephan, K.D.; Michael, K.; Michael, M.G.; Jacob, L.; Anesta, E.P. Social Implications of Technology: The Past, the Present, and the Future. Proc. IEEE 2012, 100, 1752–1781. [Google Scholar] [CrossRef]
  50. Broggi, A.; Buzzoni, M.; Debattisti, S.; Grisleri, P.; Laghi, M.C.; Medici, P.; Versari, P. Extensive Tests of Autonomous Driving Technologies. IEEE Trans. Intell. Transp. Syst. 2013, 14, 1403–1415. [Google Scholar] [CrossRef]
  51. Endres, F.; Hess, J.; Sturm, J.; Cremers, D.; Burgard, W. 3-D Mapping With an RGB-D Camera. IEEE Trans. Robot. 2014, 30, 177–187. [Google Scholar] [CrossRef]
  52. Dong, X.; Yu, B.; Shi, Z.; Zhong, Y. Time-Varying Formation Control for Unmanned Aerial Vehicles: Theories and Applications. IEEE Trans. Control Syst. Technol. 2015, 23, 340–348. [Google Scholar] [CrossRef]
  53. Zeng, Y.; Zhang, R.; Lim, T.J. Throughput Maximization for UAV-Enabled Mobile Relaying Systems. IEEE Trans. Commun. 2016, 64, 4983–4996. [Google Scholar] [CrossRef]
  54. Rasekhipour, Y.; Khajepour, A.; Chen, S.; Litkouhi, B. A Potential Field-Based Model Predictive Path-Planning Controller for Autonomous Road Vehicles. IEEE Trans. Intell. Transp. Syst. 2017, 18, 1255–1267. [Google Scholar] [CrossRef]
  55. Qin, T.; Li, P.; Shen, S. VINS-Mono: A Robust and Versatile Monocular Visual-Inertial State Estimator. IEEE Trans. Robot. 2018, 34, 1004–1020. [Google Scholar] [CrossRef] [Green Version]
  56. Nicholson, L.; Milford, M.; Sünderhauf, N. QuadricSLAM: Dual Quadrics from Object Detections as Landmarks in Object-Oriented SLAM. IEEE Robot. Autom. Lett. 2019, 4, 1–8. [Google Scholar] [CrossRef] [Green Version]
  57. Yurtsever, E.; Lambert, J.; Carballo, A.; Takeda, K. A Survey of Autonomous Driving: Common Practices and Emerging Technologies. IEEE Access 2020, 8, 58443–58469. [Google Scholar] [CrossRef]
  58. Zhu, K.; Zhang, T. Deep reinforcement learning based mobile robot navigation: A review. Tsinghua Sci. Technol. 2021, 26, 674–691. [Google Scholar] [CrossRef]
  59. Ohya, I.; Kosaka, A.; Kak, A. Vision-based Navigation by a Mobile Robot with Obstacle Avoidance using Single-camera Vision and Ultrasonic Sensing. IEEE Trans. Robot. Autom. 1998, 14, 969–978. [Google Scholar] [CrossRef] [Green Version]
  60. Burman, S. Intelligent Mobile Robotics. Technology Focus—A Bimonthly S&T Magazine of DRDO, November–December 2016; 2–16. [Google Scholar]
  61. Moigne, J.J.L.; Waxman, A.M. Structured Light Patterns for Robot mobility. IEEE J. Robot. Autom. 1988, 4, 541–548. [Google Scholar] [CrossRef]
  62. Alatise, M.B.; Hancke, G.P. A Review on Challenges of Autonomous Mobile Robot and Sensor Fusion Methods. IEEE Access 2020, 8, 39830–39846. [Google Scholar] [CrossRef]
  63. Seminara, L.; Gastaldo, P.; Watt, S.J.; Valyear, K.F.; Zuher, F.; Mastrogiovanni, F. Active Haptic Perception in Robots: A Review. Front. Neurorobotics 2019, 13, 53. [Google Scholar] [CrossRef]
  64. Azizi, A. Applications of Artificial Intelligence Techniques to Enhance Sustainability of Industry 4.0: Design of an Artificial Neural Network Model as Dynamic Behavior Optimizer of Robotic Arms. Complexity 2020, 8564140. [Google Scholar] [CrossRef]
  65. Azizi, A.; Farshid, E.; Kambiz, G.O.; Mosatafa, C. Intelligent Mobile Robot Navigation in an Uncertain Dynamic Environment. Appl. Mech. Mater. 2013, 367, 388–392. [Google Scholar] [CrossRef]
  66. Rashidnejhad, S.; Asifa, A.H.; Osgouie, K.G.; Meghdari, A.; Azizi, A. Optimal Trajectory Planning for Parallel Robots Considering Time-Jerk. Appl. Mech. Mater. 2013, 390, 471–477. [Google Scholar] [CrossRef]
  67. Piñero-Fuentes, E.; Canas-Moreno, S.; Rios-Navarro, A.; Delbruck, T.; Linares-Barranco, A. Autonomous Driving of a Rover-Like Robot Using Neuromorphic Computing. In Advances in Computational Intelligence, IWANN 2021, Lecture Notes in Computer Science; Rojas, I., Joya, G., Català, A., Eds.; Springer: Cham, Switzerland, 2021; Volume 12862. [Google Scholar] [CrossRef]
  68. Spot by Boston Dynamics. Available online: https://www.bostondynamics.com/products/spot (accessed on 17 June 2022).
  69. Arleo, A.; Millan, J.D.R.; Floreano, D. Efficient Learning of Variable-resolution Cognitive Maps for Autonomous Indoor Navigation. IEEE Trans. Robot. Autom. 1999, 15, 990–1000. [Google Scholar] [CrossRef] [Green Version]
  70. Betke, M.; Gurvits, L. Mobile Robot Localization Using Landmark. IEEE Trans. Robot. Autom. 1997, 13, 251–263. [Google Scholar] [CrossRef] [Green Version]
  71. Chen, S.Y. Kalman Filter for Robot Vision: A Survey. IEEE Trans. Ind. Electron. 2012, 59, 4409–4420. [Google Scholar] [CrossRef]
  72. Dissanayake, M.W.M.G.; Newman, P.; Clark, S.; Durrant-Whyte, H.F.; Csorba, M. A Solution to the Simultaneous Localization and Map Building (SLAM) Problem. IEEE Trans. Robot. Autom. 2001, 17, 229–241. [Google Scholar] [CrossRef] [Green Version]
  73. Mur-Artal, R.; Tardós, J.D. ORB-SLAM2: An Open-Source SLAM System for Monocular, Stereo, and RGB-D Cameras. IEEE Trans. Robot. 2017, 33, 1255–1262. [Google Scholar] [CrossRef] [Green Version]
  74. Kavraki, L.E.; Svestka, P.; Latombe, J.C.; Overmars, M.H. Probabilistic Roadmaps for Path Planning in High- dimensional Configuration Spaces. IEEE Trans. Robot. Autom. 1996, 12, 566–580. [Google Scholar] [CrossRef] [Green Version]
  75. Jensfelt, P.; Kristensen, S. Active Global Localization for a Mobile Robot Using Multiple Hypothesis Tracking. IEEE Trans. Robot. Autom. 2001, 17, 748–760. [Google Scholar] [CrossRef] [Green Version]
  76. Paden, B.; Čáp, M.; Yong, S.Z.; Yershov, D.; Frazzoli, E. A Survey of Motion Planning and Control Techniques for Self-Driving Urban Vehicles. IEEE Trans. Intell. Veh. 2016, 1, 33–55. [Google Scholar] [CrossRef] [Green Version]
  77. Huang, S.; Teo, R.S.H.; Tan, K.K. Collision avoidance of multi unmanned aerial vehicles: A review. Annu. Rev. Control. 2019, 48, 147–164. [Google Scholar] [CrossRef]
  78. Claussmann, L.; Revilloud, M.; Gruyer, D.; Glaser, S. A Review of Motion Planning for Highway Autonomous Driving. IEEE Trans. Intell. Transp. Syst. 2020, 21, 1826–1848. [Google Scholar] [CrossRef] [Green Version]
  79. Floreano, D.; Godjevac, J.; Martinoli, A.; Mondada, F.; Nicoud, J.D. Design, Control, and Applications of Autonomous Mobile Robots. In Advances in Intelligent Autonomous Systems, International Series on Microprocessor-Based and Intelligent Systems Engineering; Tzafestas, S.G., Ed.; Springer: Dordrecht, The Netherlands, 1999; Volume 18, pp. 159–186. [Google Scholar] [CrossRef] [Green Version]
  80. Nasa’s First Mobile Robot VIPER Will Explore the Lunar Surface When It Launches in 2023. Available online: https://www.firstpost.com/tech/science/nasas-first-mobile-robot-viper-will-explore-the-lunar-surface-when-it-launches-in-2023-9657441.html (accessed on 15 June 2022).
  81. Arkin, R.C. The impact of Cybernetics on the design of a mobile robot system: A case study. IEEE Trans. Syst. Man Cybern. 1990, 20, 1245–1257. [Google Scholar] [CrossRef]
  82. Borenstein, J.; Koren, Y. Real-time obstacle avoidance for fast mobile robots. IEEE Trans. Syst. Man Cybern. 1989, 19, 1179–1187. [Google Scholar] [CrossRef] [Green Version]
  83. Hayes-Roth, B. An architecture for adaptive intelligent systems. Artif. Intell. 1995, 72, 329–365. [Google Scholar] [CrossRef] [Green Version]
  84. Alami, R.; Chatila, R.; Fleury, S.; Ghallab, M.; Ingrand, F. An Architecture for Autonomy. Int. J. Robot. Res. 1998, 17, 315–337. [Google Scholar] [CrossRef]
  85. Autonomous Mobile Robot Technology and Use Cases. Intel corporation. Available online: https://www.intel.com/content/www/us/en/robotics/autonomous-mobile-robots/overview (accessed on 10 February 2022).
  86. Miyata, R.; Fukuda, O.; Yamaguchi, N.; Okumura, H. Object Search Using Edge-AI Based Mobile Robot. In Proceedings of the 6th International Conference on Intelligent Informatics and Biomedical Sciences (ICIIBMS), Oita, Japan, 25–27 November 2021; pp. 198–203. [Google Scholar] [CrossRef]
  87. Habibian, S.; Dadvar, M.; Peykari, B.; Hosseini, A.; Salehzadeh, M.H.; Hosseini, A.H.; Najafi, F. Design and implementation of a maxi-sized mobile robot (Karo) for rescue missions. Robomech J. 2021, 8, 1. [Google Scholar] [CrossRef]
  88. Moreno, J.; Clotet, E.; Lupiañez, R.; Tresanchez, M.; Martínez, D.; Pallejà, T.; Casanovas, J.; Palacín, J. Design, Implementation and Validation of the Three-Wheel Holonomic Motion System of the Assistant Personal Robot (APR). Sensors 2016, 16, 1658. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  89. Fiedeń, M.; Bałchanowski, J. A Mobile Robot with Omnidirectional Tracks—Design and Experimental Research. Appl. Sci. 2021, 11, 11778. [Google Scholar] [CrossRef]
  90. Klančar, G.; Zdešar, A.; Blažič, S.; Škrjanc, I. Chapter 3—Control of Wheeled Mobile Systems. In Wheeled Mobile Robotics, Butterworth-Heinemann; Klančar, G., Zdešar, A., Blažič, S., Škrjanc, I., Eds.; Elsevier: Amsterdam, The Netherlands, 2017; pp. 61–159. ISBN 9780128042045. [Google Scholar] [CrossRef]
  91. Saike, J.; Shilin, W.; Zhongyi, Y.; Meina, Z.; Xiaolm, L. Autonomous Navigation System of Greenhouse Mobile Robot Based on 3D Lidar and 2D Lidar SLAM. Front. Plant Sci. 2022, 13, 815218. [Google Scholar] [CrossRef]
  92. Meystel, A. Encyclopedia of Physical Science and Technology, 3rd ed.; Academic Press: Cambridge, MA, USA, 2003; pp. 1–24. ISBN 9780122274107. [Google Scholar]
  93. Karur, K.; Sharma, N.; Dharmatti, C.; Siegel, J. A Survey of Path Planning Algorithms for Mobile Robots. Vehicles 2021, 3, 448–468. [Google Scholar] [CrossRef]
  94. Thrun, S.; Beetz, M.; Bennewitz, M.; Burgard, W.; Cremers, A.B.; Dellaert, F.; Fox, D.; Haehnel, D.; Rosenberg, C.; Roy, N.; et al. Probabilistic Algorithms and the Interactive Museum Tour-Guide Robot Minerva. Int. J. Robot. Res. 2000, 9, 972–999. [Google Scholar] [CrossRef]
  95. Kuffner, J.; LaValle, S.M. RRT-Connect: An Efficient Approach to Single-Query Path Planning. In Proceedings of the IEEE International Conference on Robotics and Automation, San Francisco, CA, USA, 24–28 April 2000; Volume 2, pp. 995–1001. [Google Scholar] [CrossRef] [Green Version]
  96. Hawes, N. The Reality of Robots in Everyday Life. University of Birmingham. Available online: https://www.birmingham.ac.uk/research/perspective/reality-of-robots.aspx (accessed on 11 February 2022).
  97. Smids, J.; Nyholm, S.; Berkers, H. Robots in the Workplace: A Threat to- or Opportunity for- meaningful Work. Philos. Technol. 2020, 33, 503–522. [Google Scholar] [CrossRef] [Green Version]
  98. Tai, M.C.T. The impact of artificial intelligence on human society and bioethics. Tzu Chi Med. J. 2020, 32, 339–343. [Google Scholar] [CrossRef]
  99. Atkinson, R.D. Robotics and the Future of Production and Work. Information Technology and Innovation Foundation. 2019. Available online: https://itif.org/publications/2019/10/15/robotics-and-future-production-and-work (accessed on 15 February 2022).
  100. Naneva, S.; Gou, M.S.; Webb, T.L.; Prescott, T.J. A Systematic Review of Attitudes, Anxiety, Acceptance, and Trust towards social Robots. Int. J. Soc. Robot. 2020, 12, 1179–1201. [Google Scholar] [CrossRef]
  101. Morana, S.; Pfeiffer, J.; Adam, M.T.P. User Assistance for Intelligent Systems. Bus. Inf. Syst. Eng. 2020, 62, 189–192. [Google Scholar] [CrossRef] [Green Version]
  102. Maedche, A.; Legner, C.; Benlian, A.; Berger, B.; Gimpel, H.; Hess, T.; Hinz, O.; Morana, S.; Sollner, M. AI-Based Digital Assistants. Bus. Inf. Syst. Eng. 2019, 61, 535–544. [Google Scholar] [CrossRef]
  103. Luettel, T.; Himmelsbach, M.; Wuensche, H. Autonomous Ground Vehicles—Concepts and a Path to the Future. Proc. IEEE 2012, 100, 1831–1839. [Google Scholar] [CrossRef]
  104. Kuutti, S.; Bowden, R.; Jin, Y.; Barber, P.; Fallah, S. A Survey of Deep Learning Applications to Autonomous Vehicle Control. IEEE Trans. Intell. Transp. Syst. 2021, 22, 712–733. [Google Scholar] [CrossRef]
  105. Vermesan, O.; Bahr, R.; Ottella, M.; Serrano, M.; Karlsen, T.; Wahlstrom, T.; Sand, H.E.; Ashwathnarayan, M.; Gamba, M.T. Internet of Robotic Things Intelligent Connectivity and Platforms. Front. Robot. AI 2020, 7, 104. [Google Scholar] [CrossRef]
  106. Grehl, S.; Mischo, H.; Jung, B. Research Perspective—Mobile Robots in Underground Mining: Using Robots to Accelerate Mine Mapping, Create Virtual Models, Assist Workers, and Increase Safety. AusIMM Bull. 2017, 2, 44–47. [Google Scholar]
  107. Modic, E.E. 13-Characteristics of an Intelligent Systems Future. Today’s Medical Developments. 2 September 2021. Available online: https://www.todaysmedicaldevelopments.com/article/13-characteristics-intelligent-systems-future (accessed on 5 April 2022).
  108. Lee, H.; Jeong, J. Mobile Robot Path Optimization Technique Based on Reinforcement Learning Algorithm in Warehouse Environment. Appl. Sci. 2021, 11, 1209. [Google Scholar] [CrossRef]
  109. Xue, Y.; Sun, J.-Q. Solving the Path Planning Problem in Mobile Robotics with the Multi-Objective Evolutionary Algorithm. Appl. Sci. 2018, 8, 1425. [Google Scholar] [CrossRef] [Green Version]
Figure 1. The first humanoid robot George was capable of walking by shaking its feet and was remote-controlled [6].
Figure 2. The robot Shakey with Charles Rosen in 1983 [7].
Figure 3. An image of Albert Ducrocq and his robot Miso-1 in the 1950s [8].
Figure 4. Image of the robot Ameca [9].
Figure 5. Relationship between AI, ML, and DL.
Figure 6. Some important and major events in the field of robotics science in the last 100 years (source: Google search engine).
Figure 7. Closed-loop robot control of the sensorimotor type [63].
Figure 8. Spot dog mobile robot developed by Boston Dynamics [68].
Figure 9. Interactions of activities of autonomous navigation of mobile robots.
Figure 10. List of sensors utilized by robots.
Figure 11. Some major applications of mobile robots.
Figure 12. A complete list of components and architecture of modern intelligent mobile robots [85].
Figure 13. 3D model of the mobile robot (Karo) [87].
Figure 14. Hardware structure for the navigation of wheeled mobile robots [91].
Figure 15. Representation of the RRT algorithm for the path planning of mobile robots.
Figure 16. An ideal interaction system between a user and a user assistance system [101].
Figure 17. Outline of different fields of study and research perspectives on mobile robots [105].
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
