Sensors
  • Article
  • Open Access

15 July 2022

InjectMeAI—Software Module of an Autonomous Injection Humanoid

Computer Engineering, Brandenburg University of Technology, Cottbus-Senftenberg, 03046 Cottbus, Germany

Abstract

The recent pandemic outbreak proved social distancing effective in helping curb the spread of SARS-CoV-2 variants, along with the wearing of masks and hand gloves in hospitals and assisted living environments. Health delivery personnel trained in the handling of patients suffering from Corona infection have been stretched thin. Administering injections involves unavoidable person-to-person contact. In this circumstance, the spread of bodily fluids, and consequently of the Coronavirus, becomes imminent, leading to an upsurge of infection rates among nurses and doctors. This makes enforced home office practices and telepresence through humanoid robots a viable alternative. To provide assistance and further reduce contact with patients during vaccinations, a software module has been designed, developed, and implemented on a Pepper robot that estimates the pose of a patient, identifies an injection spot, and raises an arm to deliver the vaccine dose on a bare shoulder. Implementation was done using the QiSDK in an Android integrated development environment with a custom Python wrapper. Tests carried out yielded positive results in under 60 s with an 80% success rate and exposed some ambient lighting discrepancies. These discrepancies can be resolved in the near future, paving a new way for humans to get vaccinated.

1. Introduction

Controlling the spread of Coronavirus variants has become a dire need in hospitals, where humans with compromised immune systems converge to seek relief and contact with bare skin is imminent, particularly during the vaccination process. The voluntary participation of the general citizenry in enforcing agreed-upon regulations on hand sanitizing, social distancing, facial masks, and hand glove use has not entirely halted the pandemic. Instead, the virus keeps resurfacing in one variant or another in different areas. In the heart of Europe, the Deutsches Zentrum für Neurodegenerative Erkrankungen (DZNE) has coordinated with the European Centre for Disease Prevention and Control (ECDC) to retard the spread [1,2].
The adoption of home office practices has paved new ways for available technology to permeate daily activities more than ever. Record sales in personal service robots and medical robots attest to this [3]. While analyzing the adoption of social robots during the pandemic in [4], 156 diverse experiences with 66 different robots could be delineated in various applications implemented worldwide. Over 104 applications of social robots in hospitals were identified, and functions such as carrying food to patients were becoming popular as a means of further reducing human-to-patient contact. Robots have also been developed in the last year to perform pandemic tasks [5]. In Tokyo, an autonomous vacuum sweeper was deployed in hotels housing patients with mild COVID-19 symptoms [6].
Minimum nurse-to-patient ratio legislation helps relieve stress among medical staff. One nurse should cater to at most six patients [7]. However, this is not the case in Germany, where the ratio can be as high as 10 or more patients to a nurse [8]. Combined with a global pandemic, the ratio gets worse. This can lead to exhaustion and subsequent resignations among healthcare personnel. Different approaches exist to mitigate this problem; however, the most promising is the introduction of robots into the medical field.
When introduced at the right points, robots can bring massive improvements to the working conditions of nurses and doctors. One advantage when using a robot is its ability to interact with highly infectious people without the risk of getting infected itself. It does not experience boredom from monotonous tasks. It also poses no risk to the supervisory personnel if it gets disinfected after its work. In addition, it can be used in a task where not much human empathy is involved. A medical task that does not involve much human empathy is the vaccination process, during which it is next to impossible to enforce the social distancing rule on medical personnel as close contact is inevitable. One solution is to engage the assistance of humanoid robots.
The broad aim of this research, therefore, is to replace medical personnel with an autonomous humanoid robot to perform vaccination activities in contagious environments during pandemics. To the best of our knowledge, no such system has been introduced to the medical society. This will be accomplished by attaching an injection system to an autonomous humanoid robot that can walk independently to interact verbally with patients, identify their poses, and deliver injections on their bare shoulders through a needle attached to one robotic finger; however, only if it can be proven that no one is harmed through this process.
In this first phase of the research, we concentrate on the software aspects of this task. In other words, how to position the robot to be able to deliver the injection to the waiting patient. An overview of this work appeared in [1]. The specific position will be determined through pose estimation. Pose estimation describes the procedure of grouping body parts into person instances. There are different approaches to achieve this. These approaches are dependent on how one considers the problem; either one deals with person instances from which body parts are isolated (Figure 1) or one deals with key points that culminate in person instances (Figure 2). These approaches are known respectively as top-down and bottom-up.
Figure 1. Top-down pose estimation. Input image is cropped for single person pose estimation.
Figure 2. Bottom-up pose estimation. Key points define body parts.
Accordingly, different objectives can be formulated for accomplishment by a humanoid robot (Pepper). Firstly, it needs to be able to move to patients in the right position. Hence, it needs to identify humans in its surroundings. If a human can be identified, it needs to estimate the pose and decide if the patient is seated for an injection. This is achieved by using pose estimation and classification. If the patient is seated, the injection spot needs to be checked to determine if it is bare (free of clothing) and reachable. These checks are mostly done via images, so the robot needs to transform the image coordinates to actual world coordinates. Without these points, it cannot move to the patient or raise the hand to the right spot. In the end, the robot needs to activate the injection system (hardware) which will be expounded in a separate work. These objectives can then be broken down into the following specific steps:
  • Obtain a 3D orientation of the human relative to the robot.
  • Identify human pose from 2D image.
  • Move the robot next to the seated patient.
  • Find injection position on bare shoulder.
  • Raise the hand of the robot to the required height.
This paper is structured as follows: Section 2 discusses the current state of research involving robots in medical and assisted living environments and compares different pose estimation methodologies. Section 3 explains the rationale behind the mapping of image coordinates to real-world coordinates and how the injection point is estimated after the patient's pose classification. Section 4 presents the overall implementation concept from the software perspective, describing the programming ecosystem of the Pepper robot (past and present) and the need for a Python wrapper; evaluation results are also discussed there. The conclusion is provided in Section 5.

3. Design and Development

The Pepper humanoid robot has touch sensors, two RGB cameras, and depth sensors in its head that detect human presence to support its emotion engine. In addition, sonar, laser, gyro, and bumper sensors help it avoid obstacles at speeds of up to 3 km/h. We take advantage of these sensors to detect human presence and use pose estimation to determine a patient's preparedness for injection. The robot follows a trajectory to position itself and performs bare shoulder verification for injection point spotting. It then raises its hand to the injection point to begin the vaccination process. The setup is shown in Figure 3. The vaccine will be contained in a backpack worn by the Pepper robot. The sequence of steps, described earlier, is depicted in Figure 4, where (A) designates the point of connection between the left and right branches of the flowchart. An image of the algorithms in relation to the Pepper programming ecosystem is also shown.
Figure 3. Autonomous Vaccination System Co-design.
Figure 4. Vaccination Algorithm in relation to Pepper programming Eco-system.

3.1. Bare Shoulder Verification

Our approach is based on the following assumptions:
  • The patient wears a short sleeve shirt that does not cover the whole upper arm but ends above the elbow.
  • The patient does not wear a tattoo of the same color as the shirt and skin.
When the shirt color differs from the skin color, we can expect an edge between the color of the skin and that of the shirt; this color difference helps in computing color gradients. Furthermore, we assume that the detected key points are correct and that the upper arm can be approximated by a line connecting two points on the arm, shifted toward the edge of the arm. If these assumptions hold, any edge detected along this line lies either between skin and sleeve or between the arm and the background.
Thus,
$$\mathit{line}(x) = \begin{cases} \mathit{leftElbow} + \dfrac{\mathit{middleLeft} - \mathit{leftElbow}}{\mathit{times}/2}\, x, & \text{if } x \le \mathit{times}/2,\\[6pt] \mathit{middleLeft} + \dfrac{\mathit{leftShoulder} - \mathit{middleLeft}}{\mathit{times}/2}\,\bigl(x - \mathit{times}/2\bigr), & \text{else.} \end{cases}$$
To calculate the gradient at position $x$, we use the following Formula (3):
$$\mathit{gradient}(x) = \mathit{line}(x+1) - \mathit{line}(x)$$
Here, $\mathit{line}$ describes the triangle function at the given point and returns the color in RGB format at that point. The difference between two colors in RGB is defined as (4):
$$\mathit{difference}(x, y) = |x_R - y_R| + |x_G - y_G| + |x_B - y_B|$$
We map the 3D RGB space onto a 1D space with this function, making it easier to compare two numbers and to compute the gradient over an array of size $\mathit{size}_{pixel}$ that represents all pixel values along the triangle function. The result of this computation is an array of size $\mathit{size}_{pixel} - 1$ that describes the difference between neighboring pixels. By filtering this array for values above a given threshold, which needs to be calculated or guessed, we know where there is a considerable change in color; this must be an edge in the image, at least on the color level. If there is an edge, we can conclude that the patient is wearing a shirt whose sleeve does not end above the injection point. Note that if the patient wears a shirt of the same color as his skin, no edge would be detected and the algorithm would wrongly evaluate the spot as free for injection. If all values are below the threshold, the patient is ready for injection; otherwise, he is not.
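The edge-detection procedure above can be sketched in Python. The helper names (`line_point`, `color_difference`, `is_bare`) and the sampling setup are illustrative assumptions, not the project's actual code; `pixels` is assumed to hold the RGB colors sampled along the triangle function.

```python
def lerp(p, q, t):
    """Linear interpolation between two 2D pixel points."""
    return tuple(a + (b - a) * t for a, b in zip(p, q))

def line_point(left_elbow, middle_left, left_shoulder, times, x):
    """Piecewise-linear 'line(x)': elbow -> mid-arm point -> shoulder (Equation above)."""
    half = times / 2
    if x <= half:
        return lerp(left_elbow, middle_left, x / half)
    return lerp(middle_left, left_shoulder, (x - half) / half)

def color_difference(c1, c2):
    """Map the 3D RGB difference onto one number: sum of per-channel distances (Formula (4))."""
    return sum(abs(a - b) for a, b in zip(c1, c2))

def is_bare(pixels, threshold):
    """pixels: RGB colors sampled along the arm line.
    Bare (ready for injection) iff no neighboring-pixel gradient exceeds the threshold."""
    gradients = [color_difference(pixels[i + 1], pixels[i])
                 for i in range(len(pixels) - 1)]
    return all(g < threshold for g in gradients)
```

A uniform skin-colored run of pixels passes the check, while a sharp skin-to-shirt transition anywhere along the line fails it, exactly as described above.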

3.2. Injection Point Estimation

Complications associated with intramuscular injection can be abated if the right point is accurately predicted. Four common locations for inoculation are the arm (deltoid), the upper outer posterior buttock (gluteus maximus), also referred to as the dorsogluteal site, the thigh (vastus lateralis), and the lateral hip (gluteus medius), also called the ventrogluteal site [31]. For convenience, tests will be limited to the deltoid muscle. The injection spot is in the middle of the deltoid muscle, about 2.5 cm to 5 cm below the acromion process; in general, three finger widths below the acromion process, across the deltoid muscle [32]. Computations were carried out to determine the ratio of upper arm length to the distance to the injection point. We arrived at 80% and allowed a 5% variance for disparities in upper arm lengths. Three different videos, marking the estimate at 75%, 80%, and 85%, were shown to nursing students, as shown in Figure 5. The students accepted the 80% estimate as having the best chance of spotting the injection point. Thus, we arrive at the following equation for determining the injection point (5):
$$\mathit{injectionPoint} = \mathit{line}\bigl(\lceil \mathit{times} \times 0.8 \rceil\bigr)$$
Figure 5. From left to right: 75%, 80%, and 85%.
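Equation (5) amounts to a one-line computation over the sampled arm line. The sketch below is illustrative; `times` is the number of samples between elbow and shoulder, and the 0.8 ratio is the value validated by the nursing students.

```python
import math

def injection_index(times, ratio=0.80):
    """Index along the sampled arm line of the injection point (Equation (5)).
    'ratio' defaults to the 80% estimate chosen in the study."""
    return math.ceil(times * ratio)
```

For example, with 100 samples between elbow and shoulder, the injection point lies at sample 80.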

3.3. Hand to Injection Point Mapping

The coordinates picked from images seen by the robot have to be interpreted in real world human dimensions. Here also, our approach is based on three assumptions:
  • The distance from robot to patient is predetermined.
  • The robot has a direct line view of the patient.
  • The injection point is below the head of the patient.
These assumptions make it possible to consider the distance along the x-axis only, while keeping the y distance unchanged. Figure 6 shows the coordinate system of the Pepper robot.
Figure 6. The Pepper robot. (a) Front view displaying the top camera; (b) robot coordinate system ©, source [25].
Whereas the midpoint of the image is known in both real-world and image coordinates, the injection point is known only in image coordinates. Therefore, the task reduces to finding the real-world coordinates of the injection point on the patient. The first step is to look at the midpoint of the image and at the HeadFrame of the patient, which is in the middle of the image. The midpoint can be described as
$$MP(\mathit{widthOfImage}/2,\ \mathit{heightOfImage}/2)$$
and the HeadFrame as
$$HF(\mathit{fixedDistance},\ y_h,\ z_h),$$
where $y_h, z_h$ are coordinate axes on the human body.
We want the frame position $F(x_2, y_2, z_2)$ for a given image point $IP(x_1, y_1)$. The third assumption permits $y_2 = y_h$. The next step is to map the $y$ image coordinate to the $z$ real-world coordinate. We use the ratio of pixels to meters to accomplish this. It is known that the midpoint of the image is 3 m away from the robot (assumption 1), that the viewing angle of the whole image from the robot is 44.30 degrees, and that the distance between a surface and an object is measured along the perpendicular. With this information, we can represent the problem using a simple triangle. The annotated triangle can be seen in Figure 7. It has the following properties: $\alpha = \mathit{viewingAngle}/2$, $y = 3$, $\gamma = 90°$, and $x$ is unknown. We use Equation (6):
$$x = \frac{y \sin \alpha}{\sin \beta}$$
Figure 7. Theoretical limb movement visualization.
Inserting the given values into the formula with $\beta = 180° - \alpha - \gamma = 67.85°$, we get $x = 1.22$ m. This means the distance from $MP$ to $P(1208, 480)$, the middle point on the right side, is 1.22 m in real-world dimensions, and one pixel on that line corresponds to approximately $1.22/480$ m. To compute the real-world $z$ value, which corresponds to the vertical direction in the image, we calculate the change in the image $y$ dimension relative to the middle point, multiply it by the pixel-to-meter ratio, and add it to the known $z$ value of the middle point. We use Formula (7) below to calculate the new $z$ value:
$$z_2 = \left((y_h - 480) \times \frac{1.22}{480}\right) + z_h$$
The $x$ value is estimated via average human sizes; therefore, it is necessary to know the gender of the patient. If this is known, we use the 95th percentile as the shoulder width: 48.5 cm for women and 52.5 cm for men [33]. With these values and assumptions, we can create the following translation vector (8) from the HeadFrame:
$$\upsilon_{tr} = \begin{bmatrix} \mathit{shoulderWidth} \\ 0 \\ \left((y_h - 480) \times \dfrac{1.22}{480}\right) + z_h \end{bmatrix}$$
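The pixel-to-meter mapping of Equations (6) and (7) can be sketched as follows, under the stated assumptions (3 m patient distance, 44.30° viewing angle, 480 px half-image height). Names are illustrative, not from the project's code.

```python
import math

# Constants from the text; illustrative names.
DISTANCE_M = 3.0          # assumption 1: midpoint of the image is 3 m from the robot
VIEWING_ANGLE_DEG = 44.30 # camera viewing angle of the whole image
HALF_IMAGE_PX = 480       # pixel distance used in Equation (7)

def meters_per_pixel():
    """Equation (6): x = y*sin(alpha)/sin(beta) with alpha = viewingAngle/2,
    gamma = 90 deg, beta = 180 - alpha - gamma; then divide by the pixel span."""
    alpha = math.radians(VIEWING_ANGLE_DEG / 2)
    beta = math.radians(180 - VIEWING_ANGLE_DEG / 2 - 90)
    x = DISTANCE_M * math.sin(alpha) / math.sin(beta)  # ~1.22 m
    return x / HALF_IMAGE_PX

def z_world(y_image_px, z_head_m):
    """Equation (7): map the vertical image coordinate to a real-world z value."""
    return (y_image_px - HALF_IMAGE_PX) * meters_per_pixel() + z_head_m
```

A pixel at the image's vertical midpoint maps back to the known HeadFrame height, and the computed span reproduces the 1.22 m figure derived in the text.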

3.4. Joint Angle Estimation

The next task is to move the arm of the robot so that the hand theoretically stretches to the correct injection point. Two points are known at this juncture: where the robot's shoulder starts, which is the start of its arm, and where the patient's injection point is. These are 3D coordinates that can be used for movement on the autonomous humanoid robot. To calculate the injection point position in Pepper's coordinate system, we use the head as the starting position, apply the head-to-shoulder offset, and calculate the shoulder coordinates. We can then calculate the vector from the shoulder point to the patient's injection point with Equation (9):
$$\upsilon_{rp} = P_{sR} - P_{iPt}$$
where $\upsilon_{rp}$ is the resulting vector between the robot's shoulder and the injection point, $P_{sR}$ describes the robot's shoulder point in 3D coordinates, and $P_{iPt}$ describes the injection point on the patient, also in 3D coordinates. Two joints are present in each shoulder: shoulderRoll and shoulderPitch. The shoulderRoll handles rotation around the x-axis, while the shoulderPitch handles rotation around the y-axis (Figure 8). However, the shoulderRoll does not need to be taken into account, since it would only adjust the injection angle and not the direction through which the hand reaches the patient. To perform a more human-like injection, the shoulderRoll is set to 0 degrees. This keeps the elbow below the shoulder, looking more like how a human would deliver the injection. We can again use a simple triangle (Figure 7) to describe the problem. The three points to be considered are the shoulder point, the elbow point, and the injection point, which are all unknown.
Figure 8. Right Shoulder Rotations ©, source [25].
From the manufacturer's data [25], the upper arm length of Pepper is 181.20 mm and the lower arm is 150.00 mm. The remaining distance is the one between the shoulder and the injection point. However, because the vector is known, this distance can easily be calculated with Formula (10):
$$|\upsilon| = \sqrt{\upsilon_x^2 + \upsilon_y^2 + \upsilon_z^2}$$
With this information, we can compute the parameters in Figure 9 as follows (11):
$$\alpha = \arccos\left(\frac{b^2 + c^2 - a^2}{2bc}\right)$$
Figure 9. Theoretical Hand Movement Visualization.
where $a$ is the distance between the robot's shoulder and the injection point, $b$ is the robot's upper arm length, and $c$ is the robot's lower arm length.
We use Equation (12) to calculate the angle at the shoulder:
$$\beta = \arcsin\left(\frac{\mathit{lengthLowerArm} \cdot \sin \alpha}{|\upsilon_{rp}|}\right)$$
Angle $\alpha$ will be used for setting the ElbowRoll (Figure 8) and $\beta$ for the shoulderPitch. The ElbowPitch could be taken into consideration, but it is not necessary in this use case, because Pepper will stand 10 cm aside from the injection point, implying that the ElbowPitch does not need to be adjusted.
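Equations (9) through (12) can be checked with a short sketch. The function below is illustrative, uses Pepper's published arm lengths, and assumes the injection point is within reach (the triangle inequality $a \le b + c$ must hold for the arccosine to be defined).

```python
import math

UPPER_ARM_M = 0.1812  # Pepper's upper arm length (181.20 mm, from [25])
LOWER_ARM_M = 0.1500  # Pepper's lower arm length (150.00 mm)

def joint_angles(shoulder, injection_point):
    """Return (alpha, beta) in radians for the elbow and shoulder joints.
    shoulder, injection_point: 3D tuples. Illustrative sketch of Equations (9)-(12),
    not Pepper's API; requires a reachable point (a <= b + c)."""
    v = [s - p for s, p in zip(shoulder, injection_point)]   # Equation (9)
    a = math.sqrt(sum(comp * comp for comp in v))            # Equation (10)
    b, c = UPPER_ARM_M, LOWER_ARM_M
    alpha = math.acos((b * b + c * c - a * a) / (2 * b * c)) # Equation (11)
    beta = math.asin(c * math.sin(alpha) / a)                # Equation (12)
    return alpha, beta
```

By the law of sines, the returned angles satisfy $a/\sin\alpha = c/\sin\beta$, which gives a quick sanity check of the two formulas against each other.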

4. Implementation and Evaluation

Application development on the autonomous humanoid robot has changed over the last few years [1]. Programs used to be coded and run directly on the NAOqi 2.5 operating system. Such code had direct access to sensor data and could also control the robotic limbs. The programming languages primarily used were C and Python 2. SoftBank's introduction of the QiSDK in the Android Integrated Development Environment (IDE) has shifted control from a low level to a high level. On NAOqi 2.5, the robot (Pepper) was the main controller: it could control what appeared on the tablet and also move independently. With the QiSDK, the roles have been reversed; the tablet is now the main controller. This opens up enhanced possibilities on the programming side. The Java and Kotlin programming languages can also be used for coding; SoftBank recommends the newer and safer Kotlin. The robot accesses the tablet over USB and can rightly be described as an API that picks up stimuli for the tablet. An added advantage of the transition from Choregraphe to Android is that it is now very easy to build seemingly complex applications to run on the robot. Almost all Android applications can be ported to it with ease, allowing the use of any feature supported by API version 23 and above. What needs to be added, most importantly, is an implementation of the RobotLifecycleCallbacks interface. This interface enables Android activities to get notified when the robot focuses on the program; the robot can only focus on one program at a time.

4.1. Implementation Concept

Our implementation was inspired by tutorials from SoftBank and various repositories that worked with the QiSDK [34,35,36]. Some abstract functions had to be implemented from the RobotLifecycleCallbacks interface [37]. The compulsory functions are onCreate, onDestroy, onRobotFocusGained, onRobotFocusLost, and onRobotFocusRefused. The onCreate function can be regarded as the constructor of the application. onRobotFocusGained is called when the robot focuses on the application; at this point the implementation is called and started. If this were carried out in onCreate, the robot might focus on a different activity (in the program) and thereby lose the tablet's connection to Pepper, resulting in an error state. The application starts with the press of a button. This button is activated only when the program is ready to run and the Pepper robot is focused on the application. The detection function begins when the button is pressed. Due to the usage of threads, the detection is wrapped in the runBlocking function [38], which runs a new coroutine and blocks the current thread until execution completes. To begin human pose detection, it is necessary to locate the closest person in the vicinity of the robot. When a patient is found, we need to estimate the pose of this person. BlazePose requires a face in the input image. To achieve this, the robot moves three meters away from the patient and takes a snapshot. It lowers its head and focuses on the patient's chest, getting a good shot in the camera with less free space while ignoring irrelevant body parts. The resulting image is piped into the pose estimation and detection algorithm. If the person is standing, he is asked to take a seat. When a person with the right pose is found, the bare shoulder detection starts. When the shoulder is bare as expected, the robot moves closer to the patient and raises its arm to the height of the injection point. At this stage, the injection hardware system will be activated.

4.2. Closest Human (Patient) Detection

When more than one person is near Pepper, it should choose the nearest person as the patient. Pepper has a built-in high-level function to detect Human objects. Such an object holds information about the position of the head, age, gender, and excitement state. To obtain the closest person, the HeadFrame is of interest. To locate the patient, we take all Human objects in the field of view of the robot and calculate the distance to each. This information can be obtained by calculating the translation between the robotFrame and the HeadFrame of the current person. The distance metric is defined as $\sqrt{x_{tr}^2 + y_{tr}^2}$, where $tr$ denotes the translation. The next step is to move three meters away from the patient. To calculate the translation from the robotFrame to the target frame, fromXTranslation can be used. The axis is relative to the robot, not to the human (Figure 6). This means that Pepper will be aligned with the patient's body. To obtain the right frame, the HeadFrame is used with an x translation of three meters. This frame has to be an AttachedFrame; otherwise, it would move with the movement of the robot, which is not the expected behavior. Now that the frame to which the robot should move is calculated, the robot needs to move to this point. This can be achieved in two different ways: one of two functions provided by the QiSDK can be used, GoTo or StubbornGoTo [39]. They differ in implementation and, being open-sourced, can be refined as needed. Based on this, the StubbornGoTo has been applied with some changes: the robot will only move in straight lines and will try twice to reach the target frame. This is necessary because the robot has problems differentiating shadows from walls, and unfavorable lighting conditions can also lead to unpleasant situations. At the end of the movement, Pepper should look at the patient. This is achieved by calling the built-in LookAt function on the HeadFrame of the patient. The last step is to take a snapshot of the patient. For this, the TakePictureBuilder is used. It uses the top camera of the robot, situated on its forehead as shown in Figure 6, and returns a TimestampedImage that is converted into a bitmap for further processing.
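The nearest-patient selection reduces to a minimum over the planar HeadFrame translations. The sketch below is a hedged illustration of that distance metric, not the QiSDK API; translations are assumed to be (x, y) pairs in meters relative to the robotFrame.

```python
import math

def closest_human(head_translations):
    """Pick the (x, y) HeadFrame translation with the smallest planar distance
    sqrt(x^2 + y^2) from the robotFrame. Illustrative, not QiSDK code."""
    return min(head_translations, key=lambda t: math.hypot(t[0], t[1]))
```

For instance, among candidates at 3.2 m, 1.1 m, and 2.8 m, the 1.1 m person is selected as the patient.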

4.3. Pose Classification

There are two variants of BlazePose that can be employed: the lite and full versions. Their main differences lie in computation time and accuracy. The single-image mode takes more computation time but is also more accurate than the video mode. The single-image mode is used here because its computation time is only about one second, which is acceptable. The bitmap resulting from the picture needs to be converted into an InputImage, the type used for interacting with Google's vision API, which provides different encoding functions when fed into the network. First, the pose estimation algorithm is called using the function poseDetectorImage.process. This returns a Task, which is used in asynchronous programming; in other words, we can decide what to do with the output after the function completes the computation. We pick the detected pose key points (33 in total) and pipe them into the poseClassifierProcessor, which returns the detected pose together with its accuracy. The classifier is a simple k-Nearest Neighbor (k-NN) classifier. At the end, this result and the pose key points are combined in the PoseWithClassification class to form the final result.
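The k-NN classification step can be illustrated with a minimal sketch. It mimics the idea behind the poseClassifierProcessor (vote among the k nearest training poses), not Google's implementation; key points are assumed to be flattened numeric vectors.

```python
import math
from collections import Counter

def classify_pose(keypoints, training_set, k=5):
    """Minimal k-NN over flattened key-point vectors (33 landmarks -> 66 values).
    training_set: list of (keypoint_vector, label) pairs. Returns the majority
    label among the k nearest samples and a simple vote-share confidence.
    A sketch of the classifier idea, not the poseClassifierProcessor."""
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    neighbors = sorted(training_set, key=lambda s: dist(s[0], keypoints))[:k]
    votes = Counter(label for _, label in neighbors)
    label, count = votes.most_common(1)[0]
    return label, count / k
```

With training samples for the sitting, lying, and standing classes, a query vector near the sitting cluster is labeled "sitting" with full confidence.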

4.4. Training Data

A base dataset [40] and a custom dataset, covering students posing for injection at the Brandenburg University of Technology, were used for training. The training data were stored in a CSV file with the following structure: image file name, class, key points, with the classes being sitting, lying, and standing. Videos of these classes were split into frames and fed into the classification CSV generator from Google [41]. The training was performed in a Docker container on a local machine rather than on the Colab cloud servers; working on Colab was not helpful because the computation resources were withdrawn after some time, resulting in multiple failed tries. The Docker container had the exact configuration of the Colab cloud, which allowed it to be used without adjustment.
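The CSV layout described above can be sketched as follows. The exact flattening of the key points into the trailing columns is an assumption for illustration; the column order (image file name, class, key points) follows the text.

```python
import csv

def write_training_rows(rows, fp):
    """Write (image_file, pose_class, keypoints) rows in the CSV layout described
    above: file name, class, then the flattened key-point values. The flattening
    into trailing columns is an illustrative assumption."""
    writer = csv.writer(fp)
    for image_file, pose_class, keypoints in rows:
        writer.writerow([image_file, pose_class, *keypoints])
```

Each video frame thus becomes one row, with its class label and key-point coordinates.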

4.5. Bare Shoulder Classification and Injection Point Spotting

The estimated key points are revisited to obtain the pixel locations of the shoulder and elbow key points. The pixels from shoulder to elbow are needed; an iterative function is used for this. Using a while loop, it starts at zero and runs until the variable times is reached, pushing the values into a dynamic array. There are different ways to determine the difference between the current entry in the list and its successor; here, the zipWithNext function is used with the zip function colorDiff. After this, a color gradient is obtained to check whether any value in the array is above a specified threshold; the all function is used for this test. If all values are below the threshold, we proceed with the injection point estimation, using the same function as in the bare shoulder classifier. In pixel values, the injection point is calculated as $x = \mathit{times} \times 0.80$. This produces the point that the nursing students judged good for injection. The translation is created by the TransformBuilder with a translation from a 3D vector. The x value is chosen based on the gender of the person that should get the injection; this information is available in the Human object class. For safety reasons, 3 cm are added so that Pepper does not move extremely close to the patient. The z coordinate is calculated by inserting $y_{iPt}$ for $y_h$. In addition, the resulting z translation needs to be multiplied by $-1$. This has to be done because the image coordinate increases from top to bottom, whereas the z coordinate has its zero at the ground; if the pixel position increases, the z value must decrease. In contrast to the earlier assumption, the y value is not set to zero but to 0.14974, the y offset of Pepper's shoulder relative to its head. This has to be done so that the robot's arm, and not its head, is in front of the injection point. As this point should not move when Pepper is moving, it is attached to the HeadFrame of the patient. Hand raising is performed by moving the right arm joints by the calculated angles, so that the robot's hand arrives at the height of the spotted injection point.

4.6. Joint Movement Actualization

Design characteristics place the upper arm length of the robot at 18.12 cm, its lower arm at 15.0 cm, and its hand at 7 cm. We can therefore confirm that the robot can be at most 18.12 + 15.0 + 7 = 40.12 cm away from the patient to reach the injection spot; a shorter distance is more appropriate. When planning its movement, the robot prefers a clear view around obstacles from a distance of 120 cm [42]. We can again use the custom StubbornGoToBuilder because it just moves in a straight line and mostly ignores obstacles. In the end, the body of Pepper is positioned facing the point towards which it should move. It makes two attempts to reach the point: the first attempt gets as close as possible to the frame; the second corrects any drift accumulated while moving along the 3 m distance to the patient, sets the maximum speed to a value that looks safe to the patient, and stops Pepper from moving too far when it does not brake hard enough. Continuing the process, the robot looks at the calculated injection point; this is needed to compute the joint movements. It first computes the distance between its shoulder and the injection point. The offset for the shoulder is supplied by the HeadFrame of the robot; to access this information, the gazeFrame is used with added values for the shoulder, defined in [25]. The distance function can then be used to obtain the distance, in meters, from the newly calculated frame to the injection frame. Now all values are available to compute the $\alpha$ and $\beta$ angles. The $\alpha$ value requires some adjustment when being used for the joint movement: when the elbowPitch is set to zero degrees, it is in reality at 180 degrees, so to obtain a 120-degree angle, we move the elbow by 60 degrees. The value actually commanded is therefore $180° - \alpha$. Pepper can now perform small movements at the shoulderRoll. If this adjustment is not made, Pepper will move very slowly in order not to hit the tablet in any way. The applicable restrictions, extracted from [25], are shown in Table 3. Having completed the computations, the Python wrapper function can be called. The parameters needed are the joint names, the calculated angles, the translation time, whether absolute angles are given or not, and whether movements are performed asynchronously or otherwise.
Table 3. Movement Restrictions Imposed on the Right Arm of Pepper.

4.7. Python Wrapper

Currently, the tablet does not support direct access to sensor data or the inner system of the robot [43]. It is, however, possible to use NAOqi 2.5 on the robot, which permits the tablet to open a Secure Shell (SSH) connection to the robot. From this point, it is possible to control the joints of the robot [44]. This is where a Python wrapper comes in handy: the resulting strings are executed on the robot via SSH. For enabling SSH on the tablet, the library sshj (an SSHv2 library for Java) from Hierynomus was used [45]. When joint movement is needed, the wrapper connects to the robot and offers an SSH connection that can be used to execute Python code. The IP address is that of the USB interface, not the WiFi interface; this has the advantage of being unaffected by network changes. The IP address is obtained by establishing an SSH connection to the robot and running ifconfig [46]. When a connection is established, the wrapper creates an environment for working with the Python SDK, initializing all services needed to perform the desired actions; for this, a function createEnv was made. To move the joints on Pepper, the ALMotion service is needed. It is responsible for everything connected with movement, from the current joint positions to the interpolated joint angles. This service can be obtained from a qi application session; qi applications are programs needing resources from the robot running NAOqi. The Python wrapper functions as the Python SDK, sending parameters over the established SSH connection. Presently, the supported functions are angleInterpolation and setStiffness; however, additional functions can easily be added to the wrapper. With the aid of the wrapper, it is now possible to perform computations using libraries in Kotlin and use Python for directing the joints.
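A minimal sketch of the command string such a wrapper might send over SSH is shown below. The session bootstrapping and the function name `angle_interpolation_cmd` are assumptions for illustration; ALMotion and its angleInterpolation function are part of the real NAOqi API, but the wrapper's published code may differ.

```python
def angle_interpolation_cmd(joints, angles, times_s, is_absolute=True):
    """Build a Python one-liner to be executed on the robot over SSH, calling
    ALMotion.angleInterpolation(names, angleLists, timeLists, isAbsolute).
    The qi.Session bootstrap and localhost URL are illustrative assumptions."""
    return (
        "import qi; s = qi.Session(); s.connect('tcp://127.0.0.1:9559'); "
        "motion = s.service('ALMotion'); "
        f"motion.angleInterpolation({joints!r}, {angles!r}, {times_s!r}, {is_absolute!r})"
    )
```

The Kotlin side would pass the computed joint names, angles, and interpolation times to this builder and run the resulting string through the sshj channel.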

4.8. Autonomous Behavior

Pepper is imbued with an emotional engine that supports its autonomous abilities to act human-like. These abilities include face detection, with which the robot, in its idle state, looks into the patient’s eyes, emulates human gestures, and speaks via its built-in text-to-speech API; the sayBuilder is used to interact with this API. When performing critical tasks, such as taking a snapshot after looking at a given frame, the autonomous abilities are disabled to prevent unwanted movements, which would be safety hazards when working on patients. Hence, these abilities are disabled only for a short period, using a holdAbilities function; if they were disabled long term, the humanoid robot would lose its human-like characteristics.
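This hold-and-restore pattern can be sketched with a context manager. In QiSDK the call is holdAbilities; below we sketch the equivalent pattern against a NAOqi-style setAutonomousAbilityEnabled interface (the method name is assumed from the Aldebaran documentation), with a stub service so the sketch runs without a robot.

```python
from contextlib import contextmanager

@contextmanager
def hold_abilities(life_service, abilities=("BasicAwareness",)):
    """Temporarily disable autonomous abilities, restoring them on exit.

    `life_service` is expected to expose setAutonomousAbilityEnabled,
    mirroring NAOqi's ALAutonomousLife API (an assumption here).
    """
    for name in abilities:
        life_service.setAutonomousAbilityEnabled(name, False)
    try:
        yield
    finally:
        # Restore even if the critical task raised, so the robot does
        # not remain frozen in a non-human-like state.
        for name in abilities:
            life_service.setAutonomousAbilityEnabled(name, True)

class _StubLife:
    """Stand-in for the robot's autonomous-life service, for offline runs."""
    def __init__(self):
        self.calls = []
    def setAutonomousAbilityEnabled(self, name, enabled):
        self.calls.append((name, enabled))

life = _StubLife()
with hold_abilities(life):
    pass  # take the snapshot / move the arm here
```

The `finally` clause is the point of the pattern: the abilities come back even if the critical task fails, matching the requirement that they be held only briefly.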

4.9. Findings and Discussion

Tests were mainly carried out on adult males at the Chair of Computer Engineering, Brandenburg University of Technology, under varying lighting conditions. The robot has an arm reach of 40.120 cm; the upper-arm lengths of our test subjects ranged between 23 and 32 cm. At no time did the robot cause harm (injury) to any of the participants. The patient’s seating was also rotated from a transparent glass background to an opaque wall (Figure 10), since Pepper needs good lighting conditions to work correctly. The bare-shoulder detection exposed a discrepancy: some combinations of skin color and shirt color were not supported under certain lighting conditions. This can be addressed through further training. The patient was required to wear short sleeves irrespective of the ambient temperature, an avoidable discomfort. The robot reached the injection spot in under 60 s in real time; after several trials, we can confirm a success rate of about 80%. A change in seating position after pose detection and classification altered the robot’s heading, so that it could no longer reach the injection spot. Further testing is needed, as unpredictable delays in joint movements occasionally occur while Pepper interacts with patients; such delays must be detected so that the robot can call for assistance from medical staff during peak service hours. To enhance real-time computation and accuracy, key points that are not needed at particular stages of the program can be selectively ignored. Another point to consider is the use of Pepper’s depth-sensor data to support injection-spot detection, which could increase the accuracy of injection-point estimation on the deltoid muscle.
Compared to the stationary robotic arm introduced by the University of Waterloo [47] for vaccinations in upright positions, our mobile robot, equipped with an emotion engine, can interact with humans verbally and perform injection-point estimation on patients in more comfortable positions, without direct input from the patients. Our system thus also eliminates the need for patient training. As progress is made in this area of research, it is hoped that benchmark data for evaluating the performance of autonomous vaccination robots will become publicly available.
Figure 10. Software Module Evaluation.

5. Conclusions

A software module for vaccination at hospitals and medical centers has been developed and tested on an autonomous humanoid Pepper robot. The robot can interact with patients verbally and identify their readiness for inoculation. When confirmed, it checks if the bare shoulder requirement is met, draws closer to the patient, and moves its joints in the correct sequence to reach the injection spot. Tests carried out have confirmed the success of our research with 80% positive results, all occurring in under 60 s of real time. No injuries were recorded during testing. For the first time, we have shown how a humanoid robot can be engaged in the vaccination process at hospitals and medical centers in order to save lives by reducing human-to-human contagion. In the near future, this proof of concept can be used to develop efficient ways of working and interacting with infectious patients in hospitals and assisted living spaces, especially during pandemics.

Author Contributions

Conceptualization, K.O.A., F.R. and M.H.; methodology, K.O.A. and F.R.; software, F.R. and K.O.A.; validation, S.M., F.R. and K.O.A.; investigation, F.R., K.O.A., S.M. and M.R.; resources, M.H. and M.R.; data curation, F.R.; writing—original draft preparation, K.O.A. and F.R.; writing—review and editing, M.R., S.M. and K.O.A.; visualization, F.R. and K.O.A.; supervision, M.H. and M.R. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Data Availability Statement

Not applicable.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Mahmood, S.; Ampadu, K.O.; Antonopoulos, K.; Panagiotou, C.; Mendez, S.A.P.; Podlubne, A.; Antonopoulos, C.; Keramidas, G.; Hübner, M.; Goehringer, D.; et al. Prospects of robots in assisted living environment. Electronics 2021, 10, 2062. [Google Scholar] [CrossRef]
  2. Deruelle, T.; Engeli, I. The COVID-19 crisis and the rise of the european centre for disease prevention and control (ECDC). West Eur. Politics 2021, 44, 1–25. [Google Scholar] [CrossRef]
  3. SERVICE ROBOTS Record: Sales Worldwide Up 32%-International Federation of Robotics (ifr.org). Available online: https://ifr.org/ifr-press-releases/news/service-robot-record-sales-up-32 (accessed on 1 June 2021).
  4. Aymerich-Franch, L.; Ferrer, I. The implementation of social robots during the COVID-19 pandemic. arXiv 2020, arXiv:2007.03941. [Google Scholar]
  5. Wiederhold, B.K. The ascent of social robots. Cyberpsychol. Behav. Soc. Netw. 2021, 24, 289–290. [Google Scholar] [CrossRef] [PubMed]
  6. Zhao, Z.; Ma, Y.; Mushtaq, A.; Rajper, A.M.A.; Shehab, M.; Heybourne, A.; Song, W.; Ren, H.; Tse, Z.T.H. Applications of robotics, AI, and digital technologies during COVID-19: A review. Disaster Med. Public Health Prep. 2021, 1–11. [Google Scholar] [CrossRef]
  7. McHugh, M.D.; Aiken, L.H.; Sloane, D.M.; Windsor, C.; Douglas, C.; Yates, P. Effects of nurse-to-patient ratio legislation on nurse staffing and patient mortality, readmissions, and length of stay: A prospective study in a panel of hospitals. Lancet 2021, 397, 1905–1913. [Google Scholar] [CrossRef]
  8. Iris Völlnagel (SWR). “Pflegekräfte Hadern Mit Ihrem Job,“ April 2021. Available online: https://www.tagesschau.de/wirtschaft/pflege-arbeitsplatz-kuendigungen-101.html (accessed on 15 June 2021).
  9. Moustaka, E.; Constantinidis, T.C. Sources and effects of work-related stress in nursing. Health Sci. J. 2010, 4, 210. [Google Scholar]
  10. Chang, W.L.; Sabanovic, S. Interaction expands function: Social shaping of the therapeutic robot PARO in a nursing home. In Proceedings of the 2015 10th ACM/IEEE International Conference on Human-Robot Interaction (HRI), Portland, OR, USA, 2–5 March 2015; pp. 343–350. [Google Scholar]
  11. Calo, C.J.; Hunt-Bull, N.; Lewis, L.; Metzler, T. Ethical implications of using the paro robot, with a focus on dementia patient care. In Proceedings of the Workshops at the Twenty-Fifth AAAI Conference on Artificial Intelligence, San Francisco, CA, USA, 7–11 August 2011. [Google Scholar]
  12. Huisman, C.; Kort, H. Two-year use of care robot Zora in Dutch nursing homes: An evaluation study. Healthcare 2019, 7, 31. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  13. Mackenzie, D. Ultraviolet light fights new virus. Engineering 2020, 6, 851. [Google Scholar] [CrossRef] [PubMed]
  14. Balter, M.L.; Leipheimer, J.M.; Chen, A.I.; Shrirao, A.; Maguire, T.J.; Yarmush, M.L. Automated end-to-end blood testing at the point-of-care: Integration of robotic phlebotomy with downstream sample processing. Technology 2018, 6, 59–66. [Google Scholar] [CrossRef] [PubMed]
  15. Moss, M.; Good, V.S.; Gozal, D.; Kleinpell, R.; Sessler, C.N. An official critical care societies collaborative statement: Burnout syndrome in critical care health care professionals: A call for action. Am. J. Crit. Care 2016, 25, 368–376. [Google Scholar] [CrossRef] [PubMed]
  16. Da Vinci Education. Available online: https://www.intuitive.com/en-us/products-and-services/da-vinci/education (accessed on 27 July 2021).
  17. Ho, C.; Tsakonas, E.; Tran, K.; Cimon, K.; Severn, M.; Mierzwinski-Urban, M.; Corcos, J.; Pautler, S. Robot-Assisted Surgery Compared with Open Surgery and Laparoscopic Surgery: Clinical Effectiveness and Economic Analyses. 2011. Available online: https://europepmc.org/article/med/24175355 (accessed on 9 June 2022).
  18. Palep, J.H. Robotic assisted minimally invasive surgery. J. Minimal Access Surg. 2009, 5, 1. [Google Scholar] [CrossRef] [PubMed]
  19. Zheng, C.; Wu, W.; Yang, T.; Zhu, S.; Chen, C.; Liu, R.; Shen, J.; Kehtarnavaz, N.; Shah, M. Deep learning-based human pose estimation: A survey. arXiv 2020, arXiv:2012.13392. [Google Scholar]
  20. Fang, H.S.; Xie, S.; Tai, Y.W.; Lu, C. Rmpe: Regional multi-person pose estimation. In Proceedings of the IEEE International Conference on Computer Vision, Venice, Italy, 22–29 October 2017; pp. 2334–2343. [Google Scholar]
  21. Bazarevsky, V.; Grishchenko, I.; Raveendran, K.; Zhu, T.; Zhang, F.; Grundmann, M. BlazePose: On-device real-time body pose tracking. arXiv 2020, arXiv:2006.10204. [Google Scholar]
  22. Papandreou, G.; Zhu, T.; Chen, L.C.; Gidaris, S.; Tompson, J.; Murphy, K. Personlab: Person pose estimation and instance segmentation with a bottom-up, part-based, geometric embedding model. In Proceedings of the European Conference on Computer Vision (ECCV), Munich, Germany, 8–14 September 2018; pp. 269–286. [Google Scholar]
  23. Cao, Z.; Hidalgo, G.; Simon, T.; Wei, S.E.; Sheikh, Y. OpenPose: Realtime multi-person 2D pose estimation using part affinity fields. IEEE Trans. Pattern Anal. Mach. Intell. 2019, 43, 172–186. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  24. Bulat, A.; Kossaifi, J.; Tzimiropoulos, G.; Pantic, M. Toward fast and accurate human pose estimation via soft-gated skip connections. In Proceedings of the 2020 15th IEEE International Conference on Automatic Face and Gesture Recognition (FG 2020), Buenos Aires, Argentina, 16–20 November 2020; pp. 8–15. [Google Scholar]
  25. Softbank. Pepper–Documentation–Aldebaran 2.5.11.14a. Available online: http://doc.aldebaran.com/2-5/home_pepper.html (accessed on 5 May 2021).
  26. Reed, R.G.; Cox, M.A.; Wrigley, T.; Mellado, B. A CPU benchmarking characterization of ARM based processors. Comput. Res. Model. 2015, 7, 581–586. [Google Scholar] [CrossRef] [Green Version]
  27. Groos, D.; Ramampiaro, H.; Ihlen, E.A. EfficientPose: Scalable single-person pose estimation. Appl. Intell. 2021, 51, 2518–2533. [Google Scholar] [CrossRef]
  28. Andriluka, M.; Pishchulin, L.; Gehler, P.; Schiele, B. 2d human pose estimation: New benchmark and state of the art analysis. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Washington, DC, USA, 23–28 June 2014; pp. 3686–3693. [Google Scholar]
  29. Google. Mediapipe. Available online: https://github.com/google/mediapipe (accessed on 27 July 2021).
  30. SalinasJJ. BBpose. Available online: https://github.com/salinasJJ/BBpose (accessed on 27 July 2021).
  31. Elizabeth, H.W. The right site for IM injections. Am. J. Nurs. 1996, 96, 53. [Google Scholar]
  32. Doyle, G.R.; McCutcheon, J.A. Intramuscular Injections. In Clinical Procedures for Safer Patient Care, Chapter 7, Parental Medication Administration; BCCampus: Victoria, BC, Canada, 2015. [Google Scholar]
  33. DIN 33402-2:2005-12, Ergonomie–Körpermaße des Menschen–Teil 2: Werte. Available online: https://doi.org/10.31030/9655264 (accessed on 10 June 2021).
  34. Softbank Robotics, QiSDK Tutorials. Available online: https://github.com/aldebaran/qisdk-tutorials (accessed on 5 August 2021).
  35. Softbank Robotics, Pepper QiSDK Design. Available online: https://developer.softbankrobotics.com/pepper-qisdk/design (accessed on 5 August 2021).
  36. Softbank Robotics Labs, Pepper Mask Detection. Available online: https://github.com/softbankrobotics-labs/pepper-mask-detection (accessed on 7 August 2021).
  37. Softbank Robotics, Pepper QiSDK Design. Available online: https://developer.softbankrobotics.com/pepper-qisdk/principles/mastering-focus-robot-lifecycle (accessed on 7 August 2021).
  38. Runblocking Documentation. Available online: https://kotlin.github.io/kotlinx.coroutines/kotlinx-coroutines-core/kotlinx.coroutines/run-blocking.html (accessed on 10 August 2021).
  39. Kroeger, E.; Humbert, R. Pepperextras, Softbank Robotics Labs. Available online: https://github.com/softbankrobotics-labs/pepper-extras (accessed on 10 August 2021).
  40. Boulay, B. Human Posture Dataset. Available online: http://bbpostures.free.fr/Human%20Posture%20Datasets2.htm (accessed on 1 August 2021).
  41. Google, Pose Classification. Available online: https://mediapipe.page.link/pose_classification_extended (accessed on 1 August 2021).
  42. Softbank, GoTo, API Level 1. Available online: https://developer.softbankrobotics.com/pepper-isdk/api/motion/reference/goto (accessed on 10 August 2021).
  43. Softbank, Comparison of Pepper’s OS Versions. Available online: https://developer.softbankrobotics.com/blog/comparison-peppers-os-versions (accessed on 15 July 2021).
  44. Softbank, Pepper SDK for Android. Available online: https://developer.softbankrobotics.com/pepper-qisdk (accessed on 15 July 2021).
  45. Jeroen Van Erp, sshj. Available online: https://github.com/hierynomus/sshj (accessed on 20 August 2021).
  46. Dominic, D. Get Pepper’s IP Running QiSDK from Android APP on Pepper, NAOqi 2.9. Available online: https://stackoverflow.com/a/63806715 (accessed on 13 August 2021).
  47. McGlaun, S. Autonomous Robot Performs Its First Intramuscular Injection without Needles. Available online: https://www.slashgear.com/autonomous-robot-performs-its-first-intramuscular-injection-without-needles-08698592 (accessed on 11 July 2022).
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
