#### 2.2.4. Complete Prototype

All the mechanical parts of the sensing device were produced with a 3D printer. The two mechanical parts and the electronic board described above were then assembled by means of the magnetic anchor and mounted on the EndoWrist tool of the dVRK system. As Figure 5 shows, the first version of the prototype has a minimal footprint and is the more compact of the two. This support configuration is appropriate when the integrated sensors do not need to face objects placed in the workspace; in the case in question, for example, this first version is sufficient to demonstrate the potential of the BLE connection between two boards mounted on two arms of the dVRK. Furthermore, during validation of the device on the dVRK instrumentation, the friction between the surface of the instrument and the inner surface of the device was found to prevent any movement of the support; the stability of the electronic components is therefore guaranteed.

**Figure 5.** Complete prototype of the first version of the detection device connected to the EndoWrist tool of the dVRK system.

The second assembled version of the prototype, unlike the first, keeps the board parallel to the work surface. Its main limitation is the larger footprint, shown in Figure 6, which could obstruct the other tools of the robotic system during the execution of the tasks. Although it may appear less stable than the first configuration, testing showed that, thanks to the magnets, the sensing device remained firmly in position during the entire procedure carried out with the dVRK system.

**Figure 6.** Complete prototype of the second version of the detection device connected to the EndoWrist tool of the dVRK system.

#### *2.3. Experimental Evaluations*

This section presents the experiments conducted to verify the feasibility of introducing a proximity and color sensor on the EndoWrist instrument.

#### 2.3.1. Proximity Detection and Experiments with Organs

Proximity calibration was performed using the APDS9960 unit embedded in the Arduino Nano 33 BLE Sense. The IR receive path begins with detection by four photodiodes and ends with an 8-bit proximity result (256 values) stored in the PDATA register. We therefore worked with the proximity readings produced when the tissue reflects IR light onto the photodiodes, which were then converted to millimeters for our use. The board was programmed to print the proximity readings and to change the color of the on-board RGB LED according to the proximity of the tissue to the sensor. The functions of the APDS9960 detection unit were accessed through the <Arduino\_APDS9960.h> library. In particular, the LED lights up green if the object is far from the sensor (proximity value > 150), blue at an intermediate distance (60 < proximity value < 150), and red if the object is very close to the sensor (proximity value < 60); these threshold values were implemented in the code, as in the sketch reported below. The first test performed concerned the evaluation of the proximity between the tissues and the sensor. Sensorizing the instrument in this way allows the surgeon to work in a safer environment, since the device provides acoustic or visual feedback based on the distance of the instrument from the tissue.
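A minimal Arduino sketch of this LED logic, assuming the standard <Arduino\_APDS9960.h> API and the active-low RGB LED pins (LEDR, LEDG, LEDB) of the Nano 33 BLE Sense, could look as follows; the helper function setLed() is introduced here only for readability and is not part of the original code.

```cpp
#include <Arduino_APDS9960.h>

// Raw 8-bit proximity readings: 0 = very close, 255 = far.
const int FAR_THRESHOLD  = 150;
const int NEAR_THRESHOLD = 60;

// Drive the on-board RGB LED of the Nano 33 BLE Sense (active low).
void setLed(bool red, bool green, bool blue) {
  digitalWrite(LEDR, red   ? LOW : HIGH);
  digitalWrite(LEDG, green ? LOW : HIGH);
  digitalWrite(LEDB, blue  ? LOW : HIGH);
}

void setup() {
  pinMode(LEDR, OUTPUT);
  pinMode(LEDG, OUTPUT);
  pinMode(LEDB, OUTPUT);
  if (!APDS.begin()) {
    while (true);  // sensor initialization failed
  }
}

void loop() {
  if (APDS.proximityAvailable()) {
    int proximity = APDS.readProximity();
    if (proximity > FAR_THRESHOLD) {
      setLed(false, true, false);   // green: far from the tissue
    } else if (proximity > NEAR_THRESHOLD) {
      setLed(false, false, true);   // blue: intermediate distance
    } else {
      setLed(true, false, false);   // red: very close to the tissue
    }
  }
}
```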

This aspect is particularly important during abdominal surgery procedures, where the various organs lie very close to one another. Before the testing phase proper, a serial connection was established between the Arduino Nano 33 BLE Sense board and the Matlab software, so that the data collected by the proximity sensor could be managed directly in the Matlab environment. The proximity values read by the sensor were then displayed in real time, allowing an immediate evaluation of the trend of the collected data as a function of the distance in centimeters. Proximity data acquisition was performed with the MELFA RV3-SB industrial robot. For the execution of the tests, a support fixing the Arduino board to the end effector of the robot was designed and built with a 3D printer; the support was fastened to the robot with two M5 screws, as shown in Figure 7.
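On the microcontroller side, such a serial link can be as simple as a line-oriented stream that the host (e.g., Matlab, via its serialport interface) reads and plots sample by sample. The sketch below illustrates one possible format, a timestamp in milliseconds and the raw proximity value, comma separated; this format is an assumption for illustration, not the exact protocol used here.

```cpp
#include <Arduino_APDS9960.h>

// Stream proximity samples over USB serial as "millis,proximity" lines,
// a line-oriented format that a host application can parse in real time.
void setup() {
  Serial.begin(115200);
  while (!Serial);
  if (!APDS.begin()) {
    Serial.println("Error initializing APDS9960 sensor!");
    while (true);
  }
}

void loop() {
  if (APDS.proximityAvailable()) {
    int proximity = APDS.readProximity();  // 0 (close) .. 255 (far)
    Serial.print(millis());
    Serial.print(',');
    Serial.println(proximity);
  }
}
```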

**Figure 7.** (**a**) Support for the Arduino board fixed to the end effector of the MELFA RV3-SB industrial robot by means of two M5 screws. (**b**) View of the 3D-printed support with the board integrated.

The robot was programmed to perform movements along the z axis, allowing the sensor to move away from and approach the tissue sample. Once the desired initial position (minimum distance from the tissue) was acquired, the robotic arm was made to travel 13 cm in vertical steps of 0.5 mm, i.e., 260 increments. All the positions required for this movement were programmed manually in the robot programming language, whose structure is very simple and intuitive; for example, the MOV command was used to move the robot from one position to the next (see the sketch below). Slices of tissue with a thickness of about six millimeters were obtained from each organ. Since these are soft and/or spongy organs, a precise and absolute measure of thickness cannot be defined, as the tissue surfaces have protuberances and depressions. This condition was considered representative of the operational reality, so no corrections were made. The tissue slices were placed on a rigid plate on the workstation below the sensor. As shown in Figure 8, a safety gap was left between the sensor and the surface of the tissues to preserve the cleanliness and integrity of the detector. The same environmental lighting conditions were maintained throughout the testing phase. Furthermore, the tests on the samples were carried out on the same day the tissues were collected from the slaughterhouse; in this way, the freshness of the samples was guaranteed, preserving their color and consistency so as to better reflect in vivo conditions.
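Purely as an illustration of the scan profile (the actual motion program was written in the MELFA robot language with MOV commands between taught positions), the worked arithmetic is simple: 130 mm of travel in 0.5 mm steps gives 260 increments, i.e., 261 measurement positions. A small host-side C++ sketch enumerating the z offsets:

```cpp
#include <cstdio>

int main() {
  const double travel_mm = 130.0;  // total vertical travel (13 cm)
  const double step_mm   = 0.5;    // increment between measurements
  const int steps = static_cast<int>(travel_mm / step_mm);  // 260

  // z offset of each measurement position above the start point
  for (int i = 0; i <= steps; ++i) {
    std::printf("position %3d: z = %6.1f mm\n", i, i * step_mm);
  }
  return 0;
}
```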

**Figure 8.** Tissue samples from the pig, placed on the plate below the sensor for the testing phase with the robot MELFA RV3-SB. (**a**) Liver. (**b**) Gut. (**c**) Stomach.

#### 2.3.2. Classification of Tissues by Color

This subsection presents the implementation of a TensorFlow model that classifies tissues based on color detection. The main objective is to distinguish normal tissue from tissue whose characteristics are not typical of physiological tissue by evaluating the color differences between the two types. For example, it would be possible to delineate an area of tissue affected by cancerous manifestations, which alter its color with respect to that of normal tissue.

To demonstrate this concept, we limited ourselves to verifying that two different tissues, i.e., liver and stomach, can be distinguished on the basis of color. In particular, the procedure for the identification and classification of tissues was based on the TensorFlow Lite Micro library and the colorimetric sensor of the Arduino Nano 33 BLE Sense. A simple neural network was implemented on the board, combining machine learning with embedded systems to develop an intelligent device. For the first phase, the collection of color data from the two portions of tissue, the Arduino Nano 33 BLE Sense board was programmed to read the colorimetric sensor integrated in the APDS9960 unit, which detects the intensity of the red, green, and blue components of each tissue presented to the sensor. For correct data acquisition, it is advisable to move the sensor over the surface of the tissue to capture the variations of color, as shown in Figure 9. The RGB color values were captured as comma-separated data in CSV format, and this procedure was repeated for both organs we decided to classify. A sketch of this acquisition step is reported below.
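A minimal acquisition sketch along these lines is shown here, assuming the standard <Arduino\_APDS9960.h> API; the CSV header and the choice of one sample per line are illustrative, not the exact logging format used by the authors.

```cpp
#include <Arduino_APDS9960.h>

// Log RGB samples from the APDS9960 as CSV lines ("r,g,b"), one sample
// per line, so the serial output can be saved directly to a .csv file.
void setup() {
  Serial.begin(9600);
  while (!Serial);
  if (!APDS.begin()) {
    Serial.println("Error initializing APDS9960 sensor!");
    while (true);
  }
  Serial.println("r,g,b");  // CSV header
}

void loop() {
  int r, g, b;
  if (APDS.colorAvailable()) {
    APDS.readColor(r, g, b);
    Serial.print(r); Serial.print(',');
    Serial.print(g); Serial.print(',');
    Serial.println(b);
  }
}
```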

**Figure 9.** Frames showing the acquisition of color data from the stomach and the liver to detect the intensity of the red, green, and blue components of each tissue presented to the sensor.

Following the data acquisition phase, the model training phase was carried out. Training is the process by which a model learns to produce the correct output for a given set of inputs: the training data are fed through the model, and small adjustments are made until its predictions are as accurate as possible. The model is a network of simulated neurons represented by arrays of numbers arranged in layers; as data are fed into the network, they are transformed by successive mathematical operations involving the weights and biases of each layer, and the output of the model is the result of passing the input through these operations. Training was stopped when the model's performance no longer improved. The training phase was performed in Google Colaboratory, an interactive environment that provides a notebook for writing and executing Python code, using the data collected in the previous phase. Before TensorFlow Lite could run the trained model, the model had to be converted to the TensorFlow Lite format; from this, a model.h file was generated, to be downloaded and included in the final Arduino code that classifies tissues based on color. Finally, after this code was loaded onto the Arduino board, bringing the RGB sensor close to an object of the type on which the model had been trained displayed the percentages relating to the two classes implemented in the model.
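The final sketch follows the standard TensorFlow Lite Micro inference pattern on the Nano 33 BLE Sense. The condensed version below is a sketch of that flow under stated assumptions, not the authors' exact code: the array name tissue_model exported in model.h, the tensor arena size, the ratio-based input normalization, and the class order in kLabels are all illustrative choices.

```cpp
#include <Arduino_APDS9960.h>
#include <TensorFlowLite.h>
#include <tensorflow/lite/micro/all_ops_resolver.h>
#include <tensorflow/lite/micro/micro_error_reporter.h>
#include <tensorflow/lite/micro/micro_interpreter.h>
#include <tensorflow/lite/schema/schema_generated.h>
#include "model.h"  // trained model exported from Colab as a C array

namespace {
tflite::MicroErrorReporter error_reporter;
const tflite::Model* model = nullptr;
tflite::MicroInterpreter* interpreter = nullptr;
TfLiteTensor* input = nullptr;
TfLiteTensor* output = nullptr;

constexpr int kArenaSize = 8 * 1024;  // assumed working-memory budget
alignas(16) uint8_t tensor_arena[kArenaSize];

const char* kLabels[] = {"liver", "stomach"};  // assumed class order
}  // namespace

void setup() {
  Serial.begin(9600);
  while (!Serial);
  if (!APDS.begin()) {
    Serial.println("Error initializing APDS9960 sensor!");
    while (true);
  }

  // Map the model.h byte array and build the interpreter.
  model = tflite::GetModel(tissue_model);
  static tflite::AllOpsResolver resolver;
  static tflite::MicroInterpreter static_interpreter(
      model, resolver, tensor_arena, kArenaSize, &error_reporter);
  interpreter = &static_interpreter;
  interpreter->AllocateTensors();
  input = interpreter->input(0);
  output = interpreter->output(0);
}

void loop() {
  int r, g, b;
  if (!APDS.colorAvailable()) return;
  APDS.readColor(r, g, b);

  // Normalize RGB readings to color ratios, mirroring the assumed
  // preprocessing used during training.
  float sum = r + g + b;
  if (sum == 0) return;
  input->data.f[0] = r / sum;
  input->data.f[1] = g / sum;
  input->data.f[2] = b / sum;

  if (interpreter->Invoke() != kTfLiteOk) return;

  // Print the probability of each class as a percentage.
  for (int i = 0; i < 2; i++) {
    Serial.print(kLabels[i]);
    Serial.print(": ");
    Serial.print(output->data.f[i] * 100, 1);
    Serial.println("%");
  }
  Serial.println();
  delay(500);
}
```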
