In recent years, the Internet of Things (IoT) has paved the way for new methods of acquiring and processing large amounts of data, playing a crucial role in applications such as mobility, industrial automation, and smart manufacturing. The need for large numbers of sensors to monitor critical production variables, ensuring consistent product quality and optimized energy consumption [1], has driven the emergence of virtual sensors: software-based models that emulate the behavior of physical sensors. Their main advantage is cost-effectiveness, which is especially valuable where the installation and maintenance of physical sensors are difficult and expensive [2,3]. The flexibility of virtual sensors, which are easily adaptable to different process conditions and environments, makes them easy to integrate into industrial settings for purposes ranging from predictive maintenance [4] to process optimization [5]. Soft sensors are generally divided into two categories: those based on mathematical models and those based on data-driven methods. The foundational concept of virtual sensors originates from the use of mathematical models to estimate critical quantities without direct measurement. Hu et al. estimated the force exerted at the catheter tip within the human body with a mathematical model, exploiting a sensor positioned on the exterior of the catheter tube [6]. In recent years, with the advent of technologies providing enhanced data storage capabilities, data-driven methods have gained significant attention thanks to the development of new machine learning algorithms [7,8]. This data-driven approach offers an effective means of obtaining accurate readings along the entire processing chain without installing expensive hardware, providing a tool for the online estimation of quantities that are important but difficult to measure with traditional sensors. A clear example of a data-driven virtual sensor is presented by Sabanovic et al., who exploit different artificial neural network architectures to estimate vertical acceleration in vehicle suspensions [9].
Leveraging advances in computational power and increasingly efficient algorithms, recent developments have paved the way for vision-based virtual sensors (i.e., sensors based on image analysis). Indeed, the use of cameras, even across various spectral ranges, allows real-time information gathering in a non-invasive manner, enabling installation and data acquisition on pre-existing structures that were not designed to host physical sensors. Moreover, the extreme versatility of these systems allows them to be used remotely or in dangerous or hard-to-reach situations (e.g., through robots or unmanned aerial vehicles (UAVs) [10]). Byun et al. introduce image-based virtual vibration sensors to monitor vibration in structures, offering an alternative to traditional accelerometers [11]. Wang et al. introduce a model-based approach for the design of virtual binocular vision systems using a single camera and mirrors to improve 3D haptic perception in robotics [12]. Ögren et al. develop vision-based virtual sensors that estimate the equivalence ratio and the concentration of major species in biomass gasification reactors, applying image processing techniques and regression models to real-time images of the light reaction zone and demonstrating the applicability of vision-based monitoring for process control in a complex industrial setting [13]. Alarcon et al. discuss the integration of Industry 4.0 technologies into fermentation processes by implementing complex culture conditions that traditional technologies cannot achieve; in this context, computer vision techniques are exploited to develop a virtual sensor that detects the end of the growth phase and a supervisory system that monitors and controls the process remotely [14].
Many examples in the literature concern the use of virtual sensors to measure forces. Physics-informed neural networks (PINNs) are used to estimate equivalent forces and calculate full-field structural responses, demonstrating high accuracy under various loading conditions and offering a promising tool for structural health monitoring [15]. Marban et al. introduce a model based on recurrent and convolutional neural networks for the sensor-less measurement of interaction forces in robotic surgery; using video sequences and surgical instrument data, the model estimates the forces applied during surgical tasks, improving haptic feedback in minimally invasive robot-assisted surgery [16]. Ko et al. developed a vision-based system that estimates the interaction forces between a robot gripper and objects by combining RGB-D images, robot positions, and motor currents; by incorporating proprioceptive feedback with visual data, the proposed model achieves high accuracy in force estimation [17]. In the context of smart manufacturing, Chen et al. develop a real-time milling force monitoring system that uses sensory data to accurately estimate the forces involved in the process, enabling real-time adjustments that optimize the cutting operation [18]. Bakhshandeh et al. propose a digital-twin-assisted system for machining process monitoring and control, in which virtual models integrated with real-time sensor data are used to measure the cutting forces, enabling adaptive control, anomaly detection, and precision applications without physical sensors [19].
1.1. Novel Work and Motivation
In this research, we intend to demonstrate that properly trained inferential models have the potential to revolutionize the study of the dynamic behavior of real systems and mechanisms. By exploiting virtual sensing techniques, we observe these systems in operation in real time through a camera, eliminating the need for additional, often bulky and difficult-to-install sensors. Our goal is to validate this approach using a simple mechanism as a test case, illustrating how these models can become valuable tools for studying the operation of machinery and devices with ease, efficiency, and robustness. This study explores the feasibility and effectiveness of inferential models in capturing and analyzing the complex dynamics of mechanical systems, paving the way for their widespread application in industry. Even when accurate multi-body models or digital twins of the mechanism are available, delivering real-time input to these models can be challenging. Simulating the actual scenario with a multi-body model requires knowledge of the actual law of motion and demands significant computational effort, especially for very complex models. Measuring the law of motion is easy for synchronous motors, since their speed is determined by the frequency of the AC power supply and the number of poles of the motor itself. However, in an industrial context, asynchronous motors are frequently used, and the measurement of their speed is not straightforward: external sensors such as encoders become necessary, demanding both excessive time and high costs for their installation. Our research investigates the implementation of an innovative vision-based virtual sensor that, through data-driven training, is able to emulate traditional sensing solutions for the estimation of reaction forces.
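The motor-speed argument above can be made concrete. A synchronous motor runs at n_s = 120 f / p rpm, fixed by the supply frequency f and pole count p, whereas an induction (asynchronous) motor runs below n_s by a load-dependent slip s, so its actual speed cannot be inferred from the supply alone. A minimal sketch (the 50 Hz, 4-pole, 3% slip values are purely illustrative):

```python
def synchronous_speed_rpm(supply_freq_hz: float, n_poles: int) -> float:
    """Synchronous speed n_s = 120 * f / p (rpm)."""
    return 120.0 * supply_freq_hz / n_poles

def asynchronous_speed_rpm(supply_freq_hz: float, n_poles: int, slip: float) -> float:
    """Induction-motor shaft speed n = (1 - s) * n_s; the slip s
    depends on the load and is unknown without a measurement."""
    return (1.0 - slip) * synchronous_speed_rpm(supply_freq_hz, n_poles)

# Illustrative values: a 4-pole motor on a 50 Hz supply.
print(synchronous_speed_rpm(50.0, 4))        # 1500.0 rpm
print(asynchronous_speed_rpm(50.0, 4, 0.03)) # ~1455 rpm at 3% slip
```

Since the slip varies with the load, the shaft speed of an asynchronous motor is a quantity to be measured, not computed, which is exactly where an external encoder (or, in our approach, a vision-based estimate) comes in.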
The implemented virtual sensor and the related multi-layer perceptron (MLP) architecture are trained and tested on a simple mechanism, exploiting a multi-body model. Two models are trained: the first with ideal inputs and outputs, the second on a dataset that accounts for uncertainty in the measurement of the input quantities (i.e., closely replicating a real-world scenario). The developed models are tested on new, unobserved trajectories to further assess their effectiveness and robustness.
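The second training set described above can be produced by perturbing the simulated inputs while keeping the simulated targets exact. A minimal NumPy sketch of this idea, in which the dataset, the noise level, and the single-hidden-layer MLP are all illustrative assumptions (not the architecture or data actually used in this work):

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for a multi-body simulation: inputs are kinematic quantities,
# the target is a fictitious reaction force derived from them.
X_ideal = rng.uniform(-1.0, 1.0, size=(1000, 2))
y = np.sin(X_ideal[:, :1]) * X_ideal[:, 1:]

# Second dataset: inject measurement uncertainty into the inputs only
# (e.g. vision-tracking noise); the simulated targets stay exact.
input_noise_std = 0.05  # assumed noise level
X_noisy = X_ideal + rng.normal(0.0, input_noise_std, size=X_ideal.shape)

# Minimal single-hidden-layer MLP forward pass
# (weights would come from training on either dataset).
def mlp_forward(x, W1, b1, W2, b2):
    h = np.maximum(0.0, x @ W1 + b1)  # ReLU hidden layer
    return h @ W2 + b2

W1, b1 = rng.normal(size=(2, 16)), np.zeros(16)
W2, b2 = rng.normal(size=(16, 1)), np.zeros(1)
y_hat = mlp_forward(X_noisy, W1, b1, W2, b2)
print(y_hat.shape)  # (1000, 1)
```

Training on the perturbed inputs with exact targets encourages the model to remain accurate when its real-world inputs are themselves noisy estimates, which is the scenario the second model is meant to replicate.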
The motivation behind our study lies in the challenge of studying the dynamics of operational machines that lack installed sensors. In many cases, halting production to install sensors is impractical and time-consuming, despite the potential benefits in terms of analysis, control, and overall system reliability. The interest in non-invasive alternatives that provide accurate estimations of machine behavior without disrupting ongoing operations drove us to implement a specific virtual sensor for the estimation of ground reaction forces in industrial machines.
The concept revolves around exploiting verified multi-body models of machines to train AI-based virtual sensors. These sensors can then be employed to gather real-time information by processing camera data, eliminating the need for physical force sensors. Our aim is to develop a robust method for generating virtual sensors capable of providing valuable insights into the dynamics of existing operational machines. Our approach is well suited to the real-time estimation of forces and other relevant quantities, and it accounts for the variability introduced by factors such as camera positioning, significantly enhancing the reliability and robustness of the sensor. In summary, our study paves the way for a robust framework for creating virtual sensors that can monitor and gauge the condition of moving machines solely by visually observing them.