*2.2. Hardware Implementation*

On the one hand, validating a multibody model with dedicated sensors is widespread practice and covers many applications with various sensors: railway track geometry measurement based on IMUs, cameras, and encoders [13]; model estimation of a vehicle suspension system with strain gauges [14]; gait analysis through IMUs compared with optical motion capture [15]; validation of an axial piston pump model via force and pressure sensors [16]; and piezoelectric acceleration and force sensors used to test a flexible multibody model of a wind turbine [17].

On the other hand, a growing body of research has been devoted to using sensors in real time to measure the behavior of the physical system. This information is crucial for properly feeding the multibody model.

For example, in 2016, Torres-Moreno et al. focused on the online dynamic estimation of a four-bar mechanism [18]. The kinematics of the system is retrieved through IMU measurements and optical encoders. They proposed an extended Kalman filter approach that can deal with constrained multibody systems. The estimators run in real time, and experiments show good agreement with simulation results.
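The estimation principle at play can be illustrated with a minimal sketch, which is *not* the implementation of [18]: a two-state extended Kalman filter fusing noisy angle measurements (as an encoder would provide) into an estimate of a single-DOF mechanism's angle and angular velocity. The motion model, noise covariances, and the `ekf_step` routine below are illustrative assumptions; a constrained multibody formulation would additionally enforce the loop-closure constraints at each step.

```python
import numpy as np

def ekf_step(x, P, z, dt, Q, R):
    """One predict/update cycle of a Kalman filter for a single-DOF
    mechanism with state x = [angle, angular velocity].

    The constant-velocity motion model used here is linear, so the
    Jacobian F coincides with the transition matrix (the 'extended'
    machinery would matter for a nonlinear multibody model)."""
    # Predict step.
    F = np.array([[1.0, dt],
                  [0.0, 1.0]])
    x_pred = F @ x
    P_pred = F @ P @ F.T + Q
    # Update step with a direct angle measurement z.
    H = np.array([[1.0, 0.0]])          # measurement selects the angle
    y = z - H @ x_pred                  # innovation
    S = H @ P_pred @ H.T + R            # innovation covariance
    K = P_pred @ H.T @ np.linalg.inv(S) # Kalman gain
    x_new = x_pred + (K @ y).ravel()
    P_new = (np.eye(2) - K @ H) @ P_pred
    return x_new, P_new

# Toy usage: recover a constant angular velocity from noisy angles.
rng = np.random.default_rng(0)
dt, omega = 0.01, 2.0                   # 100 Hz, 2 rad/s ground truth
x, P = np.zeros(2), np.eye(2)
Q = np.diag([1e-6, 1e-4])               # assumed process noise
R = np.array([[1e-3]])                  # assumed encoder noise variance
for k in range(1, 501):
    z = np.array([omega * k * dt + 0.03 * rng.standard_normal()])
    x, P = ekf_step(x, P, z, dt, Q, R)
```

After five simulated seconds, `x` approaches `[angle, 2.0]`, showing how the filter smooths raw sensor data into the state the multibody model needs.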

Running a multibody model in real time requires sufficient computational power on the considered hardware. While it is possible to perform the online computation on a classic computer with an *Intel* processor and a *Windows* OS [19], this may not be the most robust solution. Besides, if one aims at an embedded simulation, the hardware is often limited. In this regard, some studies have succeeded in running multibody models on other kinds of platforms, such as FPGAs [20] or ARM-based systems [21]. In particular, the above-mentioned symbolic approach has proven to be a good candidate for achieving real-time computation without having to resort to simplifications within the model [22].

Concerning the software implementation, new tools have been provided by the scientific community to combine integrated devices and simulators, mainly in the robotics field. The so-called *middleware* platforms are able to manage the communications between the different interfaces and facilitate code modularity and reuse. Rivera presented a recent summary of middleware platforms in [23], while using a Gazebo/ROS-based environment for his application. The ROS environment is now widely used for real-time robotics, for example, in [24] to build a real-time dynamic simulator of a complex robot.

In the recent past, our multibody software ROBOTRAN has been coupled with the YARP interface for humanoid applications [25], combining the real robot with the simulated one. Force–torque sensors and IMUs have been used on the real robot and successfully matched with their simulated equivalents.

Despite all this research, real-time multibody models are still rarely embedded in haptic devices, the goal being to improve the feedback for "rigid–rigid interaction" type applications, which represent only one of the multiple components of the haptic rendering discipline [26]. Haptic devices often rely on simplified dynamics to synthesize the haptic feedback, for example in the REPLALINK prototype [27] or in a piano action model [28] (see also Section 5.1). Advanced haptic devices embedding a full multibody model are beginning to appear in the literature [29,30], exhibiting promising prospects thanks to the joint evolution of modeling, computational, and technological tools.
