**1. Introduction**

Multi-sensory interaction aids learning, inclusion, and collaboration because it accommodates diverse cognitive and perceptual needs. Multi-sensory integration is an essential part of information processing, by which different forms of sensory information, such as sight, hearing, touch, and proprioception (also called kinesthesia, the sense of self-movement and body position), are combined into a single coherent experience. Information is typically integrated across sensory modalities when the sensory inputs share certain common features. Cross-modality refers to the interaction between two different sensory channels, and cross-modal correspondence denotes the surprising associations that people experience between seemingly unrelated features, attributes, or dimensions of experience in different sensory modalities. For visually impaired people, conventional human–computer interaction devices are inconvenient because they rely heavily on visual information. Although many studies have introduced other sensory modalities, such as haptics, sound, and scent, into the user interface to compensate for the absence of vision, these substitutes still fall far short of what vision provides to sighted users. The topics of interest include, but are not limited to, the following:

	- Flexible haptic displays;

This Special Issue contains ten research papers [1–10] and two review papers [11,12].
