### 3.1.4. Design Requirements

Based on the feedback obtained during the formative study, we identified the following design requirements for our interactive multimodal guide. Independent exploration is the most important need derived from the formative study. It stems largely from two factors: adequate access to the artwork and an information presentation method that facilitates understanding and experience. To improve independent exploration, the IMG should address the following:


### 3.2. Interactive Multimodal Guide (IMG)

Based on the design requirements identified in the formative study to address the limitations of tactile graphics and audio guides, we decided to develop an interactive multimodal guide. Our IMG uses a combination of tactile and audio modalities to communicate information and promote the exploration of visual artworks such as paintings. The tactile modality is covered by a 2.5-dimensional bas-relief model representation of the visual artwork. This model is accessible by touch, conveys the spatial and compositional information of the artwork, and serves as the primary input interface of the IMG. The audio modality is delivered through speakers or headphones and includes narrations, sounds, and background music to convey iconographic and iconological information. The following subsections cover the implementation of the several components of our proposed IMG.

### 3.2.1. 2.5D Relief Model

Users of the IMG can touch the 2.5-dimensional model to get an idea of the objects, textures, and their locations in the artwork. The main difference between a tactile graphic and a 2.5D model is that the latter can provide depth perception by giving volume to the objects in the model. There are several techniques to extract the topographical information needed to make a 2.5D model from artworks like paintings; three of them are 3D laser triangulation, structured light 3D scanning, and focus variation microscopy [47]. The advantage of these techniques is that they are highly automated and provide close-to-exact information for reproducing the artwork's surface. Blind and visually impaired people using a model designed with these techniques can perceive the direction of the strokes made by the artist, but often cannot recognize the objects. Only artworks made with simple strokes or rich in textures like splatter, impasto, or sgraffito are good candidates for models designed with these techniques. Instead, we decided to use a semi-automated hybrid approach built around a technique known as shape from shading (SFS) [48]. SFS requires only a single image of the painting to generate the depth information needed to create a 2.5-dimensional model [49]. We chose this technique for three reasons. First, we do not need direct access to the artwork; only a high-resolution image of the artwork is required to generate the depth information. Second, the process is automated and does not need specialized equipment like stereo cameras. Third, the output of the process is a greyscale height-map image that can be easily modified with any image editing software for corrections or, as in our case, to abstract, simplify, or accentuate features and objects in the image. The process to design a 2.5-dimensional relief model for use with our IMG is graphically described in Figure 1 and is as follows:
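The height-map stage described above can be sketched in a few lines of code. The following Python fragment is illustrative only: the function names, the depth range, and the accentuation step are our assumptions about how a greyscale SFS height-map might be mapped to physical relief depth and locally exaggerated for tactile legibility, not the authors' exact pipeline.

```python
import numpy as np

def heightmap_to_depth(gray, max_depth_mm=8.0, base_mm=2.0):
    """Map 8-bit grey levels (0-255) to physical relief depth in mm.

    White (255) becomes the highest point; a flat base plate of
    base_mm keeps the model rigid. Parameter values are illustrative.
    """
    norm = gray.astype(np.float64) / 255.0
    return base_mm + norm * max_depth_mm

def accentuate(depth, mask, gain=1.5):
    """Exaggerate relief inside a region of interest (e.g. one object
    in the painting) so it is easier to distinguish by touch."""
    out = depth.copy()
    region = depth[mask]
    out[mask] = region.mean() + (region - region.mean()) * gain
    return out

# Tiny synthetic height-map standing in for the SFS output.
gray = np.array([[0, 64], [128, 255]], dtype=np.uint8)
depth = heightmap_to_depth(gray)
print(depth)  # base plate of 2 mm, rising to 10 mm at pure white
```

In practice, this editing step is where objects are simplified or accentuated before the model is sent to production, as the text above notes.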


**Figure 1.** Touch-sensitive 2.5D relief model fabrication process. (**a**) Original image; (**b**) Greyscale height-map; (**c**) 2.5D digital model; (**d**) 2.5D printed model; (**e**) Conductive paint coat; (**f**) Completed 2.5D relief model.


Once the digital model of the relief is ready, there are several methods to produce it. We chose to 3D print it using a fused filament fabrication 3D printer due to the variety and low cost of the materials, as well as the popularity of the technology and the production services available (Figure 1d). It is also possible to 3D print the model using other methods, as long as the material is non-conductive; selective laser sintering (SLS) and stereolithography (SLA), for example, offer improved printing resolution at a higher cost. Another alternative is to use a CNC mill to carve the model out of a solid block of material.
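To move from the height-map to a printable file, the height grid has to be triangulated into a mesh format a slicer accepts. The sketch below is a minimal, assumed example of that conversion: it emits only the touchable top surface as ASCII STL (a real print-ready model would also need side walls and a bottom face), and all names are ours, not part of the IMG toolchain.

```python
import numpy as np

def heightmap_to_ascii_stl(z, cell_mm=1.0):
    """Triangulate the top surface of a height grid into ASCII STL.

    z is a 2-D array of heights in mm; each grid cell becomes two
    triangles. Sketch only: walls and bottom face are omitted.
    """
    rows, cols = z.shape
    tris = []
    for r in range(rows - 1):
        for c in range(cols - 1):
            p = lambda rr, cc: (cc * cell_mm, rr * cell_mm, float(z[rr, cc]))
            a, b = p(r, c), p(r, c + 1)
            d, e = p(r + 1, c), p(r + 1, c + 1)
            tris.append((a, b, e))
            tris.append((a, e, d))
    lines = ["solid relief"]
    for t in tris:
        lines.append("  facet normal 0 0 1")  # slicers recompute normals
        lines.append("    outer loop")
        for v in t:
            lines.append("      vertex %.3f %.3f %.3f" % v)
        lines.append("    endloop")
        lines.append("  endfacet")
    lines.append("endsolid relief")
    return "\n".join(lines)

z = np.array([[2.0, 4.0], [4.0, 10.0]])  # heights in mm
stl = heightmap_to_ascii_stl(z)
print(stl.count("facet normal"))  # one grid cell -> 2 triangles
```

The resulting file can be opened in any slicer or mesh editor; the non-conductive material requirement mentioned above applies regardless of which fabrication route is chosen.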

The relief model is the primary input interface of our IMG. Touch interactivity on the relief surface is implemented by treating the surface with conductive paint. Conductive paints are electrically conductive solutions composed of dissolved or suspended pigments and conductive materials such as silver, copper, or graphite. We chose a water-based conductive paint that relies on carbon and graphite for its conductive properties because it is easy to use, safe, and low cost. For our IMG, we used Electric Paint by Bare Conductive, but there are other suppliers in the market, as well as online guides for producing it yourself.

Once the relief model has been 3D printed, making touch-sensitive areas is a simple procedure that only requires painting the areas that must be sensitive with conductive paint. The only requirement is to keep each touch-sensitive area isolated from the others, as seen in Figure 1e; if two areas treated with conductive paint overlap, they will act as one. The conductive paint dries at room temperature and does not require any special post-processing. One limitation of this method is that while extending or adding zones to the relief model is as simple as painting more areas or extending existing ones, reducing or splitting existing zones is a more complicated process that involves scraping off or dissolving the paint. It is therefore recommended to plan the location and shape of the touch-sensitive areas in advance. Each sensitive area must be connected to the circuit board with a thin conductive thread or wire. To this end, holes can be included in the model design before production or made afterwards with a thin drill. Once the process is complete, the relief model can be sealed with a varnish or coating, which prevents smudging and acts as a protective layer. It is possible to add subsequent layers of paint to produce a range of more aesthetic finishes, such as a single-color finish, a colored reproduction (Figure 1f), or different color palette combinations to improve visibility.
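Because overlapping painted zones merge into a single electrical region, it can be useful to verify a planned paint layout before committing it to the model. The following Python sketch (our own illustration, not part of the IMG software) counts 4-connected painted regions in a pixel mask, which tells you how many independent electrodes the layout would actually yield.

```python
def count_touch_areas(mask):
    """Count 4-connected painted regions in a 0/1 pixel mask.

    Touching painted zones act as one electrode, so the connected-
    component count equals the number of independent touch areas.
    Pure-stdlib iterative flood fill; illustrative only.
    """
    rows, cols = len(mask), len(mask[0])
    seen = [[False] * cols for _ in range(rows)]
    count = 0
    for r in range(rows):
        for c in range(cols):
            if mask[r][c] and not seen[r][c]:
                count += 1
                stack = [(r, c)]
                while stack:
                    y, x = stack.pop()
                    if 0 <= y < rows and 0 <= x < cols \
                            and mask[y][x] and not seen[y][x]:
                        seen[y][x] = True
                        stack.extend([(y + 1, x), (y - 1, x),
                                      (y, x + 1), (y, x - 1)])
    return count

# Two painted areas separated by an unpainted gap column.
layout = [
    [1, 1, 0, 1],
    [1, 0, 0, 1],
]
print(count_touch_areas(layout))  # prints 2
```

Running the same check on a layout where two zones touch would report one region, matching the electrical behavior described above.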

### 3.2.2. Control Board

The control board is the processing center of the IMG. It receives touch sensor input from the 2.5D relief model described in Section 3.2.1 and from peripherals, processes the signals, and provides audio feedback. The control board is primarily composed of three components: an Arduino Uno microcontroller (Arduino, Somerville, MA, USA), a WAV Trigger polyphonic audio player board (SparkFun Electronics, Boulder, CO, USA), and an MPR121 proximity capacitive touch sensor controller (Adafruit Industries, New York, NY, USA). The wire leads from each of the touch-sensitive areas of the relief model connect to one of the electrode inputs of the MPR121 integrated circuit. The MPR121 measures the capacitance of each touch area in the relief model, which changes when the user touches the area, and communicates touch and release events to the microcontroller through an I2C interface. One MPR121 integrated circuit is limited to 12 electrodes, so it can handle input for up to 12 touch areas. While this was enough for our prototypes, if more touch areas are required, up to four MPR121s can be connected by configuring different I2C addresses, for a total of 48 touch areas. Beyond that, an I2C multiplexer such as the TCA9548A (Adafruit Industries, New York, NY, USA) can be used to further extend the number of supported touch zones. The microcontroller acts as the orchestrator of the control board: it receives input signals from the MPR121 and its general-purpose input/output ports, processes them, and, depending on the current state of execution, issues commands through its UART port to the audio board to trigger audio feedback. The WAV Trigger is a polyphonic audio player board that can play and mix up to 14 audio tracks simultaneously and outputs amplified audio through a mini-plug speaker connector. The audio files are read from an SD card and must be stored in WAV format.
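The core of the firmware logic described above is diffing consecutive electrode readings into touch and release events and mapping touches to audio commands. The actual firmware runs as C++ on the Arduino; the Python sketch below only models that logic. The MPR121 does report touch state as one bit per electrode, but the function names and the electrode-to-track mapping here are our illustrative assumptions.

```python
def touch_events(prev_mask, cur_mask, n_electrodes=12):
    """Diff two MPR121-style 12-bit touched bitmasks into events.

    The sensor reports one bit per electrode (1 = touched); comparing
    consecutive readings yields touch/release edges per electrode.
    """
    events = []
    for e in range(n_electrodes):
        prev, cur = (prev_mask >> e) & 1, (cur_mask >> e) & 1
        if cur and not prev:
            events.append(("touch", e))
        elif prev and not cur:
            events.append(("release", e))
    return events

def commands_for(events):
    """Map touch events to hypothetical audio-player commands:
    electrode e triggers track e + 1 (mapping is illustrative)."""
    return ["play %d" % (e + 1) for kind, e in events if kind == "touch"]

# Electrode 0 was released; electrodes 1 and 2 were just touched.
prev, cur = 0b000000000001, 0b000000000110
evts = touch_events(prev, cur)
print(evts)  # [('release', 0), ('touch', 1), ('touch', 2)]
print(commands_for(evts))  # ['play 2', 'play 3']
```

On the real board, the equivalent loop polls the MPR121 over I2C and sends the resulting track commands to the WAV Trigger over UART.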

### 3.2.3. External Hardware

Besides the relief model and the control board, the IMG includes a display enclosure. The enclosure was designed for different exploration scenarios. For our preliminary test, a portable box-shaped enclosure was made of laser-cut acrylic. The box itself acts as an exhibit: the relief model sits on its top surface, and the control board and electronics are housed in its interior. Headphones or speakers are connected to listen to the audio, and there is a button that the user can push to start using the IMG prototype. This prototype is meant to be placed on a desk and used in a seated position during the early preliminary tests, making its use more comfortable for longer periods. For the IMG evaluation, we designed an exhibition display made of plywood for standing use, as this is the display arrangement more frequently used in art museums and galleries. This version includes three physical buttons labeled in Braille to listen to the usage instructions, hear general information about the artwork, and change the speed of the audio. Headphones are located on the right side of the display. Depending on the size of the relief model or the floor space of the gallery, it might be difficult to explore the relief model if it is displayed horizontally or at a near-horizontal angle, so a full-size vertical display was also developed, as seen in Figure 2c.
