Article

How Blind Individuals Recall Mathematical Expressions in Auditory, Tactile, and Auditory–Tactile Modalities

by Paraskevi Riga and Georgios Kouroupetroglou *
Department of Informatics and Telecommunications, National and Kapodistrian University of Athens, Ilissia, GR-15784 Athens, Greece
* Author to whom correspondence should be addressed.
Multimodal Technol. Interact. 2024, 8(7), 57; https://doi.org/10.3390/mti8070057
Submission received: 10 May 2024 / Revised: 23 June 2024 / Accepted: 28 June 2024 / Published: 2 July 2024

Abstract

In contrast to sighted students, who can acquire mathematical expressions (MEs) from their visual sources whenever necessary, blind students must keep MEs in their memory using the Tactile or Auditory Modality. In this work, we investigate the ability of blind individuals to temporarily retain MEs when they use different input modalities: Auditory, Tactile, and Auditory–Tactile. In experiments with 16 blind participants, we measured the users’ capacity for memory retention via ME recall. Our results indicate that the distribution of recall errors over error types (Deletions, Substitutions, Insertions) and math element categories (Structural, Numerical/Identifiers, Operators) is the same across the tested modalities. Deletions are the most frequent recall error, while operators are the elements least likely to be forgotten. Our findings show a threshold, in terms of the type and number of elements in an ME, beyond which short-term memory becomes cognitively overloaded and recall decreases rapidly. The number of errors increases with complexity, and this increase is significantly higher in the Auditory Modality than in the other two. Therefore, segmenting a math expression into smaller parts will benefit the ability of blind readers to retain it in memory while studying.

1. Introduction

Speech technologies, particularly Text-to-Speech (TtS) systems, have been contributing significantly to digital accessibility since the invention of the first TtS engine in 1986 [1,2]. Nowadays, automated reading devices and screen readers [3,4] are extensively used by blind users to convert printed or electronic textual content into audible speech. In education, students with blindness use computers and mobile devices with Assistive Technology (AT) to access educational content and participate in the educational process [5]. These ATs use the following modalities:
(a) Auditory Modality, through TtS systems in connection with screen readers;
(b) Tactile Modality, by
  • reading texts in braille, either on embossed paper or on refreshable braille displays,
  • reading tactile images, or
  • manipulating 3D tactile artifacts;
(c) Auditory–Tactile Modality, by listening to an audio rendering output (e.g., from a screen reader or other AT) alongside reading the output braille (e.g., on a refreshable braille display or an embossed sheet of paper), or by using audio–tactile devices [6,7].
When math is in digital form, not just graphically presented but in code accessible to AT, it is commonly rendered either in the Tactile Modality, based on a braille math notation, or in the Auditory Modality, using a Math-to-Speech (MtS) system that complies with specific speech transformation rules.
In recent years, the acoustic rendering of mathematics has been explored and applied mainly at the research level. One of the most influential AT systems for making math accessible via speech and sound was AsTeR (Audio System for TEchnical Readings) [8], a tool that converted LaTeX [9] documents into a format usable as audio documents. MathTalk [10] was developed to speak standard algebra notation through a speech synthesizer, using prosody to make math more accessible and to give the user control over the information flow. AudioMath [11] was introduced as an application that converts mathematical expressions (MEs) from the MathML [12] format to plain text and, along with a TtS system, reads out the mathematical content. MathSpeak, which incorporated a set of rules for speaking MEs unambiguously [13,14], became a component of MathPlayer [15]. Localization (i.e., adaptation to a specific native language), support of multilingual mathematical or textual content, cultural differences, and user preferences are among the open challenges that influence the behavior of advanced MathML players [16]. Local implementations of audio math rendering have been proposed for Thai [17], Polish [18], and Korean [19]. Ongoing research on advanced MtS is aimed at navigating mathematical structures.
Nowadays, some screen readers apply the acoustic rendering of mathematics. They either incorporate the ability to speak math (JAWS, VoiceOver with Safari) or achieve it with the help of plugins (MathPlayer [20] and MathJax [21]) or browser extensions (ChromeVox).
The rules for the acoustic rendering of mathematics are less extensive, regarding notations and coverage of MEs, than the braille notations of math. One reason is that braille notations, such as Nemeth [22], provide rules to extend the given symbols and create new ones at any time. Moreover, in Tactile rendering, as in the visual representation, readers are responsible for interpreting the role of a symbol, which can take different names depending on the context: the symbolic operator ∇, for example, could be read as “nabla”, as “del” in vector analysis, as “backward difference” in the calculus of finite differences, as “widening operator” in the computer science field of abstract interpretation, and more. Speech rules in existing systems do not yet provide this “smart” interpretation and instead use generic or descriptive names for such symbols, as illustrated below. Contextual semantic analysis has recently been proposed [23] to address this shortcoming.
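As a toy illustration of such context-dependent naming (a sketch of the idea, not of any existing MtS system’s API):

```python
# A toy sketch of context-dependent symbol naming: the same operator symbol
# is spoken differently depending on a context label supplied by the caller.
SYMBOL_NAMES = {
    "\u2207": {                                  # the nabla symbol
        "vector analysis": "del",
        "finite differences": "backward difference",
        "abstract interpretation": "widening operator",
        None: "nabla",                           # generic fallback reading
    },
}

def speak_symbol(symbol: str, context: str | None = None) -> str:
    names = SYMBOL_NAMES.get(symbol, {})
    return names.get(context, names.get(None, symbol))

print(speak_symbol("\u2207"))                    # -> "nabla"
print(speak_symbol("\u2207", "vector analysis"))  # -> "del"
```

Existing speech rule sets behave like the fallback branch; the contextual analysis proposed in [23] aims to supply the missing context label automatically.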
Braille math notations that are currently in use include the Antoine Notation (French Braille Code), Nemeth Code, Unified English Braille Code (UEB), British Mathematics Notation (BAUK), Spanish Unified Mathematics Code, Marburg Mathematics (German Code), Woluwe Code (Notaert Code), Italian Braille Code, Swedish Braille Code, Finnish Braille Code, Russian Code, Chinese Code, and Arabic Code [24]. Some of them are solely and others are partially dedicated to mathematics. As their names suggest, the codes differ from country to country, and no global braille notation is in use, unlike in math for the sighted. Given the linearity of braille and the finite number of symbols to be represented in a single braille cell, these codes contain complex rules to convey mathematical symbols and structures in a space-saving fashion [25].
Other written systems or codes used in some form by blind people include LaTeX and MathML. LaTeX is widely used to create technical and scientific documents, and blind people studying STEM subjects in higher education train themselves to read and write LaTeX source code as an option for accessing mathematical content. LaTeX is sometimes used as alternative text for MEs incorporated in a document or a webpage as images. It can be used as input to some commercial math accessibility products, such as MathType [26], DBT [27], Tiger Software Suite [28], and ChattyInfty [29]. Different efforts have been made either to make LaTeX itself more accessible to the visually impaired [30,31] or to convert LaTeX to an accessible format (e.g., braille) [32,33]. MathML is not meant to be written or read in source code but is a code for mathematics on the Web; therefore, it is used as input to some of the AT systems mentioned above.
When sighted people read an ME, it has been observed that they (a) read from left to right, element by element, (b) back-scan the expression, (c) substitute the outcome of a parenthetical expression, and (d) scan the entire ME for creating a schematic structure [34]. These observations were supported by experiments conducted with sighted participants in Visual Modality. In contrast to sighted students who can acquire MEs from their sources whenever necessary, blind students must keep them in their memory [35].
Working memory has received much attention as a source of improved cognitive function in middle childhood. It is considered the “active” memory system, which holds and manipulates the information needed to reason about complex tasks and problems [36]. A standard behavioral method for measuring the changing capacity of working memory is to assess children’s memory span, that is, the number of randomly presented pieces of information that children can repeat back as soon as they are presented [37]. Researchers divide memory into two stages: short-term memory, lasting from seconds to hours, and long-term memory, lasting from hours to months [38]. According to [39], auditory information remains in short-term memory for around 10–30 s.
As mentioned, there are two modalities for blind students: the Auditory and the Tactile. The first step in mathematical problem solving is the ability to hold the information in memory. The literature on recall and working memory of blind people typically addresses children and usually, but not always, relates to text [40,41]. No previous work has addressed the recall of whole MEs, which contain structural elements, operators, numerical elements, and identifiers, as opposed to plain number sequences. When comparing auditory versus tactile encoding, blind and braille-literate children recall more words encoded in braille than words they listen to [42]. The same has not yet been confirmed for math.
In this work, we examine the ability of blind individuals to temporarily retain an ME when they use different input modalities. We measure the capacity of one’s memory retention by ME recall. Our goal is to answer the following questions:
i. Is there a threshold of cognitive overload, in terms of the type and number of elements in an ME, beyond which recall rapidly decreases?
ii. Does any modality provide better chances of ME recall to blind users?

2. Materials and Methods

This study is based on experiments that addressed user experience with MEs in terms of representation, not calculation. Specifically, blind individuals were invited to read (using the Tactile, Auditory, or Auditory–Tactile Modality) and then recall three sets of similar MEs in a three-unit experiment. The approach was influenced by the EAR-Math evaluation methodology for audio-rendered MEs [43], modified accordingly to incorporate the Tactile Modality. Participants were asked to recall the representation of the MEs.

2.1. Participants

Sixteen blind volunteers (age: 21.25 ± 5.98 years; eight males, eight females; education: 13.27 ± 3.86 years) participated in this study. All of them had a visual loss of 95–100% in both eyes. All participants had a good grasp of the braille code for both literal and math texts (braille users for 15.18 ± 5.77 years). They all reported being active users of embossed braille and had read math during the two years prior to the experiment. Regarding their education, all users attended an elementary school for the blind for 2–6 years, depending on when they became blind, followed by inclusive education in secondary school. The level of mathematical education received was the same for all participants; however, their competence in the subject was not measured, since no computations were required on their part. All the participants spoke Greek as their primary language and used screen readers daily. None of the participants had any other disability (e.g., hearing or dexterity impairment) or had been diagnosed with a learning difficulty. They all confirmed that they fully understood the experimental procedure of the current study and signed a written consent form for their participation. For the underage participants, an additional parental consent form was signed. All consent documents were provided in both printed and embossed form. The research followed the tenets of the Declaration of Helsinki and was approved by the Ethics Committee of the National and Kapodistrian University of Athens.

2.2. Materials

The MEs used in the stimuli were based on those introduced in Raman’s AsTeR [8]. Our set included simple fractions and expressions, superscripts and subscripts, Knuth’s examples of fractions and exponents, a continued fraction, square roots, trigonometric identities, logarithms, series, integrals, summations, limits, cross-referenced equations, the distance formula, a quantified expression, and exponentiation. Well-known expressions, such as the Pythagorean theorem and common trigonometric identities, were excluded to avoid implicit associative responses. All the mathematical concepts included in the stimuli are taught as part of the Greek secondary school curriculum. A total of 25 expressions were initially selected.
Using the Presentation MathML syntax, MEs can be regarded as trees where each node corresponds to a MathML element, the branches under a “parent” node correspond to its “children”, and the leaves correspond to atomic notation or content units such as numbers, characters, etc. [44]. For this work, we chose to address the three element categories of presentation token elements, namely (a) structural elements, (b) identifiers and numbers, and (c) operators. As an example, the syntax tree of the math expression $e^{(\alpha\chi+\beta\chi+\chi)}$ is depicted in Figure 1; a code sketch of this classification follows below.
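To make the categorization concrete, the following is a minimal sketch (not the authors’ code) that parses one possible Presentation MathML rendering of the Figure 1 expression and tallies the three element categories; the exact markup, the set of tags treated as structural (including <mrow>), and the counting of parentheses as operators are our assumptions.

```python
# Classify Presentation MathML token elements into the three categories used
# in this paper and compute the complexity C = S + N + O of Section 2.4.
import xml.etree.ElementTree as ET

MATHML = """
<math xmlns="http://www.w3.org/1998/Math/MathML">
  <msup>
    <mi>e</mi>
    <mrow>
      <mo>(</mo>
      <mi>&#x3B1;</mi><mi>&#x3C7;</mi><mo>+</mo>
      <mi>&#x3B2;</mi><mi>&#x3C7;</mi><mo>+</mo>
      <mi>&#x3C7;</mi>
      <mo>)</mo>
    </mrow>
  </msup>
</math>
"""

STRUCTURAL = {"msup", "msub", "mfrac", "msqrt", "mroot", "mrow", "munderover"}

def classify(root):
    counts = {"S": 0, "N": 0, "O": 0}
    for node in root.iter():
        tag = node.tag.split("}")[-1]      # strip the XML namespace prefix
        if tag in STRUCTURAL:
            counts["S"] += 1               # structural element
        elif tag in ("mi", "mn"):
            counts["N"] += 1               # identifier or number
        elif tag == "mo":
            counts["O"] += 1               # operator
    return counts

counts = classify(ET.fromstring(MATHML))
print(counts, "C =", sum(counts.values()))
```

Under these assumptions, the markup above yields S = 2, N = 6, and O = 4, i.e., a complexity of C = 12.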
Each of the three experimental units provides the user with 25 MEs in random order (25 expressions × 3 sets = 75 total stimuli) (see Appendix A). We created two extra variation sets of the initially selected expressions to prevent participants from learning the original expressions through mnemonic strategies. The expressions in the three sets had identical structures and the same numbers of identifiers and operators; they differed only in which identifiers and operators were used when moving from one set to another. We wanted them to maintain similarity to the initially selected set and have the same level of difficulty while being different.
The expressions chosen from different math areas were also given in random order to each unit and user to ensure that the deliberate use of practices to enhance memorization [45] was minimal, if existent.
In their tactile form, the MEs were embossed in the Nemeth Code on dust-free 160 g/m² A4 paper, one per sheet, centered, in landscape orientation (Figure 2), using an Index Everest V4 embosser. The ME was also written in print above the tactile form to aid the researcher in following the expression while a participant read it aloud.
In their auditory form, the MEs were pre-recorded using MathPlayer with the Acapela Text-to-Speech Greek voice Dimitris, a voice familiar to all participants, at the default speech rate and pitch. The users could adjust only the sound level to match their individual needs. The participants did not have the option to navigate within the MEs.
We replaced the embossed test stimuli sets for each group of eight participants to avoid paper deterioration caused by intensive use, similar to the attrition of braille books after extended use.

2.3. Experimental Procedure

Initially, a researcher briefly described to each participant the study’s objectives, the experimental procedure, and how to complete each task.
Before the experiment, (i) users were trained in the audio rules used by MathPlayer, and (ii) the Greek braille system and the Nemeth braille code were reviewed. To complete the training phase, users were then asked to read and write 15 MEs to ensure they understood the audio rules and could write in the Nemeth code. The expressions used in the training phase were those from AsTeR that were left out of the experiment phase. The whole training lasted one hour.
The experiment was conducted in three units with a one-day gap between them. The units were (1) Tactile, (2) Auditory, and (3) Auditory–Tactile, assigned randomly to each participant. One blind individual at a time participated in the experiment conducted in a quiet room. The experiment was set in a quiet environment not to interfere with users’ concentration and achieve maximum information retention. During an experimental unit, participants sat on a chair with adjustable height in front of a desk. To note their answers, the researcher placed a Perkins braille machine and A4 120 g/m2 paper sheets on the desk (Figure 3). The researcher was responsible for providing each stimulus to the user (embossed sheets and/or audio recordings).
Our experiment relied extensively on the users’ short-term memory and was not designed to require any computing on their part. Under Baddeley’s [46,47] multi-component model of working memory, in both the Tactile and Auditory Modalities the users would temporarily store the MEs in the speech-based phonological loop. The tactile presentation of the MEs was given in a horizontal format, as in the auditory presentation. When reading in braille, we asked the users to read the MEs aloud so that they treated them as math and not as text. Just as individuals presented with a multi-digit arithmetic problem in a visual format may translate the visually presented information into a phonological code for temporary storage [48], we ensured that our users translated the tactile information into a phonological code for temporary storage. In translating the tactile input into a phonological code, participants had to use the input sensory recording and retrieve the meaning of the braille codes from their long-term memory. We hypothesized that users would benefit from this extra processing and would therefore show better results in the tactile part.
The participants were asked to read/hear each stimulus only once (the users were not allowed to repeat the material they had to memorize [49]) and to then write on the braille machine as much as they remembered of the ME. In the Tactile unit, the users were also asked to recite what they were reading so that we could check that they recognized the mathematical symbols rather than mere braille symbols.
In the Auditory–Tactile experimental unit, the MEs were first presented in embossed form in the Nemeth code. Once a participant finished reading the braille, the auditory version of the same expression was rendered, and they were then asked to write on the braille machine as much as they remembered of the expression.
The Tactile and Auditory–Tactile parts of the experiment were video recorded so that reading times could be determined in later analysis. In the case of Tactile reading, the recording focused both on the stimuli and on the hands of the participants. After the end of each experiment, the experimenter rewatched the video recordings and measured all reading times with a stopwatch: the timer started when the user first touched the embossed expression and stopped when the user took their hands off the printed paper.
The embossed paper sheet was fixed on the desk’s surface, and participants were allowed to explore stimuli freely with both hands and all fingers, as in that case, a more detailed examination could be performed effectively.
Each recall trial ended when the participant announced that they had finished writing. The procedure of an experimental unit was repeated until all 25 stimuli of the same set were tested. The sequence of the units, the stimulus set used for each unit, and the sequence of stimuli within each test were randomly selected for each participant by computer software (a sketch of such randomization follows below). The users visited the MEs sequentially and only once in all modalities.
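A minimal sketch (assumed for illustration, not the authors’ software) of this per-participant randomization:

```python
# Each participant receives a random order of the three modality units, a
# random assignment of stimulus sets to units, and a shuffled order of the
# 25 stimuli within each unit.
import random

MODALITIES = ["Tactile", "Auditory", "Auditory-Tactile"]
SETS = ["A", "B", "C"]

def plan_for(participant_id: int, n_stimuli: int = 25) -> dict:
    rng = random.Random(participant_id)   # reproducible per participant
    units = rng.sample(MODALITIES, k=3)   # order of the three units
    sets = rng.sample(SETS, k=3)          # which stimulus set each unit gets
    return {
        unit: {"set": s,
               "order": rng.sample(range(1, n_stimuli + 1), k=n_stimuli)}
        for unit, s in zip(units, sets)
    }

print(plan_for(1)["Tactile"])   # e.g., {'set': 'B', 'order': [17, 3, ...]}
```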

2.4. Data Analysis

The primary outcome of this study is the number of recall errors (RE), and the main question is whether the RE varies significantly between the three modalities: Auditory (A), Tactile (T), and Auditory–Tactile (A–T). The proportional distribution of the RE is described and compared across (a) error types: Deletions (D), Substitutions (U), and Insertions (I); (b) elements: Structural (S), Numerical or Identifiers (N), and Operators (O); and (c) the combination of error types and elements [50,51].
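The following is a sketch of how the three error types can be extracted by aligning the presented and recalled element sequences with edit-distance dynamic programming, in the spirit of the metric extraction methods of [50,51]; the exact scoring and tokenization used in the paper are not reproduced here.

```python
# Count Deletions (D), Substitutions (U), and Insertions (I) between the
# presented and the recalled element sequences via edit-distance alignment.
def recall_errors(presented: list[str], recalled: list[str]) -> dict:
    m, n = len(presented), len(recalled)
    # dp[i][j] = minimum edit cost between presented[:i] and recalled[:j]
    dp = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m + 1):
        dp[i][0] = i
    for j in range(n + 1):
        dp[0][j] = j
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            sub = dp[i - 1][j - 1] + (presented[i - 1] != recalled[j - 1])
            dp[i][j] = min(sub, dp[i - 1][j] + 1, dp[i][j - 1] + 1)
    # backtrack through the table to count each error type
    counts, i, j = {"D": 0, "U": 0, "I": 0}, m, n
    while i > 0 or j > 0:
        if i > 0 and j > 0 and dp[i][j] == dp[i - 1][j - 1] + (presented[i - 1] != recalled[j - 1]):
            counts["U"] += presented[i - 1] != recalled[j - 1]
            i, j = i - 1, j - 1
        elif i > 0 and dp[i][j] == dp[i - 1][j] + 1:
            counts["D"] += 1   # element omitted in the recall
            i -= 1
        else:
            counts["I"] += 1   # element added in the recall
            j -= 1
    return counts

print(recall_errors(list("x+y^2"), list("x+y")))   # {'D': 2, 'U': 0, 'I': 0}
```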
The mean values of recall errors were compared between the two genders with the independent samples t-test and between modalities, error types, elements, and their two-way and three-way interactions with the three-factor ANOVA, followed by pairwise comparisons with Bonferroni adjustment.
Moreover, using regression techniques, the distribution of the RE was tested against the complexity of the MEs, where the complexity (C) of an expression is defined as the total number of structural elements (S), numerical elements or identifiers (N), and operators (O) it contains:

C = S + N + O
Finally, we used repeated measures ANOVA to evaluate under which complexity conditions the RE significantly differs between the three modalities. The level of significance was set at 0.05.
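As an illustration of this pipeline, here is a hedged sketch in Python with pandas, SciPy, and statsmodels; the paper does not state which statistical software was used, and the file and column names (recall_errors.csv, re, gender, modality, errtype, element, complexity) are hypothetical. The repeated-measures structure and the Bonferroni post hoc tests are omitted for brevity.

```python
# One row per participant x modality x error type x element, with the number
# of recall errors in column "re" and the ME complexity in "complexity".
import pandas as pd
import statsmodels.api as sm
from statsmodels.formula.api import ols
from scipy import stats

df = pd.read_csv("recall_errors.csv")   # hypothetical file name

# Independent-samples t-test between the two genders
male = df.loc[df["gender"] == "m", "re"]
female = df.loc[df["gender"] == "f", "re"]
print(stats.ttest_ind(male, female))

# Three-factor ANOVA: modality x error type x element, with all interactions
model = ols("re ~ C(modality) * C(errtype) * C(element)", data=df).fit()
print(sm.stats.anova_lm(model, typ=2))

# Linear regression of recall errors on complexity C = S + N + O
reg = ols("re ~ complexity", data=df).fit()
print(reg.params, reg.conf_int(alpha=0.05))
```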

3. Results and Discussion

3.1. Descriptive Statistics

The sixteen participants committed a total of 5408 recall errors across the three modalities. Figure 4A shows that this number was not evenly distributed between the three modalities: there were 2403 errors in the Auditory Modality, 1606 in the Tactile Modality, and 1399 in the Auditory–Tactile Modality. This is a first indication that performance in this experiment was inferior in the Auditory Modality, with the Auditory–Tactile Modality performing somewhat better than the Tactile Modality.
Figure 4B shows that most recall errors were deletions, i.e., cases where participants omitted an element. Substitutions and insertions of elements were much less frequent.
The majority of recall errors were committed with structural elements (Figure 4C). However, the ME sets contained different numbers of identifiers, structures, and operators: 192 identifiers, 163 structures, and 103 operators. Therefore, the correct approach is to divide the total number of recall errors for identifiers, structures, and operators by the total number of items in each category to obtain the mean number of recall errors per element type, as sketched below. With this normalization, recall errors appear evenly distributed across the three element types (Figure 4D).
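A minimal sketch of this normalization; the item counts are from the text, but the raw per-category error totals below are hypothetical placeholders, since Figure 4C reports them only graphically.

```python
# Mean recall errors per item: divide each category's raw error total by the
# number of items in that category.
ITEM_COUNTS = {"structures": 163, "identifiers": 192, "operators": 103}
raw_errors = {"structures": 2400, "identifiers": 2100, "operators": 900}  # hypothetical

per_item = {cat: raw_errors[cat] / n for cat, n in ITEM_COUNTS.items()}
print(per_item)   # mean recall errors per item in each category
```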
Table 1 presents the time spent on each ME in terms of mean, minimum, maximum, and standard deviation values. Users spent less time on the tactile part of the Auditory–Tactile Modality than on the Tactile Modality. Still, this difference is insufficient to offset the time spent on the auditory part of the Auditory–Tactile Modality, making it the most time-consuming modality.

3.2. Inferential Statistics

There were no significant differences in the mean numbers of recall errors between the two genders (t-test, t(430) = 1.947, p = 0.052; Figure 5A). Three-factor analysis of variance revealed that all three factors had a significant effect on the mean number of recall errors: Modality (F(2,405) = 10.8, p < 0.01), Error type (F(2,405) = 111.0, p < 0.01), and Element (F(2,405) = 15.5, p < 0.01). There were no significant two-way or three-way interaction effects. Post hoc pairwise comparisons with Bonferroni adjustment revealed that (a) the mean number of recall errors in the Auditory Modality was significantly greater than in the Tactile Modality (p = 0.022) and the Auditory–Tactile Modality (p = 0.018) (Figure 5B); (b) the mean number of recall errors of the deletion type was significantly greater than those of the insertion type (p < 0.01) and the substitution type (p < 0.01) (Figure 5C); and (c) the mean number of recall errors in operators was significantly lower than in identifiers (p < 0.01) and structures (p < 0.01) (Figure 5D).
The absence of interaction effects means that the relative number of recall errors in each modality is independent of the error types and the elements. This allows us to investigate the dependence of the number of recall errors on the complexity of the MEs and to evaluate under which complexity conditions the per-participant RE differs significantly between the three modalities.
Contrary to what might be expected, the dependency of the number of recall errors on the complexity of the MEs is best described by a linear equation rather than a power or an exponential function (Figure 6A). This means that the number of recall errors is expected to increase linearly, in proportion to the increase in the complexity of the expression. According to the regression equation, an increase of two items in the complexity of an ME results in roughly one additional recall error.
Furthermore, it seems (Figure 6B) that the linear relationship between the number of recall errors and the complexity of the expression is different in the three modalities.
Table 2 presents the parameters of the linear regression equations of the dependency of the number of errors (RE) on the complexity (C) of the ME for the three modalities, RE = a + bC, where a is the constant and b is the coefficient (slope) of the equation.
The 95% confidence interval (CI) for the coefficient b in the Auditory Modality does not overlap with the CIs for the other two modalities. Thus, the coefficient in the Auditory Modality (0.570) is significantly greater than the coefficients in the other two modalities (0.449 and 0.389). This means that the increase in the number of errors caused by an increase in complexity is significantly greater in the Auditory Modality than in the other two modalities.
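For illustration, the fitted equations in Table 2 predict, for an ME of complexity C = 20, the following numbers of recall errors per participant:

RE(Auditory) = −4.4 + 0.570 × 20 = 7.0
RE(Tactile) = −4.2 + 0.449 × 20 ≈ 4.8
RE(Auditory–Tactile) = −3.6 + 0.389 × 20 ≈ 4.2

That is, the same medium-complexity expression is expected to cost a listener in the Auditory Modality roughly two to three more recall errors than a braille reader.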
Table 3 presents the p-values of the three pairwise comparisons of the recall errors between the three modalities, separately for each ME complexity. In expressions of low complexity (up to 10 items), the participants performed equally well in all three modalities. From a medium complexity of 11 items up to 35 items, participants performed significantly better in the Tactile and especially the Auditory–Tactile Modalities than in the Auditory Modality (p-values below 0.05 in Table 3). Finally, the participants performed equally poorly in the high-complexity expression containing 46 items.

4. Conclusions

In this investigation, we worked with blind users active in learning and with math content that was not randomly generated but is the kind one may come across in a textbook, consisting not only of numbers but also of variables, symbols, operators, and functions. We experimented in three settings, with the Auditory, Tactile, and Auditory–Tactile Modalities, in an experiment designed to measure the users’ short-term memory capacity in terms of ME recall. The questions we posed were answered in the following conclusions.
The first conclusion from the statistical analysis is that the distribution of the recall errors regarding the error types and elements is the same across the tested modalities.
Second, deletions are by far the most common type of recall error, even though participants were asked to write down every part of a given ME they could recall and not to omit parts they did not feel they had retained correctly.
A third conclusion is that recall errors in operators are less frequent than in structures and identifiers, which is in accordance with the results of a similar experiment conducted on sighted students in Visual Modality [34].
Fourth, the complexity of the MEs (i.e., the total number of math elements) affects the recall capabilities of the participants, as expected, because of the augmented cognitive load. The number of recall errors depends linearly on the complexity of the expression. However, the increase in the number of errors caused by an increase in complexity is significantly greater in the Auditory Modality than in the other two modalities. In expressions of medium complexity, the participants’ performance in the Auditory Modality is substantially worse than in the other two modalities. Expressions of low complexity are easily recalled, while expressions of high complexity are not, irrespective of modality. Therefore, our hypothesis that participants perform worse in the Auditory Modality than in the Tactile and Auditory–Tactile Modalities is confirmed for expressions of medium complexity. These expressions are neither short enough to fit entirely in one’s short-term memory nor so long that one cannot benefit from using long-term memory in the tactile mode.
The current study constitutes a first step toward recommendations to be considered when designing math educational material for people who are blind. It is a given that educators must make math content accessible in different modalities, depending on the student’s preferences. Our findings suggest that the extraneous cognitive load cannot be eased by choosing a specific modality in favor of another, but for medium complexity, math braille is a better choice. Thus, long MEs should be given to the student in smaller parts, as proposed previously [52]. While cognitive accessibility [53] aims to make content usable for people with cognitive and learning disabilities, based on our results, the length of the MEs embedded in text should also be considered by both content creators and (semi)automatic accessibility checkers.
If students are given control of a lengthy ME over audio, they can pause it whenever they see fit, thereby segmenting it themselves. Automatic segmentation would be preferable to self-segmentation because it could be performed at different structural levels rather than at random places, allowing users to listen to complete sub-expressions (see the sketch below). Pre-recorded audio of math is therefore not preferable to fully accessible content that students can access multimodally. In real-life circumstances, e.g., in a textbook, a long and complex ME is usually built up over several steps/expressions, so readers can use prior knowledge to recall the new expression. However, whether this prior knowledge augments recall is unproven and thus requires further research.
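As an illustration (our sketch, not a published algorithm), a structure-aware segmenter can split a linearized ME only at top-level plus/minus operators, so that each spoken chunk ends at a complete sub-expression:

```python
# Split a linear ME at top-level "+"/"-" operators only, never inside
# brackets, so each audio chunk is a complete sub-expression rather than a
# random cut. Unary leading signs are not handled; this is a sketch.
def segment(expression: str) -> list[str]:
    chunks, depth, start = [], 0, 0
    for i, ch in enumerate(expression):
        if ch in "([{":
            depth += 1
        elif ch in ")]}":
            depth -= 1
        elif ch in "+-" and depth == 0 and i > start:
            chunks.append(expression[start:i + 1])   # keep the operator
            start = i + 1
    chunks.append(expression[start:])
    return [c.strip() for c in chunks if c.strip()]

print(segment("x - x^2/2 + x^3/3 - x^4/4"))
# -> ['x -', 'x^2/2 +', 'x^3/3 -', 'x^4/4']
```

A production system would more likely segment the MathML tree itself, e.g., at the children of the outermost mrow, rather than a flat string.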
As shown, providing Tactile or Auditory–Tactile content to students increases their ability to retain and recall an ME. These modalities also prove valuable when an ME contains ambiguous symbols whose meanings depend on the context.
In 2017, in the USA, blind people represented less than 5% of all the science, technology, engineering, and mathematics (STEM) workforce [54]. If interest in STEM is lost in the educational years, then we believe we should try to make STEM content more interesting by making it more accessible also at a cognitive level. Since multimodal interaction and technologies are a given for blind people and there is a constant interest in research to exploit newer technologies in pursuit of accessibility, the technologies created for math should offer users access to different modalities and assist them in decreasing the cognitive load and achieving better recall.
In the future, we plan to exploit the video recordings of our experiment further. We want to study the users’ finger movements, pauses, and backtracking and check whether they are somehow in accordance with how sighted users look at MEs [55].

Author Contributions

Conceptualization, P.R. and G.K.; methodology, P.R.; validation, G.K.; investigation, P.R.; data curation, P.R.; writing—original draft preparation, P.R.; writing—review and editing, G.K.; visualization, P.R.; supervision, G.K.; project administration, G.K. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

The study was conducted in accordance with the Declaration of Helsinki and approved by the Ethics Committee of the National and Kapodistrian University of Athens (project no. 11172/2018).

Informed Consent Statement

Informed consent was obtained from all participants involved in the study. For the underage participants, an additional parental consent form was signed. All consent documents were provided in both printed and embossed form.

Data Availability Statement

The data presented in this study are available on request from the corresponding author due to privacy and ethical reasons.

Acknowledgments

We thank the Panhellenic Association of the Blind, as well as the Center for Education and Rehabilitation for the Blind, Athens, Greece, for their contribution to recruiting the participants.

Conflicts of Interest

The authors declare no conflicts of interest.

Appendix A. The Three Sets of Stimuli Used in the Experiment

No. | Set A | Set B | Set C
1 | $\alpha+\beta+\gamma+\delta$ | $\alpha-\beta-\gamma-\delta$ | $\alpha-\beta+\gamma-\delta$
2 | $\frac{\alpha+\beta}{\gamma+\delta}$ | $\frac{\alpha-\beta}{\gamma-\delta}$ | $\frac{\alpha+\beta}{\gamma-\delta}$
3 | $\alpha+\frac{\beta}{\gamma}+\delta$ | $\alpha-\frac{\beta}{\gamma}-\delta$ | $\alpha+\frac{\beta}{\gamma}-\delta$
4 | $\frac{\alpha}{\beta}+\gamma+\delta$ | $\frac{\alpha}{\beta}-\gamma-\delta$ | $\frac{\alpha}{\beta}+\gamma-\delta$
5 | $(\alpha+\beta)(\gamma+\delta)$ | $(\alpha-\beta)(\gamma-\delta)$ | $(\alpha+\beta)(\gamma-\delta)$
6 | $\chi_1^{\kappa}+\chi_2^{\kappa}+\chi_3^{\kappa}+\cdots+\chi_\nu^{\kappa}=0$ | $\chi_2^{\alpha}-\chi_4^{\alpha}-\chi_6^{\alpha}-\cdots-\chi_\nu^{\alpha}=0$ | $\chi_0^{\kappa}\times\chi_1^{\kappa}\times\chi_2^{\kappa}\times\cdots\times\chi_\nu^{\kappa}=1$
7 | $\chi_{\kappa_1}+\chi_{\kappa_2}+\chi_{\kappa_3}+\cdots+\chi_{\kappa_\nu}=0$ | $\chi_{\alpha_2}-\chi_{\alpha_4}-\chi_{\alpha_6}-\cdots-\chi_{\alpha_\nu}=0$ | $\chi_{\kappa_0}\times\chi_{\kappa_1}\times\chi_{\kappa_2}\times\cdots\times\chi_{\kappa_\nu}=1$
8 | $\frac{\chi+\psi^2}{\kappa+1}$ | $\frac{\chi-\psi^2}{\beta+1}$ | $\frac{\chi+\psi^2}{\alpha-1}$
9 | $\chi+\psi^{\frac{2}{\kappa+1}}$ | $\chi-\psi^{\frac{2}{\beta+1}}$ | $\chi+\psi^{\frac{2}{\alpha-1}}$
10 | $\chi^{2\psi}\ne\chi^{2^{\psi}}$ | $\chi^{3\psi}\ne\chi^{2^{\psi}}$ | $\chi^{2\psi}\ne\psi^{2^{\chi}}$
11 | $1+\cfrac{\chi}{1+\cfrac{\chi}{1+\cfrac{\chi}{1+\cfrac{\chi}{1+\chi}}}}$ | $1-\cfrac{\kappa}{1-\cfrac{\kappa}{1-\cfrac{\kappa}{1-\cfrac{\kappa}{1-\kappa}}}}$ | $\chi+\cfrac{1}{\chi+\cfrac{1}{\chi+\cfrac{1}{\chi+\cfrac{1}{\chi+1}}}}$
12 | $\sqrt{\frac{\pi}{2}}\ne\frac{\pi}{2}$ | $\sqrt{\frac{\pi}{3}}\ne\frac{\pi}{2}$ | $\sqrt{\frac{\pi}{2}}=\frac{\pi}{4}$
13 | $\sin^{-1}\chi\ne\sin\chi^{-1}$ | $\sin^{2}\chi\ne\cos\chi^{-1}$ | $\sin^{-1}\chi=\sin\psi^{-1}$
14 | $\log^{2}\chi\ne 2\log\chi$ | $\log^{2}\chi=2\log\psi$ | $\log^{3}\chi\ne 3\log\chi$
15 | $1+\chi+\chi^{2}+\cdots+\chi^{\nu-1}+\cdots=\frac{1}{1+\chi}$ | $1+\alpha+\alpha^{2}+\cdots+\alpha^{\kappa-1}+\cdots=\frac{1}{1-\alpha}$ | $1-\chi-\chi^{2}-\cdots-\chi^{\nu-1}-\cdots\ne\frac{1}{1-\chi}$
16 | $\chi-\frac{\chi^{2}}{2}+\frac{\chi^{3}}{3}\pm\cdots=\log(1+\chi)$ | $\alpha+\frac{\alpha^{2}}{2}+\frac{\alpha^{3}}{3}+\cdots=\log(1+\alpha)$ | $\chi-\frac{\chi^{2}}{2}+\frac{\chi^{3}}{3}\pm\cdots\ne\log(2\times\chi)$
17 | $\int\frac{d\chi}{\chi}=\log\chi$ | $\int\frac{d\psi}{\psi}=\log\psi$ | $\int\frac{d\chi}{\chi}\ne\log\psi$
18 | $\int_{1}^{\infty}e^{-\chi^{2}-\chi-1}\,d\chi$ | $\int_{0}^{\infty}e^{-\chi^{2}-\chi+1}\,d\chi$ | $\int_{2}^{\infty}e^{-\chi^{3}+\chi+3}\,d\chi$
19 | $\int_{0}^{1}\int_{0}^{\sqrt{1-\psi^{2}}}1\,d\chi\,d\psi=\int_{0}^{\pi/2}\int_{0}^{1}\rho\,d\rho\,d\theta$ | $\int_{0}^{1}\int_{1}^{\sqrt{1+\psi^{2}}}1\,d\chi\,d\psi=\int_{1}^{\pi/4}\int_{0}^{1}\rho\,d\rho\,d\theta$ | $\int_{0}^{1}\int_{0}^{\sqrt{1-\psi^{3}}}3\,d\chi\,d\psi=\int_{0}^{\pi/3}\int_{0}^{1}\rho\,d\rho\,d\theta$
20 | $e^{(\alpha\chi+\beta\chi+\chi)}$ | $e^{(e^{\chi}+e^{\psi}+\omega)}$ | $e^{(\chi^{\chi}+e^{e}+\chi)}$
21 | $\sum_{\iota=1}^{\nu}\alpha_{\iota}=1$ | $\sum_{\kappa=1}^{\nu}\alpha_{\kappa}=10$ | $\sum_{\iota=1}^{\kappa}\alpha_{\iota}=0$
22 | $\lim_{\chi\to\infty}\int_{0}^{\chi}e^{-\psi^{2}}\,d\psi=\frac{\sqrt{\pi}}{2}$ | $\lim_{\chi\to\infty}\int_{0}^{\chi}e^{-\alpha^{3}}\,d\alpha=\frac{\sqrt{\pi}}{3}$ | $\lim_{\alpha\to\infty}\int_{0}^{\alpha}e^{-\chi^{2}}\,d\chi=\frac{\sqrt{\pi}}{4}$
23 | $\cosh^{2}\chi-\sinh^{2}\chi=1$ | $\cosh^{2}\chi+\sinh^{2}\chi\ne 1$ | $\cosh^{3}\alpha-\sinh^{3}\alpha=2$
24 | $\delta(\chi,\psi)=\sqrt{(\chi_{1}-\psi_{1})^{2}+(\chi_{2}-\psi_{2})^{2}}$ | $\chi(\alpha,\beta)=\sqrt{(\alpha_{1}-\beta_{1})^{2}+(\alpha_{2}-\beta_{2})^{2}}$ | $\alpha(\chi,\psi)=\sqrt{(\chi_{1}+\psi_{1})^{2}-(\chi_{2}+\psi_{2})^{2}}$
25 | $\forall\chi\in\mathrm{X}:\exists\psi\in\Psi:\chi=\psi$ | $\exists\chi\in\mathrm{X}:\forall\psi\in\Psi:\chi=\psi$ | $\forall\alpha\in\mathrm{A}:\exists\beta\in\mathrm{B}:\alpha\ne\beta$

References

  1. Fellbaum, K.; Kouroupetroglou, G. Principles of electronic speech processing with applications for people with disabilities. Technol. Disab. 2008, 20, 55–85. [Google Scholar] [CrossRef]
  2. Lorch, R.F.; Lemarié, J. Improving Communication of Visual Signals by Text-to-Speech Software. In Universal Access in Human-Computer Interaction. Applications and Services for Quality of Life. UAHCI 2013. Lecture Notes in Computer Science; Stephanidis, C., Antona, M., Eds.; Springer: Berlin/Heidelberg, Germany, 2013; Volume 8011, pp. 364–371. [Google Scholar]
  3. Freitas, D.; Kouroupetroglou, G. Speech technologies for blind and low vision persons. Technol. Disab. 2008, 20, 135–156. [Google Scholar] [CrossRef]
  4. Khan, A.; Khusro, S. An insight into smartphone-based assistive solutions for visually impaired and blind people: Issues, challenges and opportunities. Univers. Access Inf. Soc. 2021, 20, 265–298. [Google Scholar] [CrossRef]
  5. Fernández-Batanero, J.M.; Montenegro-Rueda, M.; Fernández-Cerero, J.; García-Martínez, I. Assistive technology for the inclusion of students with disabilities: A systematic review. Educ. Technol. Res. Dev. 2022, 70, 1911–1930. [Google Scholar] [CrossRef]
  6. Remache-Vinueza, B.; Trujillo-León, A.; Zapata, M.; Sarmiento-Ortiz, F.; Vidal-Verdú, F. Audio-tactile rendering: A review on technology and methods to convey musical information through the sense of touch. Sensors 2021, 21, 6575. [Google Scholar] [CrossRef] [PubMed]
  7. Wang, Z.; Li, B.; Hedgpeth, T.; Haven, T. Instant tactile-audio map: Enabling access to digital maps for people with visual impairment. In Proceedings of the 11th International ACM SIGACCESS Conference on Computers and Accessibility, Pittsburgh, PA, USA, 25–28 October 2009. [Google Scholar]
  8. Raman, T.V. AsTeR: Audio system for technical readings. Inf. Technol. Disab. 1994, 1. [Google Scholar]
  9. LaTeX—A Document Preparation System. Available online: https://www.latex-project.org/ (accessed on 27 April 2024).
  10. Stevens, R.; Edwards, A.; Harling, P. Access to mathematics for visually disabled students through multimodal interaction. Hum. Comput. Interact. 1997, 12, 47–92. [Google Scholar] [PubMed]
  11. Ferreira, H.; Freitas, D. Audio Rendering of Mathematical Formulae Using MathML and AudioMath. In User-Centered Interaction Paradigms for Universal Access in the Information Society; Stary, C., Stephanidis, C., Eds.; Lecture Notes in Computer Science 2004; Springer: Berlin/Heidelberg, Germany, 2004; Volume 3196, pp. 391–399. [Google Scholar]
  12. Mathematical Markup Language (MathML). Available online: https://www.w3.org/Math/whatIsMathML.html (accessed on 27 April 2024).
  13. Isaacson, M.; Srinivasan, S.; Lloyd, L. Development of an algorithm for improving quality and information processing capacity of MathSpeak synthetic speech renderings. Disab. Rehabil. Assist. Technol. 2010, 5, 83–93. [Google Scholar] [CrossRef] [PubMed]
  14. Sheikh, W.; Schleppenbach, D.; Leas, D. MathSpeak: A non-ambiguous language for audio rendering of MathML. Int. J. Learn. Technol. 2018, 13, 3–25. [Google Scholar] [CrossRef]
  15. Soiffer, N. MathPlayer v2.1: Web-based math accessibility. In Proceedings of the 9th International ACM SIGACCESS Conference on Computers and Accessibility, Tempe, AZ, USA, 15–17 October 2007. [Google Scholar]
  16. Yamaguchi, K.; Masakazu, S. On necessity of a new method to read out math contents properly in DAISY. In Computers Helping People with Special Needs; Miesenberger, K., Klaus, J., Zagler, W., Karshmer, A., Eds.; Lecture Notes in Computer Science 2010; Springer: Berlin/Heidelberg, Germany, 2010; Volume 6180, pp. 415–422. [Google Scholar]
  17. Wongkia, W.; Naruedomkul, K.; Cercone, N. i-Math: Automatic math reader for Thai blind and visually impaired students. Comput. Math. Appl. 2012, 64, 2128–2140. [Google Scholar] [CrossRef]
  18. Salamonczyk, A.; Brzostek-Pawlowska, J. Translation of MathML formulas to Polish text, example applications in teaching the blind. In Proceedings of the 2nd International Conference on Cybernetics, Gdynia, Poland, 24–26 June 2015. [Google Scholar]
  19. Park, J.H.; Lee, J.W.; Um, J.W.; Yook, J. Korean language math-to-speech rules for digital books for people with reading disabilities and their usability evaluation. J. Supercomput. 2021, 77, 6381–6407. [Google Scholar] [CrossRef]
  20. Soiffer, N. Browser-independent accessible math. In Proceedings of the 12th International Web for All Conference, New York, NY, USA, 18–20 May 2015. [Google Scholar]
  21. Cervone, D. MathJax: A platform for mathematics on the Web. Not. AMS 2012, 59, 312–316. [Google Scholar] [CrossRef]
  22. The Nemeth Braille Code for Mathematics and Science Notation; American Printing House for the Blind: Louisville, KY, USA, 1972.
  23. Bansal, A.; Sorge, V.; Balakrishnan, M.; Aggarwal, A. Towards Semantically Enhanced Audio Rendering of Equations. In Computers Helping People with Special Needs; Miesenberger, K., Kouroupetroglou, G., Mavrou, K., Manduchi, R., Covarrubias Rodriguez, M., Penáz, P., Eds.; Lecture Notes in Computer Science 2022; Springer: Berlin/Heidelberg, Germany, 2022; Volume 13341, pp. 30–37. [Google Scholar]
  24. Riga, V.; Antonakopoulou, T.; Kouvaras, D.; Lentas, S.; Kouroupetroglou, G. The BrailleMathCodes Repository. In Proceedings of the 4th International Workshop on Digitization and E-Inclusion in Mathematics and Science, Tokyo, Japan, 18–19 February 2021. [Google Scholar]
  25. Stöger, B.; Miesenberger, K. Accessing and dealing with mathematics as a blind individual: State of the art and challenges. In Proceedings of the International Conference Enabling Access for Persons with Visual Impairment, Athens, Greece, 12–14 February 2015. [Google Scholar]
  26. MathType. Available online: https://en.wikipedia.org/wiki/MathType (accessed on 27 April 2024).
  27. Duxbury DBT: Braille Translation Software. Available online: https://www.duxburysystems.com/ (accessed on 27 April 2024).
  28. Tiger Software Suite 8 (TSS). Available online: https://viewplus.com/product/tiger-software-suite8/ (accessed on 27 April 2024).
  29. Kanahori, T.; Suzuki, M. Scientific PDF document reader with simple interface for visually impaired people. In Computers Helping People with Special Needs; Miesenberger, K., Klaus, J., Zagler, W., Karshmer, A., Eds.; Lecture Notes in Computer Science 2006; Springer: Berlin/Heidelberg, Germany, 2006; Volume 4061, pp. 48–52. [Google Scholar]
  30. Kortemeyer, G. Using artificial-intelligence tools to make LaTeX content accessible to blind readers. arXiv 2023, arXiv:2306.02480. [Google Scholar] [CrossRef]
  31. Arooj, S.; Zulfiqar, S.; Qasim Hunain, M.; Shahid, S.; Karim, A. Web-ALAP: A web-based LaTeX editor for blind individuals. In Proceedings of the 22nd International ACM SIGACCESS Conference on Computers and Accessibility, New York, NY, USA, 26–28 October 2020. [Google Scholar]
  32. Hara, S.; Ohtake, N.; Higuchi, M.; Miyazaki, N.; Watanabe, A.; Kusunoki, K.; Sato, H. MathBraille: A system to transform LATEX documents into Braille. ACM SIGCAPH Comput. Phys. Handicap. 2000, 66, 17–20. [Google Scholar] [CrossRef]
  33. Papasalouros, A.; Tsolomitis, A. A direct TeX-to-Braille transcribing method. In Proceedings of the 17th International ACM SIGACCESS Conference on Computers and Accessibility, Lisbon, Portugal, 26–28 October 2015. [Google Scholar]
  34. Gillan, D.J.; Barraza, P.; Karshmer, A.I.; Pazuchanics, S. Cognitive Analysis of Equation Reading: Application to the Development of the Math Genie. In Computers Helping People with Special Needs; Miesenberger, K., Klaus, J., Zagler, W.L., Burger, D., Eds.; Lecture Notes in Computer Science 2004; Springer: Berlin/Heidelberg, Germany, 2004; Volume 3118, pp. 630–637. [Google Scholar]
  35. Archambault, D.; Stöger, B.; Fitzpatrick, D.; Miesenberger, K. Access to Scientific Content by Visually Impaired People. Upgrade 2007, 8, 14. [Google Scholar]
  36. Simmering, V.R.; Wood, C.M. The development of real-time stability supports visual working memory performance: Young children’s feature binding can be improved through perceptual structure. Dev. Psychol. 2017, 53, 1474–1493. [Google Scholar] [CrossRef]
  37. Barrouillet, P.; Gavens, N.; Vergauwe, E.; Gaillard, V.; Camos, V. Working memory span development: A time-based resource-sharing model account. Dev. Psychol. 2009, 45, 477–490. [Google Scholar] [CrossRef] [PubMed]
  38. McGaugh, J.L. Memory—A century of consolidation. Science 2000, 287, 248–251. [Google Scholar] [CrossRef] [PubMed]
  39. Zimmermann, J.F.; Moscovitch, M.; Alain, C. Attending to auditory memory. Brain Res. 2016, 1640, 208–221. [Google Scholar] [CrossRef]
  40. Withagen, A.; Kappers, A.M.; Vervloed, M.P.; Knoors, H.; Verhoeven, L. Short term memory and working memory in blind versus sighted children. Res. Dev. Disab. 2013, 34, 2161–2172. [Google Scholar] [CrossRef]
  41. Argyropoulos, V.; Masoura, E.; Tsiakali, T.K.; Nikolaraizi, M.; Lappa, C. Verbal working memory and reading abilities among students with visual impairment. Res. Dev. Disab. 2017, 64, 87–95. [Google Scholar] [CrossRef] [PubMed]
  42. Pring, L. The ‘reverse-generation’ effect: A comparison of memory performance between blind and sighted children. Br. J. Psychol. 1988, 79, 387–400. [Google Scholar] [CrossRef] [PubMed]
  43. Kacorri, H.; Riga, P.; Kouroupetroglou, G. EAR-Math: Evaluation of Audio Rendered Mathematics. In Universal Access in Human-Computer Interaction; Stephanidis, C., Antona, M., Eds.; Lecture Notes in Computer Science 2014; Springer: Berlin/Heidelberg, Germany, 2014; Volume 8514, pp. 111–120. [Google Scholar]
  44. W3C Web Accessibility Initiative (WAI). MathML Fundamentals. Available online: https://www.w3.org/TR/MathML2/chapter2.html (accessed on 21 June 2024).
  45. Geurten, M.; Catale, C.; Meulemans, T. Involvement of executive functions in children’s metamemory. Appl. Cognit. Psychol. 2016, 1, 70–80. [Google Scholar] [CrossRef]
  46. Baddeley, A.; Hitch, G. Working Memory: Past, present….and future. In The Cognitive Neuroscience of Working Memory; Osaka, N., Logie, R.H., D’Esposito, M., Eds.; Oxford University Press: New York, NY, USA, 2007; pp. 1–20. [Google Scholar]
  47. Baddeley, A.D.; Logie, R.H. Working memory: The multiple-component model. In Models of Working Memory: Mechanisms of Active Maintenance and Executive Control; Miyake, A., Shah, P., Eds.; Cambridge University Press: Cambridge, UK, 1999; pp. 28–61. [Google Scholar]
  48. Noel, M.P.; Désert, M.; Aubrun, A.; Seron, X. Involvement of short-term memory in complex mental calculation. Mem. Cognit. 2001, 29, 34–42. [Google Scholar] [CrossRef] [PubMed]
  49. Lehmann, M. Rehearsal development as development of iterative recall processes. Front. Psychol. 2015, 6, 308. [Google Scholar] [CrossRef]
  50. Kacorri, H.; Riga, P.; Kouroupetroglou, G. Performance Metrics and Their Extraction Methods for Audio Rendered Mathematics. In Computers Helping People with Special Needs; Miesenberger, K., Fels, D., Archambault, D., Peňáz, P., Zagler, W., Eds.; Lecture Notes in Computer Science 2014; Springer: Berlin/Heidelberg, Germany, 2014; Volume 8547, pp. 30–37. [Google Scholar]
  51. Riga, P.; Kouroupetroglou, G.; Ioannidou, P. An Evaluation Methodology of Math-to-Speech in Non-English DAISY Digital Talking Books. In Computers Helping People with Special Needs; Miesenberger, K., Bühler, C., Penaz, P., Eds.; Lecture Notes in Computer Science 2016; Springer: Berlin/Heidelberg, Germany, 2016; Volume 9758, pp. 27–34. [Google Scholar]
  52. Bansal, A.; Balakrishnan, M.; Sorge, V. Evaluating cognitive complexity of algebraic equations. J. Technol. Pers. Disab. 2021, 9, 170–200. [Google Scholar]
  53. W3C Web Accessibility Initiative (WAI). Cognitive Accessibility at W3C. Available online: https://www.w3.org/WAI/cognitive/ (accessed on 21 June 2024).
  54. National Federation of the Blind. Statistical Facts about Blindness in the United States. 2017. Available online: https://nfb.org/blindness-statistics (accessed on 21 June 2024).
  55. Souza, A.; Freitas, D. Towards the Improvement of the Cognitive Process of the Synthesized Speech of Mathematical Expression in MathML: An Eye-Tracking. In Proceedings of the International Conference on Interactive Media, Smart Systems and Emerging Technologies, Limassol, Cyprus, 4–7 October 2022. [Google Scholar]
Figure 1. Example MathML tree of a mathematical expression. Structural elements are presented in rectangular form, operators are given in diamonds, and numericals/identifiers are circled.
Figure 2. Example of a mathematical expression in Tactile form.
Figure 3. The setting of the tactile experimental unit.
Figure 4. The absolute, mean, and relative number of recall errors across all participants by modality, type of error, and element. (A) The absolute and relative number of recall errors committed by all participants in each of the three modalities: Auditory (A), Tactile (T), and Auditory–Tactile (A–T). (B) The absolute and relative number of recall errors across all participants, math expressions, and modalities by type of error (I = Insertions, D = Deletions, U = Substitutions). (C) The absolute and relative number of recall errors across all participants, math expressions, and modalities in Structural elements (S), Operators (O), and Identifiers (N). (D) The mean and relative number of recall errors per item across all participants, math expressions, and modalities in Structural elements (S), Operators (O), and Identifiers (N).
Figure 5. Mean number and 95% confidence intervals (CI) of the recall errors per gender, modality, error type, and element.
Figure 6. Parameters of the linear regression equations RE = a + bC of the dependency of the number of errors RE on the complexity C of the expression for the three modalities, along with the 95% confidence intervals (CI) for coefficient (b). (A). Scatterplot of the number of recall errors depending on the complexity of the expression. Results of the linear regression analysis. (B). Dependence of recall errors on the complexity of the expression for each modality.
Table 1. Time spent per math expression.

Modality | Mean | Min | Max | Std
Auditory | 7.88 s | 2 s | 20 s | 5.27 s
Tactile | 46.18 s | 2 s | 370 s | 52.16 s
Auditory–Tactile | 54.06 s | 4 s | 383 s | 54.41 s
  Auditory part | 7.88 s | 2 s | 20 s | 5.27 s
  Tactile part | 37.00 s | 3 s | 299 s | 38.00 s
Table 2. Parameters of the linear regression equations RE = a + bC of the dependency of the number of errors RE on the complexity C of the expression for the three modalities, along with the 95% confidence intervals (CI) for the coefficient b.

Modality | Constant a | Coefficient b | Lower 95% CI | Upper 95% CI
Auditory | −4.4 | 0.570 | 0.512 | 0.628
Tactile | −4.2 | 0.449 | 0.395 | 0.503
Auditory–Tactile | −3.6 | 0.389 | 0.334 | 0.444
Table 3. Pairwise comparisons of the mean numbers of recall errors between the three modalities, separately for each expression complexity (p-values; values below 0.05 indicate a significant difference).

Complexity | Auditory vs. Tactile | Auditory vs. Auditory–Tactile | Tactile vs. Auditory–Tactile
7 | 0.333 | 0.333 | 0.333
8 | 0.669 | 0.333 | 0.333
9 | 0.164 | 0.333 | 0.216
10 | 0.345 | 0.277 | 0.839
11 | 0.048 | 0.007 | 0.354
12 | 0.055 | 0.088 | 0.682
14 | 0.000 | 0.000 | 0.439
15 | 0.010 | 0.006 | 0.679
19 | 0.109 | 0.321 | 0.589
25 | 0.066 | 0.007 | 0.044
26 | 0.021 | 0.001 | 0.095
29 | 0.030 | 0.029 | 0.759
32 | 0.099 | 0.008 | 0.260
35 | 0.012 | 0.007 | 0.871
46 | 0.926 | 0.499 | 0.201