Article

MultiScaleAnalyzer for Spatiotemporal Learning Data Analysis: A Case Study of Eye-Tracking and Mouse Movement

1 Department of Visual Communication Design, Jiangnan University, Wuxi 214122, China
2 School of Media Arts & Design, James Madison University, Harrisonburg, VA 22807, USA
3 Department of Special Education, University of Illinois Chicago, Chicago, IL 60607, USA
4 Department of Computer Graphics Technology, Purdue University, West Lafayette, IN 47907, USA
5 Department of Educational Studies, Purdue University, West Lafayette, IN 47907, USA
* Author to whom correspondence should be addressed.
Appl. Sci. 2025, 15(8), 4237; https://doi.org/10.3390/app15084237
Submission received: 5 March 2025 / Revised: 9 April 2025 / Accepted: 9 April 2025 / Published: 11 April 2025

Abstract: With the development of high-performance computers, cloud storage, and advanced sensors, people’s ability to gather complex learning data has greatly improved. However, analyzing these data remains a significant challenge. Especially for spatiotemporal learning data such as eye-tracking and mouse movement data, understanding and analyzing the data to identify the learning insights behind them is a difficult task. We propose a visualization platform called “MultiScaleAnalyzer”, which employs a hierarchical structure to illustrate spatiotemporal learning data in multiple views. From high-level overviews to detailed analyses, “MultiScaleAnalyzer” provides varying resolutions of data tailored to educators’ needs. To demonstrate the platform’s effectiveness, we applied “MultiScaleAnalyzer” to a mathematical word problem-solving dataset, showcasing how the visualization platform facilitates the exploration of student problem-solving patterns and strategies.

1. Introduction

The rapid development of the Internet and digital technologies has significantly transformed the landscape of education, making e-learning platforms widely accessible. This accessibility has enabled the collection of vast amounts of student learning data. These data have gone beyond simple statistics such as login frequency, exam accuracy, and course completion rates; they now extend to complex spatiotemporal data like mouse movement and eye-tracking data, which contain rich details about students’ learning processes. These types of data can reflect students’ attention distribution and reveal how students perceive, process, and apply information, aiding educators in understanding students’ cognitive processes and problem-solving difficulties [1].
However, despite the potential of spatiotemporal learning data, much of the existing research focuses on theoretical analyses or experimental studies. These efforts often result in static conclusions that remain confined to academic publications and fail to transition into practical tools or platforms accessible to educators or decision-makers. While there is growing interest in leveraging visualization techniques to make sense of learning data, most of these techniques emphasize the presentation of simple data patterns or the direct output of analysis results, rather than supporting the dynamic, multidimensional exploration of complex spatiotemporal learning datasets.
Therefore, we propose “MultiScaleAnalyzer”, a novel visualization platform designed to bridge the gap between complex spatiotemporal datasets and actionable educational insights. This paper first details the design and development of MultiScaleAnalyzer, with a focus on its framework and functionalities. It then evaluates the platform’s potential and practical applications through a qualitative analysis involving five educators, whose findings are synthesized into illustrative use cases. Finally, the discussion outlines the platform’s strengths and limitations and proposes potential directions for future research and improvements. By systematically addressing these aspects, we aim to answer the following research questions:
  • How can the visualization framework of MultiScaleAnalyzer support educators in presenting and analyzing complex spatiotemporal learning data?
  • What learning behaviors or patterns can MultiScaleAnalyzer help educators to identify?
  • What is the role of different types of spatiotemporal learning data (i.e., mouse movements and eye-tracking data) in understanding students’ cognition and behavioral processes in educational contexts?

2. Literature Review

2.1. Spatiotemporal Learning Data Analysis

Spatiotemporal learning data capture both the ‘when’ (temporal) and ‘where’ (spatial or interactional context) information of students’ attention and interactions. When analyzing spatiotemporal learning data, it is important to extract and identify key features, as these features play a critical role in uncovering patterns of student learning behavior. Eye-tracking and mouse movement data are typical spatiotemporal learning datasets. Thus, we use these two types of data to demonstrate how spatiotemporal learning data have been processed and examined in previous research.

2.1.1. Eye-Tracking Data Analysis

Eye-tracking data have been used to explore various aspects of learning, such as problem-solving [2,3], information processing [4], learning strategies [5], decision-making [6], and individual differences [7]. Key features extracted from eye-tracking data typically include gaze position, fixation duration and count, pupil size, and regressions [8,9]. For instance, Susac et al. [10] employed eye-tracking technology to examine students’ approaches to solving simple equations by evaluating metrics such as gaze duration, fixation duration, and fixation count within the equation and answer areas. Their findings indicated a relationship between the number of fixations in the answer region and the efficiency of students’ problem-solving. Wu et al. [11] explored students’ pupil sizes and fixation durations in areas with or without stimuli to investigate students’ cognitive processes involved in mathematical word problems. They found a positive correlation between these features and students’ performance. Wei et al. [12] investigated students’ fixation durations and saccade numbers in areas with or without visual cueing to examine the cognitive processes involved in mathematical word problem-solving. By analyzing these eye-tracking features, researchers can gain a deeper understanding of students’ learning and cognitive processes.

2.1.2. Mouse Movement Data Analysis

Mouse movement data mainly include mouse clicks and mouse trajectories. Mouse trajectory tracking is a behavioral methodology used to explore learners’ emotions, cognition, and psychological states [13]. For instance, Azcarraga and Suarez [14] analyzed students’ mouse behavior including the number of clicks, duration of each click, and distance traveled by the mouse to predict students’ emotions during learning. Takashi [15] extracted 134 mouse trajectory variables and discovered that the distribution of spatial and temporal features, such as mouse direction change and mouse velocity, could indicate students’ states of anxiety. Pimenta et al. [16] found a decrease in cognitive performance associated with a reduction in mouse acceleration and velocity. Zushi, Miyazaki, and Norizuki [17] examined mouse movement features, including movement distance, average velocity, suspension duration, drag-and-drops, and U-turns. They found a negative correlation between the number of correct answers and the number of U-turns and drag-and-drops. They suggested that unstable mouse movements, such as excessive U-turns and drag-and-drops, often reflect a lack of confidence or understanding. These studies highlight the potential of using mouse movement data, particularly trajectory data, to assess learners’ comprehension and identify areas where additional support may be needed.

2.2. Spatiotemporal Learning Data Visualization

Visualization uses graphics to present data in a way that enhances understanding. It leverages our perceptual abilities and professional expertise to interpret information and analyze behaviors effectively. As noted by Koedinger et al. [18], perception enables the efficient transfer of large amounts of information to our mind, facilitating the recognition of key features and the making of meaningful inferences. Analyzing spatiotemporal data is challenging due to their multiple dimensions, which complicate mental visualization and understanding. By using graphics such as maps to directly display spatial information and overlaying temporal data, visualization techniques effectively illustrate how variables change over time and space. The following section reviews visualization methods used to analyze eye-tracking and mouse movement data.

2.2.1. Eye-Tracking Data Visualization

Eye-tracking data visualization techniques can be categorized into two main types: AOI (area of interest)-based visualization methods and point-based visualization methods [19]. AOI-based visualization methods divide the space into defined areas and aggregate data, such as the number of fixations and fixation duration within each area. Then, these aggregated data are linked to the AOIs for display. Notable methods in this category include AOI River [20] and Timeline Visualization [21]. Although this approach effectively illustrates how eye-tracking data change over time, it simplifies spatial information and limits the representation of detailed spatial features. On the other hand, point-based visualization methods directly display data points within a graphical interface without aggregation. This category includes methods like attention maps [22], scan path visualizations [23], and space-time cubes [24]. Attention maps and scan path visualizations present detailed data, allowing for a complete understanding of the information. However, displaying excessive details directly on a two-dimensional interface without any aggregation can make it challenging to present the data clearly and may lead to visual clutter. Space-time cubes employ a three-dimensional space to represent spatial information on the Z-axis, thereby extending the two-dimensional interface.

2.2.2. Mouse Movement Data Visualization

Mouse movement data visualization often relies on line-based representations to depict various mouse features. For instance, Freeman and Ambady [25,26] used lines to illustrate average mouse trajectories and employed curve plots to show mouse movement velocity and acceleration. They analyzed participants’ psychological characteristics by comparing the deviation of actual mouse movement trajectories with idealized straight-line paths. Another approach, called the trajectory heat map [27], uses the color and intensity of the trajectory lines to indicate mouse dwell time and passage frequency. The MatMouse visualization toolbox developed by Krassanakis and Kesidis [28] places dots on trajectory lines to represent mouse clicks, with the size of the dots indicating mouse dwell time. Furthermore, Zgonnikov et al. [29] employed three-dimensional visualization methods, which, based on two-dimensional trajectories, add a z-axis to illustrate “potential energy” derived mathematically from mouse movement velocity. It reflects the “energy” state during decision-making: higher values may indicate greater decision uncertainty or competition. By visually analyzing mouse trajectories, educators can better understand how students behave and make decisions, helping to identify learning challenges.
While current visualization methods for eye-tracking and mouse movement data have proven valuable for analyzing spatiotemporal learning behaviors, they also present several limitations. Eye-tracking visualizations often involve a trade-off between clarity and detail: AOI-based methods simplify spatial complexity but lose finer details, whereas point-based techniques provide richer visualizations that may lead to clutter or misinterpretation, especially on two-dimensional displays. Mouse movement visualizations, such as trajectory lines or heat maps, are effective at illustrating patterns like pathways, dwell time, or decision-making “energy”; however, they are often designed to analyze single data types and rarely integrate other forms of complementary data, such as eye-tracking data. This lack of integration limits the ability to support more comprehensive and flexible analyses across multiple modalities. These limitations highlight the need for a more integrated and scalable solution. To address these challenges, we propose MultiScaleAnalyzer, a hierarchical framework that enhances the exploration of spatiotemporal learning data by balancing clarity and detail, while enabling integration of multiple data types, as described in Section 3.

3. Multiscale Visualization Design and Development

3.1. Visualization Framework

The idea of using data abstraction to simplify visuals and handle larger datasets is widely accepted. Shneiderman’s mantra “Overview first, zoom and filter, then details-on-demand” provides a basic guide for information visualization [30]. Building on this, researchers have proposed many visualization methods designed for different data types and analysis needs. For instance, Woodring and Shen [31] utilized clustering to abstract time-varying data and applied wavelet transform views to show hierarchical resolutions, highlighting different temporal trends. Similarly, Viola and Isenberg [32] employed visual abstraction to illustrate biological processes at multiple scales, ranging from tissue-level organization to molecular interactions. Inspired by these approaches, we propose a top–down multiscale visualization framework for spatiotemporal learning data analysis. Our approach starts with an overview to provide a broad summary, progresses to aggregated views for highlighting patterns and clusters, and finally drills down into a scenario view for detailed exploration of specific interactions. This methodology uses both data and visual abstraction techniques to handle the complexity and multidimensionality of spatiotemporal datasets.
  • Overview: This level provides a comprehensive overview of students’ learning progress and performance by presenting key features. It enables educators to grasp overall patterns and trends easily.
  • Aggregated data views: This level displays clustered or summarized data by applying selection and filtering techniques to reveal intermediate patterns. Data are aggregated based on specific criteria, such as AOI or segments. The process of aggregation simplifies the management of large datasets by organizing data points into meaningful categories or summaries. It facilitates the efficient analysis of students’ learning patterns, strategies, and outliers.
  • Raw data view: This level presents unprocessed or minimally processed data in their original format, offering a transparent and direct representation. It enables educators to conduct an in-depth examination of specific data points or interaction trajectories within the learning context.
To develop MultiScaleAnalyzer, we used a MySQL 5.7.18 database to store datasets, with PHP 7.1.1 managing backend data operations and transfer. On the frontend, JavaScript served as the primary programming language, supplemented by the D3.js v5.16.0 and jQuery 3.3.1 libraries to enable rich, interactive data visualizations. CSS and HTML5 were used to define and style the platform layout.

3.2. Overview

The overview of MultiScaleAnalyzer is designed to provide educators with a comprehensive snapshot of student performance (Figure 1a). All students and tasks are listed in a menu, allowing users to select multiple students and tasks of interest. The grid visualization in the overview dynamically updates based on the selections, providing a focused and customizable view of the data. Students in the grid view can be sorted either by default or by performance, which supports an intuitive understanding of their progress and achievements. The grid visualization presents key data, including the number of questions attempted (represented by squares), correctness (red for incorrect, green for correct), and completion time (indicated by color intensity, with light representing shorter durations and dark representing longer durations). Furthermore, the elements within the grid, including squares, student names, and task names, are interactive and clickable. Users can explore detailed information in other views by selecting specific components of the grid matrix. For instance, clicking on a square displays the corresponding task information for the associated student in both the aggregated data view and the raw data view. Similarly, clicking on a student’s name presents all tasks completed by that student across these views, and clicking on a task name provides information regarding all students who have attempted that specific task.
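The grid’s double encoding described above (hue for correctness, color intensity for completion time) can be sketched as a small color-mapping function. This is a minimal illustration with hypothetical field names (`isCorrect`, `timeSpent`), not the platform’s actual code:

```javascript
// Sketch of the overview grid's color encoding. Correctness picks the
// hue (red = incorrect, green = correct) and completion time picks the
// intensity: light for short durations, dark for long durations.
function cellColor(attempt, maxTimeSpent) {
  // Normalize completion time to [0, 1] relative to the slowest attempt.
  const t = Math.min(attempt.timeSpent / maxTimeSpent, 1);
  // Map normalized time to lightness: 85% (light) down to 35% (dark).
  const lightness = 85 - 50 * t;
  const hue = attempt.isCorrect ? 120 : 0; // green vs. red
  return `hsl(${hue}, 70%, ${lightness}%)`;
}

// Example: a correct answer completed quickly renders as a light green.
const quickCorrect = cellColor({ isCorrect: true, timeSpent: 12 }, 120);
```

In a D3.js frontend such a function would be bound to each square’s `fill` attribute; the exact scale endpoints here are illustrative.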
This overview enables the rapid identification of performance trends among students, allowing educators to recognize both high-achieving students and those who may require additional support. Furthermore, it provides insights into the class’s overall understanding of specific topics or concepts.

3.3. Aggregated Data View

Aggregating complex data according to specific criteria can effectively simplify datasets. This approach not only reduces data complexity but also helps in identifying and highlighting important patterns within data [35]. Choosing appropriate clustering criteria is crucial for uncovering patterns, and these criteria can be based on expert domain knowledge or statistical features identified during data analysis [36,37]. For spatiotemporal data, segmenting time and dividing space into meaningful units aids in better understanding and analysis. Commonly, time is segmented into temporal cycles like hours, days, or seasons, while space is divided into areas such as cities, regions, or countries [38].
In the context of spatiotemporal learning data analysis, common spatial criteria include interaction pages or AOIs, while temporal criteria might involve learning sessions or problem segments. Thus, MultiScaleAnalyzer adopts AOIs and problem segments as aggregating criteria. This choice allows for a structured exploration of learning data, enabling users to focus on specific areas and time periods that are most relevant to learning analysis.
MultiScaleAnalyzer offers two aggregated data views. By default, the system divides the problem interface into predefined AOIs, such as the question area and equation area. Meanwhile, the AOI setting page in the first view allows educators to customize AOIs by drawing rectangular regions, assigning names, and selecting colors. The system calculates fixations and mouse clicks within these AOIs and displays the data on a timeline, as shown in Figure 1b. In this view, the x-axis represents the time, while the y-axis shows the vertical coordinates of the fixations and clicks. Each dot represents a fixation within an AOI, and each mouse icon represents a click, with their positions on the timeline indicating the timing of these events. The size of the dots denotes the duration of each fixation. This view enables educators to analyze students’ attention distribution and click activities across different AOIs, offering valuable insights into their engagement patterns and interaction behaviors.
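The AOI aggregation behind this timeline can be sketched as a point-in-rectangle tally. The data shapes and helper names below are our assumptions; the paper does not show the platform’s implementation:

```javascript
// Assign each fixation or click to the first AOI whose rectangle
// contains it, then collect them per AOI for the timeline display.
function pointInAOI(p, aoi) {
  return p.x >= aoi.x && p.x <= aoi.x + aoi.width &&
         p.y >= aoi.y && p.y <= aoi.y + aoi.height;
}

function aggregateByAOI(fixations, clicks, aois) {
  const summary = aois.map(aoi => ({ name: aoi.name, fixations: [], clicks: [] }));
  for (const f of fixations) {
    const i = aois.findIndex(a => pointInAOI(f, a));
    // Dot size on the timeline is driven by fixation duration.
    if (i >= 0) summary[i].fixations.push({ t: f.timestamp, duration: f.duration });
  }
  for (const c of clicks) {
    const i = aois.findIndex(a => pointInAOI(c, a));
    if (i >= 0) summary[i].clicks.push({ t: c.timestamp });
  }
  return summary;
}
```

Fixations and clicks that fall outside every AOI are simply dropped here; a production system might instead route them to an “other” category for the inattentiveness analyses described in Section 4.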
The second aggregated view (Figure 1c) offers higher data resolution by reconstructing the students’ interactive interface, providing a comprehensive presentation of the learning content. In the view, every piece of question text and equation box on the interface is automatically set as an AOI by the system. Eye movement fixations are shown on horizontal lines below each AOI, arranged vertically in the order of occurrence. The vertical axis represents the number of fixations, and the length of these lines indicates the duration of each fixation. This helps educators to easily recognize the key elements that the student has focused on, along with the duration and sequence of fixations.
The color of lines denotes the correctness of the student’s final performance (red for incorrect, green for correct). Curved lines in the visualization represent mouse movement trajectories. The starting and ending points of these curves are located below the elements where the mouse movement begins and ends, corresponding to specific AOIs. The curved lines are generated using a cubic Bezier curve formula [39]:
B(t) = (1 − t)³P₀ + 3(1 − t)²tP₁ + 3(1 − t)t²P₂ + t³P₃, t ∈ [0, 1]
where P₀ represents the starting point, P₃ is the ending point, and P₁ and P₂ are control points. The positions of P₁ and P₂ are adjusted based on the inverse average velocity of the mouse movement to simulate the degree of curvature. Slower movements result in greater deviations from the straight-line path between P₀ and P₃, while faster movements produce smoother and straighter paths. By analyzing the curvature and the number of curve lines, educators can infer the trajectory of the mouse, the number of drag-and-drop actions, and the speed of these movements, which may indicate the student’s level of confidence and the challenges they experience [40].
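The curve construction can be sketched as follows. The perpendicular-offset placement of the control points and the tuning constant `k` are our illustrative assumptions; the paper states only that the control points are adjusted by the inverse average velocity:

```javascript
// Evaluate the cubic Bezier curve B(t) from the formula above.
function cubicBezier(p0, p1, p2, p3, t) {
  const u = 1 - t;
  return {
    x: u * u * u * p0.x + 3 * u * u * t * p1.x + 3 * u * t * t * p2.x + t * t * t * p3.x,
    y: u * u * u * p0.y + 3 * u * u * t * p1.y + 3 * u * t * t * p2.y + t * t * t * p3.y,
  };
}

// Offset the control points perpendicular to the chord from start to
// end, scaled by inverse average velocity: slower drags bulge more.
function controlPoints(p0, p3, avgVelocity, k = 50) {
  const dx = p3.x - p0.x, dy = p3.y - p0.y;
  const len = Math.hypot(dx, dy) || 1;
  const offset = k / avgVelocity;          // slower movement -> larger offset
  const nx = -dy / len, ny = dx / len;     // unit normal to the chord
  return [
    { x: p0.x + dx / 3 + nx * offset, y: p0.y + dy / 3 + ny * offset },
    { x: p0.x + 2 * dx / 3 + nx * offset, y: p0.y + 2 * dy / 3 + ny * offset },
  ];
}
```

By construction, the curve always starts at the drag’s origin AOI and ends at its target AOI, while its bulge encodes how slowly the drag was performed.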
This design enables educators to intuitively understand the context of a student’s learning material while presenting high-resolution spatiotemporal data for deeper analysis. Students’ eye movement and interaction data are minimally aggregated to preserve detailed information. This approach aims to achieve a balance between maintaining simplicity and preserving information richness.

3.4. Raw Data View

MultiScaleAnalyzer also provides data in its raw form without any aggregation. Aggregation means reorganizing, simplifying, and compressing information; in aggregated views, subtle details might be lost, making certain anomalies or patterns difficult to interpret. The raw data view preserves all the detailed information, allowing educators to trace back and investigate when needed. Furthermore, these unaggregated data empower educators to explore the multidimensional changes in students’ learning activities, addressing or validating any questions or hypotheses that arise when using the overview or aggregated views.
The raw data view offers a video slider that replays students’ eye movements and interaction activities. Educators can select any time period on the timeline to display the interaction data of students within that period. Figure 1d shows the gaze plot and interaction activities of a student over a period of time, where each circle represents a fixation point. The numbers on the circles indicate the sequence of fixations, and lines connecting the circles depict the actual gaze trajectory. By reconstructing students’ authentic learning processes, the video slider provides educators with the most precise and accurate information to address potential questions or assessments regarding students’ interactive behaviors.
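The time-window selection and the numbered gaze plot can be sketched as two small helpers (field and function names here are hypothetical):

```javascript
// Keep only the events whose timestamps fall inside the period the
// educator selected on the video slider.
function eventsInWindow(events, startMs, endMs) {
  return events.filter(e => e.timestamp >= startMs && e.timestamp <= endMs);
}

// Number the surviving fixations in order of occurrence, matching the
// numbered circles of the gaze plot.
function numberFixations(fixations) {
  return [...fixations]
    .sort((a, b) => a.timestamp - b.timestamp)
    .map((f, i) => ({ ...f, order: i + 1 }));
}
```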

4. Use Cases

To examine the usefulness and applicability of the visualization system, we applied it to a spatiotemporal learning dataset (described in detail in Section 4.1). The variety and scale of this dataset made it challenging to analyze and interpret, a challenge our visualization tool is designed to address. We invited five educators with expertise in mathematics education and extensive experience in online teaching. All of them possessed a PhD or master’s degree in education. They were asked to use MultiScaleAnalyzer to analyze students’ problem-solving processes and identify any notable patterns or outliers. Their analysis processes were recorded, and some representative findings were organized and described as use cases in this section.

4.1. Spatiotemporal Learning Dataset

The dataset used in this study was derived from the Conceptual Model-Based Problem-Solving (COMPS) program [41]. The COMPS program developed an online mathematical problem-solving platform designed to enhance students’ mathematical understanding and problem-solving skills. The platform consists of two modules and fourteen lessons. Module A focuses on using counting strategies to solve mathematical problems, while Module B emphasizes model-based problem-solving, teaching students to abstract and structure mathematical problems into equations and solve them systematically.
A group of 16 students participated in this program, dedicating 30 min per school day to solving mathematical problems on the platform over the course of one academic semester. The computer labs used for the program were designed with consistent environmental settings, illuminated by both natural and artificial light. All students worked on Dell Precision 3520 workstations equipped with 15.6-inch displays and a resolution of 1920 × 1080 pixels (Dell Technologies, Round Rock, TX, USA).
To capture students’ eye movements, a Tobii Pro X3-120 eye tracker (Tobii AB, Stockholm, Sweden) was mounted at the bottom of each laptop screen. The Tobii Pro X3-120 eye tracker operated at a frequency of 120 Hz and featured an ultra-slim design (dimensions: 115 × 111 × 32.7 mm), which minimized interference with students’ interactions and reduced potential distractions. Students’ eye movement data, including fixation positions (‘x_position’ and ‘y_position’ in pixels), timestamps (‘timestamp’), and fixation durations (‘fixation_duration’ in milliseconds), were recorded using Tobii Pro Studio 3.4.7 and imported into a MySQL 5.7.18 database for storage, retrieval, and analysis. Additionally, students’ mouse movements and clicks were captured by the COMPS program and stored in the database with the fields ‘mouse_x’, ‘mouse_y’ (mouse coordinates in pixels), ‘click_type’ (left or right click), and ‘timestamp’. Task performance data, including students’ answers to problems (‘answer’), correctness (‘is_correct’), and time spent on each problem (‘time_spent’), were also stored in the database. Each dataset was organized in separate tables, enabling efficient storage and analysis.
Together, the spatiotemporal learning dataset provided rich information about students’ learning behaviors and cognitive engagement. For the use case analysis, the learning data were provided to the five educators. Due to time constraints, the educators were only required to analyze five specific tasks of Module B.

4.2. Attention Pattern

MultiScaleAnalyzer provided intuitive visualization methods to identify attention patterns among students. For example, from the overview in Figure 2, it is evident that S9 and S1 were low-performing students, while S7 and S15 were high-performing students. By selecting students’ names, the aggregated data view 1—referred to as “AOI Vis” (area-of-interest-based visualization)—revealed clear distinctions in attention patterns between these two groups.
The low-performing student S1 exhibited a relatively low number of fixations, which were distributed intermittently (Figure 3). A considerable portion of these fixations were directed toward non-relevant areas, reflecting a tendency toward inattentiveness and disengagement. Besides this, S1’s task completion times for certain questions were notably short, suggesting a tendency to rush through tasks without sufficient cognitive engagement.
Similarly, S9, another low-performing student, demonstrated a slightly higher number of fixations, with longer durations. However, the attention pattern remained inconsistent and fragmented (Figure 4). S9 alternated between periods of focus on the task and episodes of visual wandering, during which no fixations appeared or fixations fell on irrelevant areas. These patterns highlighted distinct challenges in maintaining sustained attention and effective task engagement among low-performing students.
In contrast, the high-performing student, S7, demonstrated sustained and stable attention throughout the task (Figure 5). Fixations were consistently focused on key informational areas, reflecting a deliberate and systematic approach to problem-solving. This attention pattern indicated an active process of reading and comprehending the problem statements, analyzing equations, and formulating solutions, thereby showcasing effective cognitive engagement and task management strategies.
MultiScaleAnalyzer revealed students’ different attention patterns in their problem-solving processes. However, it should be noted that not all high-performance students demonstrated sustained attention patterns, nor did low performance necessarily equate to the absence of positive attention patterns. Other factors, such as prior knowledge, task familiarity, and external distractions, may also influence students’ performance outcomes.

4.3. Problem-Solving Strategy

MultiScaleAnalyzer enabled the identification of different problem-solving strategies employed by students. Using the aggregated data view, two distinct approaches to solving mathematical problems were observed. The first approach was exemplified by S7, who demonstrated sustained fixations on areas with relevant information throughout the problem-solving process (Figure 5). In contrast, S16 also maintained focused attention on the problem, but individual fixation durations were significantly longer, with many exceeding one second (Figure 6). According to Wu et al., fixations lasting over 500 ms can be classified as long fixations, indicating extended periods of cognitive processing [38,40]. This highlights a significant difference between S7’s frequent but short fixations and S16’s fewer but prolonged fixations.
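The 500 ms long-fixation threshold cited above can be applied as a simple per-student profile. The helper names are ours, and the exact metrics MultiScaleAnalyzer computes may differ:

```javascript
// Classify fixations against the 500 ms long-fixation threshold and
// summarize how much of a student's attention consists of long fixations.
const LONG_FIXATION_MS = 500;

function fixationProfile(fixations) {
  const long = fixations.filter(f => f.duration > LONG_FIXATION_MS);
  return {
    count: fixations.length,                // total fixations (S7: many, short)
    longCount: long.length,                 // long fixations (S16: fewer, longer)
    longShare: fixations.length ? long.length / fixations.length : 0,
  };
}
```

A high `longShare` would be consistent with the prolonged-fixation pattern attributed to S16, while a low one matches S7’s frequent short fixations.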
Furthermore, through the aggregated data view 2—referred to as “Elements Vis” (elements’ visualization)—it was observed that S7 tended to read through the entire problem comprehensively (Figure 7a), whereas S16’s long fixations were concentrated on specific keywords like “a total of”, “gave”, “brother”, and “left” (Figure 7b). These findings align with Hegarty et al.’s [42] identification of two distinct mathematical problem-solving strategies. The first is the model-based problem-solving strategy, which involves understanding the problem and constructing a mental model. The second strategy is the keyword strategy, which focuses on identifying and leveraging key terms within the problem statement. Correspondingly, it suggests that S7 employed a model-based problem-solving strategy, while S16 adopted a keyword strategy. These insights demonstrate the utility of MultiScaleAnalyzer in uncovering diverse cognitive approaches and strategies to solve mathematical problems.

4.4. Problem Solving Difficulties

MultiScaleAnalyzer provided valuable insights into the challenges students encountered during problem-solving. For example, the overview analysis in Figure 8 reveals that the error rate for the second step of a problem-solving task (B3.5_2, B7.1_2) was significantly higher than for the first step (B3.5_1, B7.1_1). In the COMPS framework, word problems are designed to be solved through a series of structured steps. The first step involves model formulation, where students abstract textual information into mathematical equations. This process begins with identifying and integrating the appropriate name tags into the equation. For instance, in part–part–whole problems like B3.5_1, students were required to recognize which components in the problem represented the “part” and which represented the “whole”. Using name tags including “gave”, “left”, and “total”, students constructed a basic mathematical relationship “gave + left = total”, which reflected the structural relationship of “part + part = whole”. The second step involved translating the abstract model into a numerical equation by assigning corresponding values. For known quantities, students directly placed the relevant numbers into the equation, while an unknown value was replaced with the placeholder “a” to prepare for further operations. For example, in a problem like B3.5_2, the quantity associated with the name tag “gave” was unknown, “left” was 62, and “total” was 87; students had to construct the equation “a + 62 = 87”, which formed the basis for further problem-solving. Similarly, in Lesson 7, students worked with comparison problems to formulate a conceptual model of “smaller + difference = bigger”, which followed a similar process of abstraction and numerical representation as seen in the Lesson 3 tasks. Notably, B7.1 and B3.5 were the first problems introducing students to these respective problem types, serving as starting points for distinct conceptual frameworks.
Students exhibited a significantly higher error rate in the second step, indicating greater difficulty in assigning numerical values to equations. Further analysis using “Elements Vis” revealed that a substantial number of students may have struggled with understanding the placeholder “a”. This difficulty was evidenced by various behavioral indicators in the visualization. For instance, some students displayed prolonged fixations on “a”, suggesting that they were attempting to comprehend its meaning and its relation to the equation model (Figure 9a). Additionally, the mouse-dragging trajectories of some students for “a” showed significant curvature (Figure 9b). Based on Bezier curve analysis, this indicated extremely slow dragging speeds, reflecting a lack of confidence and hesitation when placing “a” into the equation [15,25]. Furthermore, some students repeatedly repositioned “a” within the equation. As shown in Figure 9c, one student initially placed “a” in the position of “left” but subsequently moved it to the position of “gave”. These behaviors collectively indicated that a proportion of students experienced problem-solving difficulties in B3.5_2 due to challenges in comprehending the placeholder “a” and its connection to the corresponding mathematical model equation. This visual analysis suggests that teachers should provide additional support to help students develop a clearer conceptual understanding of placeholders and their role in mathematical problem-solving.
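The trajectory-curvature cue can be approximated without a full Bezier fit; a common proxy in mouse-tracking analysis (cf. MouseTracker [25]) is the maximum perpendicular deviation of the drag path from the straight start-to-end line. The sketch below (hypothetical function name; simplified relative to the Bezier-based analysis used in the study) illustrates the idea.

```python
import math

def max_deviation(points):
    """Maximum perpendicular distance of a drag path from the straight
    line joining its first and last points (a simple curvature proxy)."""
    (x0, y0), (x1, y1) = points[0], points[-1]
    dx, dy = x1 - x0, y1 - y0
    length = math.hypot(dx, dy)
    if length == 0:
        return 0.0
    return max(abs(dy * (x - x0) - dx * (y - y0)) / length for x, y in points)

straight = [(0, 0), (5, 0), (10, 0)]   # direct, confident drag
curved = [(0, 0), (5, 4), (10, 0)]     # bowed drag, e.g. hesitating with "a"
print(max_deviation(straight), max_deviation(curved))  # 0.0 4.0
```

A near-zero deviation suggests a direct drag, while large deviations, like those observed for the placeholder “a”, are consistent with hesitation.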

4.5. Outlier Identification

MultiScaleAnalyzer provides a multi-resolution visualization approach that allows educators to thoroughly examine data and uncover subtle outliers or patterns. Many students exhibited long fixations in areas corresponding to higher positions on the y-axis; in Figure 10, the green color marks areas containing no task-related information. In the context of COMPS tasks, this region aligned with the upper section of the interface. Interestingly, this fixation pattern was not limited to a single student but was observed across multiple students. One notable case involved S10, who, after spending an extended period fixating on this region, unintentionally ended the task. Further investigation using the raw data view revealed that the area of focus was actually the menu button located in the upper-right corner of the interface. This button serves as a navigation tool to access the program’s menu page. In the case of S10, curiosity led the student to click the menu button, which inadvertently terminated the task. This finding highlights a common phenomenon in which students become distracted by navigational elements, especially when their attention is not fully engaged or when they have been working on a single task for a long period. Based on this observation, we recommend weakening the visual prominence of the menu button by using less noticeable colors, removing dynamic cursor changes when hovering over the button, and making it appear less interactive, thereby reducing accidental interactions and mitigating distractions caused by non-essential interface elements.
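The kind of check that surfaces this outlier can be sketched as follows (hypothetical AOI coordinates and function name; the actual interface layout differs): summing fixation time that falls outside every task-related AOI flags dwells on non-essential regions such as the menu button.

```python
def outside_aoi_time(fixations, aois):
    """Total fixation duration (ms) landing outside every task-related AOI."""
    def in_aoi(x, y, box):
        ax, ay, w, h = box
        return ax <= x <= ax + w and ay <= y <= ay + h
    return sum(dur for x, y, dur in fixations
               if not any(in_aoi(x, y, box) for box in aois.values()))

# Hypothetical layout: AOIs as (x, y, width, height) boxes in screen pixels.
aois = {"problem_text": (0, 300, 800, 200), "equation": (0, 100, 800, 150)}
fixations = [(400, 350, 900), (780, 20, 2400), (400, 150, 500)]  # (x, y, ms)
print(outside_aoi_time(fixations, aois))  # 2400
```

A large outside-AOI total for a student, as in the S10 case, is a cheap signal that attention drifted to interface chrome rather than task content.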

5. Discussion

The findings from the use cases offer valuable insights into the three research questions. (1) How can the visualization framework of MultiScaleAnalyzer support educators in presenting and analyzing complex spatiotemporal learning data? MultiScaleAnalyzer helps educators explore spatiotemporal learning data through hierarchical, multiscale visualizations. The system enables seamless navigation between broad trends, group-level patterns, and individual-level data, making complex datasets more interpretable. (2) What learning behaviors or patterns can MultiScaleAnalyzer help educators identify? MultiScaleAnalyzer reveals important learning behaviors, including attention allocation, problem-solving strategies, cognitive challenges, and behavioral anomalies. For example, it demonstrates how high-performing and low-performing students differ in their attention distribution and problem-solving approaches, helping educators better understand students’ cognitive processes and engagement. (3) What is the role of different types of spatiotemporal learning data in understanding students’ cognition and behavioral processes? Mouse movement data provide insights into decision-making and task strategies, while eye-tracking data reveal attention distribution and cognitive focus. By combining these data types, MultiScaleAnalyzer offers a more holistic view of students’ learning processes and identifies areas where support may be needed.

5.1. Intuitive Design

Spatiotemporal learning data are difficult to comprehend and analyze due to their multidimensional nature. Factors such as spatial locations, time sequences, and additional contextual information introduce significant cognitive barriers, making it difficult for users to effectively interpret the data. Existing spatiotemporal visualization tools, such as AOI Rivers and trajectory heat maps, often focus on single-modal data or provide static visualizations that lack interactivity. These tools are typically designed for specific use cases and are not equipped to integrate multiple data streams or contextualize data within problem-solving scenarios. In contrast, MultiScaleAnalyzer bridges this gap by providing a user-friendly and intuitive design that links spatiotemporal data with their real-world context. It achieves this by reconstructing problem-solving scenarios and linking data to relevant contextual information, enabling educators to gain deeper insights into student learning behaviors.
Furthermore, MultiScaleAnalyzer employs a variety of visualizations, including a grid matrix, timelines, a gaze plot, and contextual visualization. The harmonious use of colors, the logical organization of visual elements, and alignment with user expectations ensure that the visualizations are not only informative but also esthetically pleasing and easy to navigate.

5.2. Scalability

MultiScaleAnalyzer enables the efficient analysis of large spatiotemporal learning datasets through a hierarchical data exploration approach. By visualizing data at three levels, macro (overview), meso (aggregated data view), and micro (raw data view), MultiScaleAnalyzer allows users to seamlessly transition among global patterns, group-level behaviors, and individual actions. This hierarchical structure supports both top-down reasoning (progressing from overall insights to detailed analysis) and bottom-up investigation (linking individual behaviors to larger patterns), ensuring a comprehensive understanding of complex datasets.
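As a rough sketch of this idea (hypothetical event schema, not the system’s actual data model), a single event log can back all three resolutions, so moving between levels is a matter of re-aggregation rather than switching datasets:

```python
from collections import defaultdict

# Hypothetical schema: (student, problem, timestamp_ms, x, y)
events = [
    ("S1", "B3.5_2", 120, 410, 330),
    ("S1", "B3.5_2", 480, 415, 332),
    ("S2", "B3.5_2", 200, 100, 120),
]

def macro(events):
    """Overview: event counts per problem across all students."""
    counts = defaultdict(int)
    for _, problem, *_ in events:
        counts[problem] += 1
    return dict(counts)

def meso(events, problem):
    """Aggregated view: per-student counts for one problem."""
    counts = defaultdict(int)
    for student, p, *_ in events:
        if p == problem:
            counts[student] += 1
    return dict(counts)

def micro(events, student, problem):
    """Raw data view: the individual records themselves."""
    return [e for e in events if e[0] == student and e[1] == problem]
```

Because every level is derived from the same records, a drill-down from an overview cell to a raw trajectory preserves provenance, which is what makes top-down and bottom-up navigation consistent.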
By allowing users to move fluidly across these levels of analysis, MultiScaleAnalyzer addresses a key challenge of linking detailed data with broader patterns in large-scale, spatiotemporal datasets. Existing tools, while valuable for providing either high-level overviews or detailed raw data views, frequently lack mechanisms for seamlessly transitioning between these levels. MultiScaleAnalyzer fills this gap by enabling users to explore educational data interactively and flexibly across scales, enhancing both clarity and usability.

5.3. Efficient Interaction

Another advantage of MultiScaleAnalyzer is its efficient interaction design. Multiple options for selection and filtering are provided, allowing users to easily navigate and explore complex datasets. For instance, users can leverage a collapsible menu to select multiple problems or students efficiently. Users can examine all students’ performance for selected problems, analyze a single student’s performance on all questions, or dive into detailed data for a specific student on a specific question. Through these interactions, users can intuitively query and retrieve the most relevant information without being overwhelmed.
Another key feature of the interaction design is the dynamic linking among views. When users perform a selection, all views within the system update simultaneously to provide a coherent and synchronized understanding of the data. Additionally, interactive tools such as zooming and panning enable users to observe data views at various scales. The combination of data selection, dynamic view synchronization, and interactive tools significantly enhances analysis efficiency.

5.4. Generalizability

The generalizability of MultiScaleAnalyzer presents both strengths and limitations. On one hand, MultiScaleAnalyzer can be applied across diverse datasets and educational contexts, such as traditional classroom activities, online collaborative platforms, and virtual simulations. For example, in VR-based learning environments, where students interact with 3D objects or immersive simulations, MultiScaleAnalyzer could jointly analyze gaze data and motion tracking to help educators understand how students navigate virtual spaces or respond to stimuli. Similarly, in traditional classroom settings, where students engage in face-to-face activities or interact with physical learning materials, MultiScaleAnalyzer could analyze gaze patterns and interactions to help educators understand how students focus on specific teaching materials, collaborate with peers, or respond to instructor-led activities. This flexibility is enabled by key design features, including customizable areas of interest (AOIs), adjustable information scales, and compatibility with various data types (e.g., eye-tracking data, interaction data, log data).
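The customizable-AOI feature underpinning this flexibility can be illustrated with a small sketch (hypothetical class and field names): because an AOI is just a labeled region, adapting the tool to a new setting, such as a classroom layout, only requires new AOI definitions rather than new visualization code.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class AOI:
    """A labeled rectangular region; coordinates are screen pixels."""
    label: str
    x: float
    y: float
    width: float
    height: float

    def contains(self, px, py):
        return (self.x <= px <= self.x + self.width
                and self.y <= py <= self.y + self.height)

def label_samples(samples, aois):
    """Map raw gaze or interaction samples to AOI labels ('other' if none)."""
    return [next((a.label for a in aois if a.contains(x, y)), "other")
            for x, y in samples]

# Hypothetical classroom layout: only the AOI definitions change per scenario.
classroom = [AOI("whiteboard", 0, 0, 200, 100), AOI("worksheet", 0, 150, 120, 80)]
print(label_samples([(50, 40), (60, 170), (300, 300)], classroom))
# ['whiteboard', 'worksheet', 'other']
```

The same labeling function would serve eye-tracking, mouse, or other interaction streams, which is what allows the framework to accept the varied data types listed above.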
However, the aggregated data view “Elements Vis”, which reconstructs problem-solving scenarios, poses challenges for the generalizability of MultiScaleAnalyzer. While layout reconstruction enhances visualization, it may not transfer easily to other learning scenarios or datasets, especially when layouts are highly specific or complex. In summary, although MultiScaleAnalyzer demonstrates strong generalizability through its adaptable framework and multiscale approach, its scenario reconstructions constrain its applicability in certain contexts. These trade-offs highlight the need to design context-flexible visualizations in future iterations.

5.5. Limitations and Future Work

This study primarily employs qualitative methods to evaluate the design and application of MultiScaleAnalyzer, focusing on foundational questions such as whether the tool enables educators to identify meaningful patterns in spatiotemporal learning data and what specific patterns can be observed. While qualitative analysis is well suited for these exploratory objectives, it does not provide the quantitative metrics necessary to systematically assess the tool’s efficiency and accuracy. In our future work, we plan to conduct a quantitative evaluation to compare MultiScaleAnalyzer with other analysis methods, utilizing a larger student sample size and involving a broader group of educators to provide a more comprehensive and statistically robust assessment of its capabilities and impact on educational practices.

6. Conclusions

We propose MultiScaleAnalyzer, a novel visualization system for presenting learning data and illustrating complex spatiotemporal patterns, such as eye-tracking and mouse movements in learning analysis. MultiScaleAnalyzer employs a hierarchical structure to present data at varying resolutions, enabling educators to analyze information from overviews to details based on their individual needs. This multiscale framework significantly reduces educators’ cognitive load during data analysis and makes the analysis process more intuitive and user-friendly.
Through the problem-solving use cases, MultiScaleAnalyzer proved its effectiveness in helping educators identify group patterns, common challenges, problem-solving strategies, and specific outliers. It can be applied to a wide range of learning datasets and educational scenarios, thereby supporting educators in understanding students’ learning processes and enhancing their teaching strategies.

Author Contributions

Conceptualization, S.W.; methodology, S.W. and Y.C.; software, S.W. and C.G.; validation, Y.C. and Y.P.X.; formal analysis, S.W. and C.G.; investigation, S.W.; resources, S.W., Y.C. and Y.P.X.; data curation, S.W.; writing—original draft preparation, S.W. and C.G.; writing—review and editing, Q.L. and Y.P.X.; visualization, S.W. and C.G.; supervision, Y.C. and Y.P.X.; project administration, S.W.; funding acquisition, S.W., Y.C. and Y.P.X. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the 2024 Jiangsu Provincial Education Science Planning Project—Special Project (Grant number: C/2024/01/13) and the U.S. National Science Foundation (Grant number: 1503451).

Institutional Review Board Statement

This study was conducted in accordance with the Declaration of Helsinki and approved by the Institutional Review Board of Purdue University (Approval No. 1803020429, approved on 4 November 2018).

Informed Consent Statement

Informed consent was obtained from all subjects involved in this study.

Data Availability Statement

The data that support the findings of this study are available upon request from the corresponding author. The data are not publicly available due to confidentiality and research ethics.

Acknowledgments

The authors would like to acknowledge the use of ChatGPT, version October 2023 (GPT-4), for providing assistance with language refinement and improving readability during the manuscript preparation. The tool contributed to the improvement of grammar and wording but was not involved in research methodology, data analysis, or the development of scientific content.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Rayner, K. Eye Movements in Reading and Information Processing: 20 Years of Research. Psychol. Bull. 1998, 124, 372–422. [Google Scholar] [CrossRef] [PubMed]
  2. Boonen, A.J.; de Koning, B.B.; Jolles, J.; van der Schoot, M. Word Problem Solving in Contemporary Math Education: A Plea for Reading Comprehension Skills Training. Front. Psychol. 2016, 7, 191. [Google Scholar] [CrossRef] [PubMed]
  3. Moutsios-Rentzos, A.; Stamatis, P.J. One-Step ‘Change’ and ‘Compare’ Word Problems: Focusing on Eye-Movements. Electron. J. Res. Educ. Psychol. 2017, 13, 503–528. [Google Scholar] [CrossRef]
  4. Kabugo, D.; Muyinda, P.B.; Masagazi, F.M.; Mugagga, A.M.; Mulumba, M.B. Tracking Students’ Eye-Movements When Reading Learning Objects on Mobile Phones: A Discourse Analysis of Luganda Language Teacher-Trainees’ Reflective Observations. J. Learn. Dev. 2016, 3, 179–182. [Google Scholar] [CrossRef]
  5. Catrysse, L.; Gijbels, D.; Donche, V.; De Maeyer, S.; Lesterhuis, M.; Van den Bossche, P. How Are Learning Strategies Reflected in The Eyes? Combining Results from Self-Reports and Eye-Tracking. Br. J. Educ. Psychol. 2018, 88, 118–137. [Google Scholar] [CrossRef]
  6. Renkewitz, F.; Jahn, G. Memory Indexing: A Novel Method for Tracing Memory Processes in Complex Cognitive Tasks. J. Exp. Psychol. Learn. Mem. Cogn. 2012, 38, 1622–1639. [Google Scholar] [CrossRef]
  7. Blignaut, P.; Wium, D. Eye-Tracking Data Quality as Affected by Ethnicity and Experimental Design. Behav. Res. Methods 2014, 46, 67–80. [Google Scholar] [CrossRef]
  8. de Koning, B.B.; Tabbers, H.K.; Rikers, R.M.J.P.; Paas, F. Attention Guidance in Learning from a Complex Animation: Seeing is Understanding? Learn. Instr. 2010, 20, 111–122. [Google Scholar] [CrossRef]
  9. Yüksel, P.; Yıldırım, S. Theoretical Frameworks, Methods, and Procedures for Conducting Phenomenological Studies. Turk. Online J. Qual. Inq. 2015, 6, 1–20. [Google Scholar] [CrossRef]
  10. Susac, A.; Bubic, A.; Kaponja, J.; Planinic, M.; Palmovic, M. Eye Movements Reveal Students’ Strategies in Simple Equation Solving. Int. J. Sci. Math. Educ. 2014, 12, 555–577. [Google Scholar] [CrossRef]
  11. Wu, C.-J.; Liu, C.-Y.; Yang, C.-H.; Jian, Y.-C. Eye-movements reveal children’s deliberative thinking and predict performance on arithmetic word problems. Eur. J. Psychol. Educ. 2020, 36, 91–108. [Google Scholar] [CrossRef]
  12. Wei, S.; Lei, Q.L.; Chen, Y.J.; Xin, Y.P. The Effects of Visual Cueing on Students with and Without Math Learning Difficulties in Online Problem Solving: Evidence from Eye Movement. Behav. Sci. 2023, 13, 927. [Google Scholar] [CrossRef] [PubMed]
  13. Hehman, E.; Stolier, R.M.; Freeman, J.B. Advanced Mouse-Tracking Analytic Techniques for Enhancing Psychological Science. Group Process. Intergroup Relat. 2015, 18, 384–401. [Google Scholar] [CrossRef]
  14. Azcarraga, J.; Suarez, M.T. Recognizing Student Emotions using Brainwaves and Mouse Behavior Data. Int. J. Distance Educ. Technol. 2013, 11, 1–15. [Google Scholar] [CrossRef]
  15. Yamauchi, T. Mouse Trajectories and State Anxiety: Feature Selection with Random Forest. In Proceedings of the 2013 Humaine Association Conference on Affective Computing and Intelligent Interaction, Geneva, Switzerland, 2–5 September 2013; pp. 399–404. [Google Scholar]
  16. Pimenta, A.; Carneiro, D.; Neves, J.; Novais, P. Improving User Privacy and the Accuracy of User Identification in Behavioral Biometrics; Springer: Berlin/Heidelberg, Germany, 2015; pp. 15–26. [Google Scholar]
  17. Mitsumasa, Z.; Yoshinori, M.; Ken, N. Web Application for Recording Learners’ Mouse Trajectories and Retrieving Their Study Logs for Data Analysis. Knowl. Manag. E-Learn. Int. J. 2012, 4, 37–49. [Google Scholar]
  18. Koedinger, K.R.; Baker, R.S.J.; Cunningham, K.; Skogsholm, A.; Leber, B.; Stamper, J. A Data Repository for the EDM Community: The PSLC DataShop. In Handbook of Educational Data Mining; CRC Press: Boca Raton, FL, USA, 2010; pp. 43–55. [Google Scholar] [CrossRef]
  19. Blascheck, T.; Kurzhals, K.; Raschke, M.; Burch, M.; Weiskopf, D.; Ertl, T. Visualization of Eye Tracking Data: A Taxonomy and Survey. Comput. Graph. Forum 2017, 36, 260–284. [Google Scholar] [CrossRef]
  20. Burch, M.; Kull, A.; Weiskopf, D. AOI Rivers for Visualizing Dynamic Eye Gaze Frequencies. Comput. Graph. Forum 2013, 32, 281–290. [Google Scholar] [CrossRef]
  21. Kurzhals, K.; Hlawatsch, M.; Seeger, C.; Weiskopf, D. Visual Analytics for Mobile Eye Tracking. IEEE Trans. Vis. Comput. Graph. 2017, 23, 301–310. [Google Scholar] [CrossRef]
  22. Göbel, F.; Kiefer, P.; Raubal, M. FeaturEyeTrack: Automatic Matching of Eye Tracking Data with Map Features on Interactive Maps. GeoInformatica 2019, 23, 663–687. [Google Scholar] [CrossRef]
  23. Tang, S.; Reilly, R.G.; Vorstius, C. EyeMap: A Software System for Visualizing and Analyzing Eye Movement Data in Reading. Behav. Res. Methods 2012, 44, 420–438. [Google Scholar] [CrossRef]
  24. Thakur, S.; Hanson, A.J. A 3D Visualization of Multiple Time Series on Maps. In Proceedings of the 14th International Conference on Information Visualisation (IV), London, UK, 26–29 July 2010; pp. 336–343. [Google Scholar]
  25. Freeman, J.B.; Ambady, N. MouseTracker: Software for Studying Real-Time Mental Processing Using a Computer Mouse-Tracking Method. Behav. Res. Methods 2010, 42, 226–241. [Google Scholar] [CrossRef] [PubMed]
  26. Duran, N.D.; Dale, R.; McNamara, D.S. The Action Dynamics of Overcoming the Truth. Psychon. Bull. Rev. 2010, 17, 486–491. [Google Scholar] [CrossRef] [PubMed]
  27. Elmqvist, N.; Fekete, J.D. Hierarchical Aggregation for Information Visualization: Overview, Techniques, and Design Guidelines. IEEE Trans. Vis. Comput. Graph. 2010, 16, 439–454. [Google Scholar] [CrossRef] [PubMed]
  28. Krassanakis, V.; Kesidis, A.L. MatMouse: A Mouse Movements Tracking and Analysis Toolbox for Visual Search Experiments. Multimodal Technol. Interact. 2020, 4, 83. [Google Scholar] [CrossRef]
  29. Zgonnikov, A.; Aleni, A.; Piiroinen, P.T.; O’Hora, D.; di Bernardo, M. Decision Landscapes: Visualizing Mouse-Tracking Data. R. Soc. Open Sci. 2017, 4, 170482. [Google Scholar] [CrossRef]
  30. Shneiderman, B. The Eyes Have It: A Task by Data Type Taxonomy for Information Visualizations. In Proceedings of the 1996 IEEE Symposium on Visual Languages, Boulder, CO, USA, 3–6 September 1996; pp. 336–343. [Google Scholar] [CrossRef]
  31. Woodring, J.; Shen, H.W. Multiscale Time Activity Data Exploration via Temporal Clustering Visualization Spreadsheet. IEEE Trans. Vis. Comput. Graph. 2009, 15, 123–137. [Google Scholar] [CrossRef]
  32. Viola, I.; Isenberg, T. Pondering the Concept of Abstraction in (Illustrative) Visualization. IEEE Trans. Vis. Comput. Graph. 2018, 24, 2573–2588. [Google Scholar] [CrossRef]
  33. Xin, Y.P.; Kastberg, S.; Chen, Y. Conceptual Model-Based Problem Solving: A Response to Intervention Program for Students with Learning Difficulties in Mathematics (COMPS-RtI); U.S. National Science Foundation Funded Project, Grant No. 1503451. 2015. Available online: https://www.nsf.gov/awardsearch/showAward?AWD_ID=1503451&HistoricalAwards=false (accessed on 8 April 2025).
  34. Xin, Y.P.; Kim, S.J.; Lei, Q.; Liu, B.Y.; Wei, S.; Kastberg, S.E.; Chen, Y.V. The Effect of Model-Based Problem Solving on the Performance of Students Who are Struggling in Mathematics. J. Spec. Educ. 2023, 57, 181–192. [Google Scholar] [CrossRef]
  35. Biswas, A.; Lin, G.; Liu, X.; Shen, H.W. Visualization of Time-Varying Weather Ensembles across Multiple Resolutions. IEEE Trans. Vis. Comput. Graph. 2017, 23, 841–850. [Google Scholar] [CrossRef]
  36. Vera, J.F.; Macías, R. Variance-Based Cluster Selection Criteria in a K Means Framework for One-Mode Dissimilarity Data. Psychometrika 2017, 82, 275–294. [Google Scholar] [CrossRef]
  37. Campello, R.; Moulavi, D.; Zimek, A.; Sander, J. Hierarchical Density Estimates for Data Clustering, Visualization, and Outlier Detection. ACM Trans. Knowl. Discov. Data 2015, 10, 1–51. [Google Scholar] [CrossRef]
  38. Tang, H.; Kirk, J.; Pienta, N.J. Investigating the Effect of Complexity Factors in Stoichiometry Problems Using Logistic Regression and Eye Tracking. J. Chem. Educ. 2014, 91, 969–975. [Google Scholar] [CrossRef]
  39. Bézier, P. Définition numérique des courbes et surfaces. Automatisme 1966, 11, 625–632. [Google Scholar]
  40. Martín-Albo, D.; Leiva, L.A.; Huang, J.; Plamondon, R. Strokes of Insight: User Intent Detection and Kinematic Compression of Mouse Cursor Trails. Inf. Process. Manag. 2016, 52, 989–1003. [Google Scholar] [CrossRef]
  41. Xin, Y.P.; Kim, S.J.; Lei, Q.; Wei, S.; Liu, B.; Wang, W.; Kastberg, S.; Chen, Y.; Yang, X.; Ma, X.; et al. The Impact of a Conceptual Model-based Intervention Program on math problem-solving performance of at-risk English learners. Read. Writ. Q. Overcoming Learn. Difficulties 2020, 36, 104–123. [Google Scholar] [CrossRef]
  42. Hegarty, M.; Mayer, R.E.; Monk, C.A. Comprehension of Arithmetic Word Problems: A Comparison of Successful and Unsuccessful Problem Solvers. J. Educ. Psychol. 1995, 87, 18–32. [Google Scholar] [CrossRef]
Figure 1. MultiScaleAnalyzer platform interface. (a) Overview; (b) Aggregated data view—AOI visualization; (c) Aggregated data view—elements visualization (adapted screen display from ©COMPS-RtI Tutor [33,34]); (d) Raw data view (adapted screen display from ©COMPS-RtI Tutor [33,34]).
Figure 2. Overview ordered by performance.
Figure 3. AOI-based attention pattern of S1.
Figure 4. AOI-based attention pattern of S9.
Figure 5. AOI-based attention pattern of S7.
Figure 6. AOI-based attention pattern of S16.
Figure 7. “Elements Vis” showing students’ problem-solving strategies. (a) S7 comprehensively read the entire problem (model-based problem-solving strategy); (b) S16 focused on specific keywords (keyword strategy).
Figure 8. Students’ performance of B3.5 and B7.1.
Figure 9. “Elements Vis” showing students’ problem-solving difficulty. (a) Prolonged fixations on “a”, indicating efforts to comprehend its meaning; (b) Mouse-dragging trajectory of “a” showing significant curvature, reflecting hesitation; (c) Repeated repositioning of “a”, indicating uncertainty.
Figure 10. “AOI Vis” showing outliers.