AI-MDD-UX: Revolutionizing E-Commerce User Experience with Generative AI and Model-Driven Development
Abstract
1. Introduction
- We present a novel hybrid approach that combines generative AI and MDD to ensure real-time UI/UX optimization in e-commerce. AI leverages KMeans++ for behavior-driven user clustering, GANs for automated UI generation, and RL agents for continuous interface adaptation, while MDD provides the structural foundation for consistency and automated adaptation.
- This study contributes to the growing body of research on AI-driven e-commerce by introducing a unified AI-MDD architecture that enables structured, scalable UX design and intelligent personalization. The framework integrates real-time user behavior analysis with multi-agent reinforcement learning and generative models to dynamically generate user interfaces. Meanwhile, AI continuously monitors user feedback to refine UI elements, ensuring an adaptive and user-centric experience.
- We validate system performance across diverse interaction scenarios in the Saudi Arabian e-commerce market through technical validation, evaluating system quality for different user behaviors and measuring conversion rates, user engagement, and adaptive UI responsiveness for broader deployment.
2. Related Works
2.1. The Static UI Adaptation Problem: SUIAP
2.2. The Dynamic UI/UX Adaptation Problem: DUIAP
2.3. Comparative Summary and Innovation of This Study
3. Problem Statement and Mathematical Formalization
3.1. Problem Statement
- Real-time UI adaptation: Unlike static UI adaptation, which relies on predefined rules and remains unchanged during runtime, dynamic UI adaptation continuously updates UI components based on real-time user interactions and personalized needs. These updates capture the evolving nature of user behavior, enabling a more adaptive and responsive decision-making process.
- Personalized and Evolving User Needs: Dynamic UI adaptation tailors experiences at an individual level, whereas static UI adaptation applies a uniform interface for all users. Since user needs evolve over time, dynamic adaptation requires sophisticated optimization mechanisms to ensure continuous relevance, engagement, and user satisfaction.
- User Experience Optimization: Effective dynamic UI adaptation must prioritize user satisfaction and interaction quality. Traditional static approaches fail to consider real-time user feedback and adaptive refinements. By dynamically personalizing UI layouts, optimizing responsiveness, and reducing dropout rates, dynamic adaptation fosters higher engagement and an enhanced user experience.
3.2. Mathematical Formalization
- Let U = {u1, u2, …, un} be the set of users, and let I_u(t) represent the set of interactions for a given user u at time t.
- Each interaction in I_u(t) consists of multiple behavioral features:
○ Click frequency
○ Page views
○ Session duration
○ Search queries
○ Checkout behavior
○ Device type
- The UI adaptation function f, which determines the UI for a given user u at time t, is defined as UI_u(t) = f(u, I_u(t)).
- To improve personalization, users are grouped based on interaction patterns using K-Means++ clustering.
○ Let G = {G1, G2, …, GL} denote the set of user clusters.
○ Each user u belongs to the cluster G_k whose centroid C_k is nearest to the user's feature vector x_u, i.e., k = argmin_j ||x_u − C_j||.
- The effectiveness of a dynamic UI adaptation strategy is evaluated using an optimization function that combines engagement E(t), user experience UX(t), satisfaction S(t), and dropout D(t): F(t) = α·E(t) + β·UX(t) + γ·S(t) − δ·D(t), where:
○ E(t) measures the level of user interaction with the system. It quantifies user actions (click frequency, page views, session duration, search queries, checkout behavior) that indicate engagement. Each action a carries a weight updated as w_a(t) = w_a(t−1) + η·Δ_a(t), where:
• w_a(t) is the updated weight for action a at time t.
• w_a(t−1) is the previous weight at time t−1.
• η is a learning rate controlling the adjustment speed.
• Δ_a(t) represents the influence of new data or feedback on action a.
○ UX(t) represents the user experience score, which evaluates UI fluidity, responsiveness, and ease of use. It combines the retention rate RR(t), response time RT(t), and error rate ER(t) at time t.
○ S(t) represents user satisfaction and captures subjective user feedback through surveys, ratings, or implicit behavior analysis. It combines the net promoter score NPS(t), customer satisfaction score CSAT(t), and click-through rate CTR(t).
○ D(t) represents the dropout rate (e.g., users abandoning the session).
○ α, β, γ, δ are weighting factors for engagement, user experience, satisfaction, and dropout.
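The optimization above can be sketched in pure Python. The symbol names (E, UX, S, D), the additive form of F(t), and the numeric weights are assumptions reconstructed from the definitions in this section, not the paper's exact formula:

```python
def update_weight(w_prev, eta, delta):
    """Weight update for one action: w(t) = w(t-1) + eta * delta."""
    return w_prev + eta * delta

def adaptation_score(engagement, experience, satisfaction, dropout,
                     alpha=0.4, beta=0.3, gamma=0.2, delta=0.1):
    """Hypothetical objective F(t): reward engagement, experience, and
    satisfaction; penalize dropout. The weights here are illustrative only."""
    return (alpha * engagement + beta * experience
            + gamma * satisfaction - delta * dropout)

# Example: the 'click' action's weight nudged up by positive feedback,
# then the overall adaptation score for one time step.
w_click = update_weight(0.5, eta=0.1, delta=0.8)
score = adaptation_score(engagement=0.7, experience=0.8,
                         satisfaction=0.9, dropout=0.2)
```

A higher score indicates that the current UI configuration better balances engagement against abandonment, which is the quantity the adaptation loop seeks to maximize.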
4. Methodology
4.1. Architectural Framework for Real-Time UX Adaptation
1. Front-End Layer: This layer is responsible for displaying adaptive UI layouts, collecting user interactions, and rendering optimized UI components. It consists of two main modules:
- User Interaction Tracker Module: Captures scrolls, clicks, hovers, dwell time, and gestures. It sends real-time events to the Multi-Agent System (MAS) layer for processing.
- Real-Time UI Adjustment and Rendering Module: This module dynamically updates and renders UI elements based on AI-driven recommendations. It consists of:
▪ UI Component Manager: Handles UI components and maintains UI states.
▪ UI Deployer: Converts JSON-based UI models into deployable interfaces for platforms such as React.js, Flutter, and WebVR, using MDD tools.
▪ UI Renderer: Displays the final UI layout after adaptation and optimization.
2. Back-End Layer: This layer processes user interactions, clusters user behavior, and adapts UI layouts through three main modules:
- AI-Behavior Analyzer Module: This module receives user interaction logs and applies unsupervised clustering (KMeans++) to group users into different personas. It also stores personas and preferences in the Knowledge Base (KB) and shares insights with the AI-Driven UI Adaptation and Optimization Module.
- AI-Driven UI Adaptation and Optimization Module: This module enables AI-driven personalization and optimization. It consists of:
▪ GAN-Based UI Generator: Generates adaptive UI layouts based on user preferences and behavioral clusters. It also uses a discriminator network to validate usability and coherence.
▪ Multi-Agent Mediator: Acts as an intermediary between the GAN and RL components, ensuring smooth decision-making based on adaptive rules and performance metrics. Multiple agents may share policies while executing dedicated adaptation tasks.
▪ RL-Based Optimizer: Implements a Q-learning-based UI refinement engine that optimizes layouts and applies logical adaptation rules for early, priority, and predictive guidance.
- Continuous Improvement Module: This module monitors user engagement and interaction data to fine-tune models dynamically. It consists of:
▪ AI Performance Monitor: Monitors new user interactions and system performance while computing CTR (Click-Through Rate), engagement scores, and load-time performance.
▪ Fine-Tuning Module: Generates UI updates and adjusts learning parameters based on user interaction metrics and system performance.
3. Knowledge Base Layer: Serves as the central repository for storing historical user interactions, UI performance data, and contextual information. It includes:
▪ User Profiles: Stores individual user behaviors and preferences.
▪ Historical UI Performance: Logs past UI adaptations and their success rates, including state–action–reward experiences to improve RL learning.
▪ Adaptation Rules: Stores logical rules for priority and predictive guidance.
▪ Persona Repository: Maintains essential information on user personas and preferences for personalized UI adaptation.
4. MDD Tools Layer: Consists of various tools (e.g., Eclipse IDE, the ATL transformation language, and Acceleo) and models organized into four abstraction levels: (1) the Computation Independent Model (CIM), which captures business logic without system details; (2) the Platform Independent Model (PIM), which defines functional behavior independent of implementation technologies; (3) the Platform Specific Model (PSM), which adapts UI models for different target platforms; and (4) Code Generation and Deployment, which converts UI models into executable code for web and mobile platforms.
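The UI Deployer's model-to-code step can be illustrated with a minimal sketch: a JSON-based UI model (PSM level) is walked recursively and emitted as a platform-specific component string. The JSON schema and the `to_react` emitter below are hypothetical stand-ins for the actual ATL/Acceleo transformation chain, included only to make the idea concrete:

```python
import json

# Hypothetical JSON-based UI model node: a type, its props, and child nodes.
ui_model = json.loads("""
{"type": "Button", "props": {"label": "Buy now", "color": "primary"}, "children": []}
""")

def to_react(node, indent=0):
    """Emit a React-style JSX string from a UI model node (illustrative only)."""
    pad = "  " * indent
    props = " ".join(f'{k}="{v}"' for k, v in node.get("props", {}).items())
    open_tag = f"{pad}<{node['type']}{' ' + props if props else ''}>"
    children = "\n".join(to_react(c, indent + 1) for c in node.get("children", []))
    close_tag = f"{pad}</{node['type']}>"
    # Skip the empty children string so leaf nodes stay compact.
    return "\n".join(part for part in (open_tag, children, close_tag) if part)

jsx = to_react(ui_model)
```

A Flutter or WebVR emitter would follow the same walk with a different template, which is precisely the PSM-per-platform separation the layer describes.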
4.2. Metamodels in AI-MDD-UX Vision
- CIM Metamodel—User Interaction Modeling: Captures user behavior without technical constraints. Key classes include (1) User, (2) Interaction, (3) Persona, and (4) UI Elements. The User is defined by user ID, interaction history, and preferences (e.g., shopping habits, browsing interactions). User Persona represents structured behavior types and shopping patterns. Interaction includes attributes such as click rate, session duration, and device type. The UI Elements class denotes adaptive components with engagement metrics.
- PIM Metamodel—Cluster Modeling and Feature Vectors: The PIM metamodel abstracts behavioral clustering and feature representation. It includes (1) the User-Cluster class, which defines a set of similar user behaviors; (2) the Feature-Vector class, which captures metrics like click rate and purchase history; and (3) the Persona Embeddings class, which encodes behavior into AI-compatible vectors.
- PSM Metamodel—AI-Driven UI Generation and Optimization: The PSM metamodel focuses on UI generation through GANs and RL.
▪ GAN-Driven UI Generation Metamodel: This metamodel defines generative processes for adaptive UI design.
▪ Multi-Agent RL-Driven UI Adaptation and Optimization Metamodel: This metamodel integrates RL to fine-tune the UI in response to user interactions. It consists of several key classes: the UI State class, which represents the current UI layout and personalization options; the RL-Agent class, which learns optimal UI adjustments with attributes such as state, action, reward, and policy; the Adaptation Rule class, which encapsulates rule-based UI adjustments; and the Reward Function class, which computes feedback for RL optimization.
- UI Code Generation and Deployment Metamodel: This metamodel defines platform-specific models for web, mobile, and VR interfaces. The Web Platform includes React components, dynamic layouts, and responsive designs. The Mobile Platform focuses on Flutter widgets, touch interactions, and mobile-optimized UIs. The VR Platform covers WebVR-specific components such as 3D models, VR UI elements, and immersive interactions. The core elements include (1) the Code Generator, which is defined by language, framework, and target platform (e.g., React.js for Web, Flutter for Mobile, WebVR for VR), and (2) the Deployment Manager, which manages deployment configurations across platforms.
4.3. The Workflow for Real-Time UI/UX Adaptation in AI-MDD-UX
4.3.1. User Interaction Capture
4.3.2. KMeans++ Based User Behavior Clustering
- K-Means++ technique [15]: Groups users into clusters based on similar interaction patterns, such as checkout behavior, allowing the system to generate tailored UI designs dynamically.
- Improved convergence and accuracy: Enhances the traditional K-Means algorithm by optimizing centroid initialization, leading to faster convergence and more accurate clustering.
- Persona-based adaptation: Allows the generative AI models to tailor UI experiences according to user persona.
Algorithm 1 Enhanced K-Means++ Clustering for User Behavior
Inputs: U: collected user interaction data; F: extracted features (e.g., clicks, page views, product preferences); n: number of users
Outputs: Groups G = {G1, G2, …, GL} // Groups of similar user behaviors; Centroids C = {C1, C2, …, CL} // Representative points of each group
Begin
(1): Initialize cluster centers using K-Means++.
(2): Repeat
(3):   Assign each user U to the nearest cluster based on Euclidean distance.
(4):   Update each center by averaging the feature values of assigned users.
(5): Until no significant change in cluster assignments.
(6): Foreach user U in a cluster Do
(7):   If moving U to a different cluster improves grouping quality Then
(8):     Reassign U and update cluster centers.
(9):   End If
(10): End Foreach; repeat steps (6)-(9) until stability of centroids.
(11): Return final clusters G and centroids C.
End
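The core of Algorithm 1 can be sketched in pure Python. The two-dimensional toy features (click rate, session minutes) stand in for the paper's richer behavioral vectors, and the refinement pass (steps 6-9) is folded into the Lloyd iterations for brevity:

```python
import random

def dist2(p, q):
    """Squared Euclidean distance between two feature vectors."""
    return sum((a - b) ** 2 for a, b in zip(p, q))

def kmeanspp_init(points, k, rng):
    """K-Means++ seeding: pick each new centroid with probability
    proportional to its squared distance from the nearest chosen one."""
    centers = [rng.choice(points)]
    while len(centers) < k:
        d2 = [min(dist2(p, c) for c in centers) for p in points]
        r, acc = rng.random() * sum(d2), 0.0
        for p, w in zip(points, d2):
            acc += w
            if acc >= r:
                centers.append(p)
                break
    return centers

def kmeans(points, k, iters=50, seed=0):
    """Lloyd iterations on top of K-Means++ seeding (steps 1-5 of Algorithm 1)."""
    rng = random.Random(seed)
    centers = kmeanspp_init(points, k, rng)
    for _ in range(iters):
        groups = [[] for _ in range(k)]
        for p in points:
            nearest = min(range(k), key=lambda j: dist2(p, centers[j]))
            groups[nearest].append(p)
        new = [tuple(sum(dim) / len(g) for dim in zip(*g)) if g else centers[j]
               for j, g in enumerate(groups)]
        if new == centers:          # step 5: no change in assignments
            break
        centers = new
    return groups, centers

# Toy behavioral features: two clearly separated user segments.
users = [(0.1, 2.0), (0.2, 2.5), (0.15, 1.8),
         (5.0, 30.0), (5.2, 28.0), (4.8, 31.0)]
groups, centers = kmeans(users, k=2)
```

The distance-weighted seeding is what gives K-Means++ its faster, more reliable convergence compared with uniformly random initialization.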
4.3.3. Persona Embeddings Generation
4.3.4. GAN-Based UI Generation
- Automate UI creation and generate diverse variations from historical user data, enabling faster prototyping and adaptation to user needs.
- Adjust UI layouts, colors, and element placements based on real-time interaction data, ensuring an optimized user experience without human intervention.
- Generate personalized UI designs by training on engagement metrics such as clicks, time spent, and conversion rates.
- Ensure visually appealing and functional UI layouts that enhance engagement.
- D(x): Probability that a real UI layout x is correctly identified as real.
- D(G(z)): Probability that a generated layout G(z) is misclassified as real.
- The term E_x[log D(x)] drives the discriminator to accurately identify genuine UI layouts.
- The term E_z[log(1 − D(G(z)))] pushes the generator to create outputs that the discriminator will incorrectly classify as authentic.
4.3.5. Multi-Agents Reinforcement Learning Based UI Optimization
- -
- User Behavior Agent (UBA): Tracks interactions such as button clicks, form inputs, and hover actions.
- -
- Contextual Information Agent (CxIA): Collects environmental data, including device type (mobile or tablet), screen size, and network conditions.
- -
- Urgency Agent (UA): Monitors high-priority alerts, such as flash sales, security notifications, or order status updates.
- -
- Layout Optimization Agent (LOA): Dynamically adjust the position and size of UI elements to improve readability and accessibility based on user behavior and screen size.
Algorithm 2 GAN-Based UI Generation Algorithm
Inputs: F: user feature vectors; P: persona embeddings; z: random noise; H: historical UI dataset
Outputs: UIopt // Optimized UI layouts
Begin
(1): Initialize GAN with random weights for generator G and discriminator D.
(2): Extract feature vectors F.
(3): Compute persona embeddings P from clustered user interactions.
(4): While convergence criteria not met Do
(5):   Sample real UI layouts from the historical data H.
(6):   Generate fake UI layouts using Equation (5).
(7):   Train the discriminator by maximizing its ability to distinguish real from generated layouts using Equation (6).
(8):   Train the generator by updating its parameters to fool the discriminator using Equation (8).
(9):   Update the weights of G and D using gradient descent.
(10):  Monitor training loss and adjust hyperparameters if needed.
(11): End While
(12): Select the best UI layouts UIopt based on engagement and retention metrics.
(13): Return UIopt.
End
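The adversarial objective behind Algorithm 2 can be made concrete with a short numerical sketch that evaluates the standard GAN value V(D, G) = E[log D(x)] + E[log(1 − D(G(z)))] for toy discriminator outputs. The probability values are invented for illustration, and no training is performed:

```python
import math

def gan_value(d_real, d_fake):
    """Evaluate the minimax value V(D, G). The discriminator wants this
    high; the generator wants the second term low, i.e., D(G(z)) near 1."""
    term_real = sum(math.log(p) for p in d_real) / len(d_real)      # E[log D(x)]
    term_fake = sum(math.log(1 - p) for p in d_fake) / len(d_fake)  # E[log(1 - D(G(z)))]
    return term_real + term_fake

# Strong discriminator: confident on real layouts, rejects generated ones.
v_strong = gan_value(d_real=[0.95, 0.9], d_fake=[0.05, 0.1])
# Fooled discriminator: generated layouts now pass as authentic.
v_fooled = gan_value(d_real=[0.95, 0.9], d_fake=[0.8, 0.9])
# v_strong > v_fooled: a better generator drives the value down,
# which is exactly the tension steps (7) and (8) alternate over.
```

In the full system, gradient updates replace these fixed probabilities, but the direction of each update follows this same value function.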
- Personalization Agent: Adapts content and recommendations in real time based on browsing history and user preferences to enhance engagement and sales.
- Conflict Resolution Agent (CRA): Resolves conflicts when multiple layout changes must be made simultaneously.
- Agt represents the set of UI adaptation agents.
- S denotes all possible UI states; each state describes UI elements and layout configurations.
- A state-feature mapping associates UI element states with interaction metrics.
- A contains the possible adaptation actions.
- T defines the transition probability between UI states.
- R represents the engagement-based reward function.
(1) Early Guidance Strategy (Algorithm 3)
Algorithm 3 Early Guidance for UI Layout Adaptation
Inputs: s: UI state; UAgt: Urgency Agent; CRA: Conflict Resolution Agent
Outputs: UIL: UI layout
Begin
(1): Foreach UI state s monitored by UBA Do
(2):   If device type = Mobile Then
(3):     If form interaction detected Then
(4):       Guide(π(s), UI Layout Agent) // UBA suggests increasing form field size.
(5):       UIL ← Enlarge Form Fields
(6):     If button hover detected Then
(7):       Guide(π(s), UI Layout Agent) // UBA advises the UI Agent to highlight buttons.
(8):       UIL ← Highlight Button
(9):   Else If device type = Desktop Then
(10):    If button click detected Then
(11):      Guide(π(s), UI Layout Agent) // UBA recommends changing the button color.
(12):      UIL ← Change Button Color
(13): End Foreach
(14): Return UIL
End
(2) Priority Guidance Strategy (Algorithm 4)
Algorithm 4 Priority Guidance for UI Layout Adaptation
Inputs: s: UI state; UAgt: Urgency Agent; CRA: Conflict Resolution Agent
Outputs: UIL: UI layout
Begin
(1): Foreach UI state s monitored by UBA Do
(2):   If urgent alert detected Then
(3):     Guide(π(s), UI Layout Agent) // Ensures the user responds to the alert immediately.
(4):     UIL ← Prioritize Alert (e.g., pop-up, emphasis)
(5):   End If
(6): End Foreach
(7): Return UIL
End
(3) Predictive Guidance Strategy for UI Layout Adaptation (Algorithm 5)
Algorithm 5 Predictive Guidance for UI Layout Adaptation
Inputs: UEngLvl: user engagement level; ScrSize: screen size; PastInt: past interaction history; CtxAw: external factors (e.g., location, time of day); UI: UI application // Manages dynamic UI refinements
Outputs: UIAdapt: UI layout adaptation applied before the user requests it
Begin
(1): Foreach UI state s in the system Do
(2):   If UEngLvl is high OR ScrSize indicates mobile OR PastInt suggests an upcoming action Then
(3):     Apply UIAdapt (predictive UI adjustments)
(4):   End If
(5): End Foreach
(6): Return UIAdapt
End
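The RL-Based Optimizer's Q-learning refinement behind these guidance strategies can be sketched with a tabular update. The states, actions, and reward values below are illustrative stand-ins for the paper's UI states and engagement rewards, and the loop is simplified to a one-step (contextual-bandit) form:

```python
import random

# Hypothetical UI states and adaptation actions (stand-ins for Section 4.3.5).
STATES = ["mobile_form", "mobile_button", "desktop_button"]
ACTIONS = ["enlarge_fields", "highlight_button", "change_color"]

# Engagement rewards mirroring the early-guidance rules (invented values).
REWARD = {("mobile_form", "enlarge_fields"): 1.0,
          ("mobile_button", "highlight_button"): 1.0,
          ("desktop_button", "change_color"): 1.0}

def train(episodes=3000, alpha=0.2, eps=0.3, seed=1):
    """One-step Q-learning (gamma = 0 for brevity): Q(s,a) += alpha*(r - Q(s,a)).
    Full Q-learning would add a gamma * max_a' Q(s', a') bootstrap term."""
    rng = random.Random(seed)
    Q = {(s, a): 0.0 for s in STATES for a in ACTIONS}
    for _ in range(episodes):
        s = rng.choice(STATES)                    # a user interaction arrives
        if rng.random() < eps:                    # epsilon-greedy exploration
            a = rng.choice(ACTIONS)
        else:
            a = max(ACTIONS, key=lambda act: Q[(s, act)])
        r = REWARD.get((s, a), -0.1)              # small penalty for irrelevant changes
        Q[(s, a)] += alpha * (r - Q[(s, a)])
    return Q

Q = train()
# Greedy policy: the adaptation each state has learned to prefer.
best = {s: max(ACTIONS, key=lambda a: Q[(s, a)]) for s in STATES}
```

After training, the greedy policy reproduces the rule-based guidance (e.g., enlarging form fields on mobile), showing how learned rewards can recover and then refine hand-written adaptation rules.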
4.3.6. (Re) Deployment and Real-Time Monitoring
5. Implementation and User Scenario
5.1. Implementation
5.2. Functional Details
5.3. Diverse User Scenarios: Saudi Arabian E-Commerce
5.4. Technical Validation and Experimental Results
1. User Behavior Clustering and Persona Modeling
- The first evaluation focuses on the clustering of user behaviors and personas. We conducted tests using an e-commerce user behavior dataset to assess the effectiveness of the proposed KMeans++-based approach in terms of cluster quality and execution speed. We compared our method against state-of-the-art techniques: KMeans [16], Density-Based Spatial Clustering of Applications with Noise (DBSCAN) [17], and agglomerative clustering [18]. The objective was to highlight the superior performance of our approach in solving the dynamic UI adaptation problem by improving clustering accuracy and execution efficiency. The selection of KMeans++ and DBSCAN was motivated by their complementary strengths in handling different aspects of user behavior analysis, which we elaborate on below:
- KMeans++ for Efficiency and Scalability: KMeans++ is prioritized for its computational efficiency in centroid initialization, which ensures faster convergence while maintaining clustering accuracy, a critical requirement for real-time adaptive systems. The algorithm's scalability makes it well suited for processing large-scale user interaction data, and its widespread adoption in behavior analysis provides a robust baseline for evaluation.
- DBSCAN for Robustness to Irregular Patterns: To address potential limitations of centroid-based methods, we incorporated DBSCAN as a comparative density-based approach. Its ability to (1) identify arbitrarily shaped clusters, (2) detect and isolate noisy or atypical user behaviors, and (3) operate without predefining the number of clusters makes it invaluable for capturing nuanced or irregular behavior patterns that KMeans++ might overlook.
2. Dynamic UI Adaptation and Performance Comparison
- The second evaluation focuses on the dynamic UI adaptation problem, conducting a comparative analysis involving AI-MDD-UX, rule-based adaptation, heuristic-driven UI optimization, and traditional machine learning-based adaptation methods. This comparative analysis aims to demonstrate the effectiveness of AI-MDD-UX in enhancing user experiences and addressing dynamic UI adaptation challenges. Performance was evaluated using key metrics, including cluster quality, execution speed, user engagement, and layout adaptability. We provide strong evidence of AI-MDD-UX’s efficiency and suitability for optimizing UI adaptations across various user behaviors and real-world scenarios.
5.4.1. Datasets
5.4.2. User Feedback Collection
- Direct Feedback via Surveys: A structured survey is designed to capture users' subjective evaluations across key interface dimensions, including ease of navigation, visual appeal, responsiveness, content relevance, and overall satisfaction. Each dimension is rated using a 5-point Likert scale ranging from "Strongly Disagree" to "Strongly Agree". These surveys are contextually triggered, typically following the completion of key actions, such as purchase and product comparison. The responses are stored in a feedback matrix F = [f_ij], where i represents the user and j represents the corresponding UX dimension.
- Indirect Feedback via Interaction Logs: Behavioral signals such as time spent on the UI, click-through rates, drop-off points, scroll depth, and hesitation metrics are captured anonymously through embedded tracking scripts. These signals are then associated with specific UX dimensions using heuristic mapping rules. For instance, prolonged hesitation during the checkout process may be interpreted as reduced ease of navigation.
5.4.3. Evaluation Criteria and Metrics
- 1.
- Conversion Rate (): Conversion rate measures the percentage of users who successfully complete a target task (e.g., purchase, sign-up):
- : Number of successful conversions (e.g., purchases)
- : Number of unique users
- 2.
- Click-Through Rate (): CTR reflects user interest and engagement by quantifying how often users click on actionable elements:
- : Number of user’s clicks on actionable elements.
- : Number of times elements were shown.
- 3.
- Session Duration (): Session duration measures the average length of time users remain actively engaged with the interface:
- : Duration of session .
- : Number of sessions.
- 4.
- Task Completion Time (): This metric captures the average time users take to complete predefined tasks. Faster completion is considered indicative of a more efficient and user-friendly interface:
- : Average task completion times under static and adaptive conditions.
- 5.
- User Satisfaction (): User satisfaction is derived from direct feedback collected via Likert-scale surveys (see Section 5.4.2). Users rate their experience across key aspects such as ease of navigation, content relevance, and visual appeal. Satisfaction is calculated as an average rating per user across dimensions:
- : rating of user for UX aspect .
- N: number of UX aspects evaluated.
- 6.
- Navigation Efficiency (): As described earlier, navigation efficiency evaluates the directness of user paths during task execution:
- : Optimal number of steps to complete the task.
- : Observed number of steps.
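The metrics in this subsection translate directly into code. The helper names and the toy numbers below are illustrative, not values from the paper's experiments:

```python
def conversion_rate(n_conversions, n_users):
    """CR: percentage of users completing the target task."""
    return 100.0 * n_conversions / n_users

def click_through_rate(n_clicks, n_impressions):
    """CTR: clicks on actionable elements per impression, as a percentage."""
    return 100.0 * n_clicks / n_impressions

def avg_session_duration(durations):
    """SD: mean session length (e.g., in minutes)."""
    return sum(durations) / len(durations)

def user_satisfaction(ratings):
    """US: mean Likert rating across the evaluated UX aspects."""
    return sum(ratings) / len(ratings)

def navigation_efficiency(optimal_steps, observed_steps):
    """NE: optimal over observed path length; 1.0 means a perfectly direct path."""
    return optimal_steps / observed_steps

# Toy examples: 423 conversions among 1000 users; a 4-step task done in 5 steps.
cr = conversion_rate(423, 1000)
ne = navigation_efficiency(4, 5)
```

Comparing these values between static and adaptive conditions is what the subsequent result tables report.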
5.4.4. Results of Simulated User Behavior
- Evaluation and Comparison of Cluster Quality
- Evaluation and Comparison of Execution Speed
5.4.5. Results of UI/UX Performance
- Cross-Platform Performance Evaluation
- Evaluation with Simulated User Personas
- Evaluation with Various Business Domains
- Comparative Analysis with Other Approaches
5.4.6. Discussion
(1) Scalability for Emerging Platforms: The current implementation supports UI adaptation across Web (React.js), Mobile (Flutter), and VR interfaces, but extending adaptation to voice-based UIs, wearables, and automotive systems requires platform-specific customization and broader training datasets. Future work will focus on developing platform-agnostic interaction models that unify multimodal inputs (e.g., gesture, voice, and haptic feedback) for AI processing while exploring cross-modal transfer learning to adapt existing interface knowledge across different platforms, such as translating VR interface patterns to voice-controlled environments.
(2) Computational Efficiency: The combination of GAN-based UI generation, Multi-Agent Systems (MAS), and Reinforcement Learning (RL) imposes significant computational costs. While these components enhance UI adaptability, achieving real-time performance on large-scale platforms may necessitate high-performance computing resources or cloud-based solutions to efficiently handle the workload. Future enhancements will prioritize architectural optimization, including lightweight GAN variants and distilled RL policies to reduce inference time, combined with hybrid cloud-edge processing that strategically distributes computational workloads, keeping resource-intensive UI generation on cloud servers while preserving real-time adaptation capabilities on end-user devices. Additionally, we will leverage parallel computation, particularly within RL components, to improve throughput and reduce latency, as discussed in [23].
(3) Latency in Real-Time Adaptation: Despite AI-MDD-UX's ability to dynamically track and respond to user interactions, latency issues may arise in high-traffic scenarios. The computational demands of KMeans++ clustering, RL-based policy learning, and UI rendering updates contribute to potential delays. Future improvements will integrate intelligent caching mechanisms and edge computing infrastructure to optimize processing time while reducing latency. This will be combined with an asynchronous RL architecture that separates policy updates from real-time interaction handling, enabling continuous model improvement without compromising system responsiveness.
(4) Interpretability and Explainability Issues: The black-box nature of GANs and RL-based UI adaptation presents challenges in explainability, making it difficult for UX designers and business stakeholders to fully understand how UI changes are generated. Enhancing transparency through Explainable AI (XAI) techniques could improve trust and provide actionable insights into UI decision-making. Future development will integrate XAI tools, particularly SHAP-based visualizations, to make UI adaptation decisions transparent. We will concurrently assess how these system explanations impact user trust, evaluating statements such as "this interface adjusted based on your frequent interactions with a given feature".
5.4.7. Practical Implementation Challenges and Ethical Considerations
(1) Data Privacy and Security
(2) Scalability and Platform Compatibility
6. Conclusions
Author Contributions
Funding
Data Availability Statement
Acknowledgments
Conflicts of Interest
References
- Sharma, R.; Srivastva, S.; Fatima, S. E-commerce and digital transformation: Trends, challenges, and implications. Int. J. Multidiscip. Res. (IJFMR) 2023, 5, 1–9.
- Nationwide Group. Unveiling the Digital Storefront: How and Why You Should Leverage an E-Commerce-Enabled Website. 2024. Available online: https://www.nationwidegroup.org/unveiling-the-digital-storefront-how-and-why-you-should-leverage-an-e-commerce-enabled-website/ (accessed on 22 March 2025).
- Zhu, P.; Wang, X.; Sang, Z.; Yuan, A.; Cao, G. Context-aware Heterogeneous Graph Attention Network for User Behavior Prediction in Local Consumer Service Platform. arXiv 2021, arXiv:2106.14652.
- Sun, Q.; Xue, Y.; Song, Z. Adaptive user interface generation through reinforcement learning: A data-driven approach to personalization and optimization. arXiv 2024, arXiv:2412.16837.
- Chunchu, A. Adaptive User Interfaces: Enhancing User Experience through Dynamic Interaction. Int. J. Res. Appl. Sci. Eng. Technol. 2024, 12, 949–956.
- Spurlock, J. Bootstrap: Responsive Web Development; O'Reilly Media, Inc.: Sebastopol, CA, USA, 2013.
- Peissner, M.; Edlin-White, R. User control in adaptive user interfaces for accessibility. In Proceedings of the Conference on Human-Computer Interaction (IFIP), Cape Town, South Africa, 2–6 September 2013; pp. 623–640.
- Zhang, L.; Qu, Q.X.; Chao, W.Y.; Duffy, V.G. Investigating the combination of adaptive UIs and adaptable UIs for improving usability and user performance of complex UIs. Int. J. Hum.-Comput. Interact. 2020, 36, 82–94.
- Abrahão, S.; Insfran, E.; Sluÿters, A.; Vanderdonckt, J. Model-based intelligent user interface adaptation: Challenges and future directions. Softw. Syst. Model. 2021, 20, 1335–1349.
- Jean, G. Dynamic UI/UX Adaptation in Mobile Apps Using Machine Learning for Individualized User Experiences. December 2024. Available online: https://www.researchgate.net/publication/386376034_Dynamic_UIUX_Adaptation_in_Mobile_Apps_Using_Machine_Learning_for_Individualized_User_Experiences (accessed on 22 March 2025).
- Zosimov, V.V.; Khrystodorov, O.V.; Bulgakova, O.S. Dynamically changing user interfaces: Software solutions based on automatically collected user information. Program. Comput. Softw. 2018, 44, 492–498.
- Mezhoudi, N.; Vanderdonckt, J. Toward a task-driven intelligent GUI adaptation by mixed-initiative. Int. J. Hum.-Comput. Interact. 2021, 37, 445–458.
- Yigitbas, E.; Jovanovikj, I.; Biermeier, K.; Sauer, S.; Engels, G. Integrated model-driven development of self-adaptive user interfaces. Softw. Syst. Model. 2020, 19, 1057–1081.
- Nandoskar, V.; Pandya, R.; Bhangale, D.; Dhruv, A. Automated User Interface Generation using Generative Adversarial Networks. Int. J. Comput. Appl. 2021, 174, 4–9.
- Issam, Z.I.D.I.; Al-Omani, M.; Aldhafeeri, K. A new approach based on the hybridization of simulated annealing algorithm and tabu search to solve the static ambulance routing problem. Procedia Comput. Sci. 2019, 159, 1216–1228.
- Ester, M.; Kriegel, H.P.; Sander, J.; Xu, X. A density-based algorithm for discovering clusters in large spatial databases with noise. In Proceedings of the 2nd International Conference on Knowledge Discovery and Data Mining (KDD-96), Portland, Oregon, 2–4 August 1996; pp. 226–231.
- Lei, Y.; Yu, D.; Bin, Z.; Yang, Y. Interactive K-Means Clustering Method Based on User Behavior for Different Analysis Target in Medicine. Comput. Math. Methods Med. 2017, 1, 4915828.
- Müllner, D. Modern hierarchical, agglomerative clustering algorithms. arXiv 2011, arXiv:1109.2378.
- Liu, Y.; Zhao, M.; Yang, M.; Eswaran, D.; Dhillon, I. Behavior Sequence Transformer for E-commerce Recommendation in Alibaba. In Proceedings of the 27th ACM SIGKDD Conference on Knowledge Discovery and Data Mining (KDD '21), Virtual Event, Singapore, 14–18 August 2021; pp. 2724–2732.
- Paskalev, P.; Serafimova, I. Rule based framework for intelligent GUI adaptation. In Proceedings of the 12th International Conference on Computer Systems and Technologies (CompSysTech '11), Vienna, Austria, 16–17 June 2011; pp. 101–108.
- Karchoud, R.; Roose, P.; Dalmau, M.; Illarramendi, A.; Ilarri, S. One app to rule them all: Collaborative injection of situations in an adaptable context-aware application. J. Ambient. Intell. Humaniz. Comput. 2019, 10, 4679–4692.
- Zhou, S.; Zheng, W.; Xu, Y.; Liu, Y. Enhancing user experience in VR environments through AI-driven adaptive UI design. J. Artif. Intell. Gen. Sci. 2024, 6, 59–82.
- Anokhin, I.; Rishav, R.; Riemer, M.; Chung, S.; Rish, I.; Kahou, S.E. Handling Delay in Real-Time Reinforcement Learning. arXiv 2025, arXiv:2503.23478.
Related Works | Adaptation Type | Technique Used | Key Limitation |
---|---|---|---|
[7] | Static | Rule-based AUI | Limited runtime flexibility |
[8] | Static | Hybrid Adaptive/Adaptable UI | Lacks deep personalization |
[9] | Static | Model-based UI | Limited AI integration |
[10] | Dynamic | Reinforcement Learning (RL) | No MDD integration |
[11] | Dynamic | Rule-based and Pseudo-ID | Limited to activity logs |
[12] | Dynamic | TADAP (ML and Feedback) | UX not fully automated |
[13] | Dynamic | DSLs for MDD | No AI or learning ability |
[14] | Dynamic | GANs | Manual refinement required |
Proposed | Static and Dynamic | GANs + RL + KMeans++ + MDD | Addresses limitations of existing studies
Aspect | Generative AI's Role | MDD's Role |
---|---|---|
UI Personalization | Learns user behavior and preferences | Defines adaptable UI structures through models. |
UI Generation | Generates dynamic UI designs | Transforms models into functional UI components |
UI Optimization | Uses RL to refine UI layouts | Automates adaptation through transformation rules |
UI Adaptation | Adjusts UI in response to user actions | Ensures structured evolution of UI models |
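The division of labor in the table above can be sketched in code: the MDD side holds a platform-independent UI model and a table of transformation rules, while the AI side supplies a personalization variant that parameterizes the transformation. This is a minimal illustrative sketch, not the paper's implementation; all names (`ui_model`, `TRANSFORM_RULES`, the variant labels) are hypothetical.

```python
# MDD side: model-to-component transformation rules. Each rule maps an
# abstract UI-model node to a concrete component descriptor, parameterized
# by an AI-chosen personalization variant. Names are illustrative.
TRANSFORM_RULES = {
    "product_grid": lambda node, variant: {
        "component": "GridView",
        "columns": 2 if variant == "compact" else 4,
        "items": node["items"],
    },
    "cta_button": lambda node, variant: {
        "component": "Button",
        "label": node["label"],
        "style": "prominent" if variant == "high_intent" else "default",
    },
}

def transform(ui_model, variant):
    """Apply the model-to-component transformation (the MDD step)."""
    return [TRANSFORM_RULES[node["type"]](node, variant) for node in ui_model]

# A tiny platform-independent UI model.
ui_model = [
    {"type": "product_grid", "items": ["p1", "p2", "p3"]},
    {"type": "cta_button", "label": "Buy now"},
]

# The AI side (clustering/GAN/RL) would select `variant`; hard-coded here.
components = transform(ui_model, variant="high_intent")
```

Keeping generation (the rule table) separate from personalization (the variant) is what lets MDD guarantee structural consistency while the AI layer varies only the parameters it is allowed to touch.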
Agent | Adaptation Rules
---|---|
User Behavior Agent | R1, R2, R7
Personalization Agent | R3
CTA Monitoring Agent | R4
UI Visibility Agent | R5
Feedback Processing Agent | R6
Contextual Information Agent | R7
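The agent-to-rule ownership above can be read as a dispatch table: each agent owns a set of condition/action rules and fires those whose condition matches an incoming interaction event. The rule bodies (R1–R7) are not reproduced in the text, so the single rule sketched below is hypothetical; only the agent-rule ownership mirrors the table.

```python
# Agent-to-rule ownership, mirroring the table above.
AGENT_RULES = {
    "UserBehaviorAgent": ["R1", "R2", "R7"],
    "PersonalizationAgent": ["R3"],
    "CTAMonitoringAgent": ["R4"],
    "UIVisibilityAgent": ["R5"],
    "FeedbackProcessingAgent": ["R6"],
    "ContextualInformationAgent": ["R7"],
}

def fire_rules(agent, event, rules):
    """Run every rule owned by `agent` whose condition matches `event`."""
    fired = []
    for rule_id in AGENT_RULES.get(agent, []):
        if rule_id not in rules:
            continue  # rule body not sketched here
        cond, action = rules[rule_id]
        if cond(event):
            fired.append((rule_id, action(event)))
    return fired

# Illustrative rule body: R4 boosts CTA prominence on prolonged hesitation.
rules = {
    "R4": (
        lambda e: e.get("hover_time_s", 0) > 5,
        lambda e: {"cta_style": "prominent"},
    ),
}

fired = fire_rules("CTAMonitoringAgent", {"hover_time_s": 7.2}, rules)
```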
Instance Name | Behavioral Features (Count) | Demographic Features (Count) | Engagement Features (Count) | Number of Users |
---|---|---|---|---|
Inst-1 | 10 | 6 | 8 | 10,000
Inst-2 | 12 | 7 | 10 | 25,000
Inst-3 | 15 | 8 | 12 | 50,000
Inst-4 | 18 | 10 | 14 | 100,000
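The behavior-driven user clustering uses KMeans++; its distinguishing step is the seeding, which spreads initial cluster centres by sampling each new centre with probability proportional to its squared distance from the nearest centre already chosen. A minimal pure-Python sketch of that seeding step follows; the toy two-feature vectors are illustrative (the real instances use 24–42 features per user, as in the table above).

```python
import random

def kmeans_pp_init(points, k, rng=None):
    """k-means++ seeding: first centre uniform at random, each later
    centre sampled with probability proportional to its squared
    distance to the nearest centre chosen so far."""
    rng = rng or random.Random(0)
    centres = [rng.choice(points)]
    while len(centres) < k:
        # Squared distance of each point to its nearest existing centre.
        d2 = [min(sum((a - b) ** 2 for a, b in zip(p, c)) for c in centres)
              for p in points]
        # Weighted sampling proportional to d2.
        r = rng.uniform(0, sum(d2))
        acc = 0.0
        for p, w in zip(points, d2):
            acc += w
            if acc >= r:
                centres.append(p)
                break
    return centres

# Toy behavioural vectors, e.g. (clicks per session, dwell time).
users = [(0.1, 0.2), (0.15, 0.22), (5.0, 4.8), (5.2, 5.1)]
centres = kmeans_pp_init(users, k=2)
```

In practice one would hand this seeding to a full k-means loop (e.g. scikit-learn's `KMeans(init="k-means++")` does both steps); the point of the sketch is only the distance-weighted sampling that makes the clustering robust to bad initializations.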
Platform Name | Description | Platform | User Type | Interaction Complexity |
---|---|---|---|---|
E-commerce Web | Online shopping platform | React.js | Consumers | High
Mobile App | Mobile shopping app | Flutter | App Users | Medium
VR Store | VR shopping experience | WebVR | VR Users | Very High
Smartwatch UI | Minimal shopping interface | watchOS | Go Users | Low
Automotive UI | In-car shopping dashboard | In-car OS | Drivers | Medium
Platform Name | CTR (%) | Conversion Rate (%) | Session Duration (min) | User Experience (0–10) | User Satisfaction (%)
---|---|---|---|---|---|
E-commerce Web | 12.5 | 42.3 | 5.8 | 9.0 | 91.2
Mobile App | 10.3 | 39.2 | 4.9 | 8.5 | 88.4
VR Store | 14.7 | 48.6 | 7.4 | 9.4 | 94.7
Smartwatch UI | 8.9 | 32.1 | 2.7 | 8.1 | 85.5
Automotive UI | 11.1 | 37.8 | 6.1 | 8.8 | 90.3
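For reference, CTR and conversion rate in tables like the one above are conventionally computed as ratios over interaction events. The paper does not give its exact formulas, so the sketch below uses the standard definitions (clicks over impressions, purchases over sessions); the event counts are made up for illustration.

```python
def engagement_metrics(impressions, clicks, sessions, purchases):
    """Standard engagement ratios, reported as percentages.
    CTR = clicks / impressions; conversion = purchases / sessions."""
    return {
        "ctr_pct": round(100 * clicks / impressions, 1),
        "conversion_pct": round(100 * purchases / sessions, 1),
    }

# Hypothetical event counts for one platform over an evaluation window.
m = engagement_metrics(impressions=8000, clicks=1000, sessions=500, purchases=212)
# -> ctr_pct = 12.5, conversion_pct = 42.4
```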
User Type | CTR (%) | Conversion Rate (%) | Session Duration (min) | User Experience (0–10) | User Satisfaction (%)
---|---|---|---|---|---|
Power User | 15.2 | 50.1 | 3.5 | 9.6 | 93.8
Cautious Browser | 10.4 | 41.5 | 6.8 | 9.1 | 90.1
First-Time User | 9.6 | 34.2 | 7.2 | 8.6 | 86.7
Accessibility User | 8.8 | 30.5 | 8.3 | 8.9 | 88.4
Instance Name | Evaluation Metric | Rule-Based UI Adaptation | Static UI Design (Bootstrap) | GAN Model | Proposed Approach
---|---|---|---|---|---|
E-commerce Web | CTR (%) | 9.8 | 8.2 | 11.7 | 12.5
 | Conversion Rate (%) | 35.2 | 29.1 | 40.5 | 42.3
 | Session Duration (min) | 4.6 | 3.9 | 5.5 | 5.8
 | User Experience (0–10) | 7.8 | 7.0 | 8.8 | 9.0
 | User Satisfaction (%) | 79.4 | 72.5 | 88.9 | 91.2
Mobile App | CTR (%) | 8.7 | 7.6 | 10.1 | 10.3
 | Conversion Rate (%) | 32.6 | 26.4 | 37.1 | 39.2
 | Session Duration (min) | 3.8 | 3.5 | 4.7 | 4.9
 | User Experience (0–10) | 7.4 | 6.8 | 7.9 | 8.5
 | User Satisfaction (%) | 75.6 | 69.3 | 79.8 | 88.4
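The gains in the comparison table are easiest to read as relative improvements over each baseline. The snippet below computes them directly from the CTR row of the E-commerce Web instance; the baseline labels are shorthand for the table's column headers.

```python
# CTR (%) of each baseline for the E-commerce Web instance, from the table.
baselines = {"Rule-based": 9.8, "Static (Bootstrap)": 8.2, "GAN only": 11.7}
proposed = 12.5

# Relative improvement of the proposed approach over each baseline.
improvement = {name: round(100 * (proposed - ctr) / ctr, 1)
               for name, ctr in baselines.items()}
# -> {'Rule-based': 27.6, 'Static (Bootstrap)': 52.4, 'GAN only': 6.8}
```

Read this way, the largest margin is over the static Bootstrap design, while the gap to the GAN-only baseline is small for CTR; the satisfaction rows show the proposed approach's larger advantage over GAN alone.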
© 2025 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
Alti, A.; Lakehal, A. AI-MDD-UX: Revolutionizing E-Commerce User Experience with Generative AI and Model-Driven Development. Future Internet 2025, 17, 180. https://doi.org/10.3390/fi17040180