Article

LIME-Mine: Explainable Machine Learning for User Behavior Analysis in IoT Applications

1 College of Big Data, Yunnan Agricultural University, Kunming 650201, China
2 The Key Laboratory for Crop Production and Smart Agriculture of Yunnan Province, Kunming 650201, China
3 College of Electrical and Mechanical, Kunming Metallurgy College, Kunming 650033, China
* Author to whom correspondence should be addressed.
Electronics 2024, 13(16), 3234; https://doi.org/10.3390/electronics13163234
Submission received: 26 June 2024 / Revised: 30 July 2024 / Accepted: 14 August 2024 / Published: 15 August 2024
(This article belongs to the Special Issue Explainable Artificial Intelligence for IoT and Smart Systems)

Abstract

In Internet of Things (IoT) applications, user behavior is influenced by factors such as network structure, user activity, and location. Extracting valuable patterns from user activity traces can lead to the development of smarter, more personalized IoT applications and improved user experience. This paper proposes a LIME-based user behavior preference mining algorithm that leverages Explainable AI (XAI) techniques to interpret user behavior data and extract user preferences. By training a black-box neural network model to predict user behavior and using LIME to approximate its predictions with a local linear model, we identify key features influencing user behavior. This analysis reveals user behavioral patterns and preferences, such as habits at specific times, locations, and device states. Incorporating user behavioral information into the resource scheduling process, combined with a feedback mechanism, establishes a network that actively discovers user demand. Our approach, utilizing edge computing capabilities, continuously fine-tunes and optimizes resource scheduling, actively adapting to user perceptions. Experimental results demonstrate the effectiveness of feedback control in satisfying diverse user resource requests, enhancing user satisfaction, and improving system resource utilization.

1. Introduction

The Internet of Things (IoT) has revolutionized our daily lives, connecting devices and enabling data-driven services that enhance user experience and efficiency. Understanding user behavior within this interconnected ecosystem is crucial for developing personalized and intelligent IoT applications. Mining user application behavior patterns of IoT devices allows us to gain deeper insights into user needs, driving product and service innovation, and ultimately improving user satisfaction.
Traditional approaches such as behavioral pattern mining have proven effective in identifying user habits. However, there is a growing need to explore how these insights can be used to discover similar users and to personalize services further. Moreover, the data uncertainty inherent in IoT environments presents unique challenges for extracting meaningful information.
Recent research in IoT data mining has focused on addressing these challenges. For instance, Naha et al. (2021) proposed a dynamic resource allocation algorithm in fog computing to minimize processing time and service level agreement violations based on user behavior [1]. Irizar-Arrieta et al. (2020) explored designing interactive everyday objects that promote sustainable user behavior, focusing on reducing energy consumption [2]. Sayakkara et al. (2021) proposed a comprehensive dataset representing popular IoT devices and smartphones for forensic analysis [3]. Jamil et al. (2021) introduced a secure fitness framework utilizing blockchain networks and machine learning algorithms to enhance security and extract insights from IoT data [4]. While these studies highlight the importance of data mining, pattern recognition, user authentication, resource allocation, and privacy protection in IoT applications, there remains a gap in understanding the underlying factors influencing user behavior.
Furthermore, the lack of interpretability in traditional machine learning models has hindered their wider adoption in IoT applications. It is challenging to understand the logic behind model predictions, limiting their application value and user trust. Explainable Artificial Intelligence (XAI) has emerged as a promising solution, offering the ability to explain model predictions and reveal how models infer conclusions from user data. This transparency enables users to understand the decision-making process of the model, increasing trust and encouraging the adoption of intelligent services [5,6,7,8,9,10,11,12,13]. XAI techniques, such as Local Interpretable Model-Agnostic Explanations (LIME), can identify key features influencing user behavior, such as time, location, weather, and device status, and analyze their relationships. This information can be used to discover user behavior patterns in specific scenarios, enabling the design of more personalized services. For instance, recommending relevant devices or features based on user habits at specific times can enhance user experience [14,15,16,17,18,19,20,21,22,23].
Motivated by these challenges and opportunities, this paper proposes a LIME-based user behavior preference mining algorithm to interpret user behavior data and extract user preferences. Our approach leverages the capabilities of XAI to address the limitations of traditional machine learning models in IoT applications. By integrating user behavioral information into resource scheduling and incorporating a feedback mechanism, we aim to establish an active discovery network of user demand and continuously optimize resource allocation, aligning it with user perceptions. Experimental results demonstrate the effectiveness of our approach in satisfying diverse user resource requests, enhancing user satisfaction, and improving system resource utilization.

2. Mining User Behavior Patterns for IoT Device End-Use Applications

Current research on edge cloud deployment for IoT device terminal services faces challenges in effectively addressing user behavior patterns and optimizing resource allocation. This study leverages Explainable Artificial Intelligence (XAI) to analyze user behavior patterns within IoT device applications, aiming to develop an efficient deployment mechanism for IoT services. This approach emphasizes mobile service migration, resource allocation strategies, and collaborative methods between edge and cloud computing platforms.

2.1. Mining User Behavior Patterns

The data used in this study for user behavior analysis of IoT device applications comprise two parts: records of the relationship chains between users, and records of users’ dissemination behaviors for different content within IoT device applications, e.g., actions such as “introducing” and “forwarding” in the system. From these two types of data, we can construct a “dissemination tree” of multimedia content in the network, i.e., a graph structure of the relationship chains along which content spreads among different users. From the evolution of the content dissemination tree, the following patterns are mined: the temporal pattern, i.e., how content dissemination within IoT device applications evolves over time; the regional pattern, i.e., how the evolution of content dissemination differs across regions; and the relationship pattern, i.e., the heterogeneity in the evolution of IoT application content dissemination across different relationship types.
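As an illustration of this construction, the following minimal Python sketch (not the authors’ implementation) builds a dissemination tree from hypothetical forwarding records and extracts a simple temporal pattern; the record format and all names are invented for illustration.

# Hedged sketch: building a content "dissemination tree" from forwarding
# records. The (content_id, from_user, to_user, timestamp) record format
# and all values are hypothetical.
from collections import defaultdict
from datetime import datetime

records = [
    ("video42", "alice", "bob",   "2024-05-01 08:15"),
    ("video42", "bob",   "carol", "2024-05-01 08:40"),
    ("video42", "bob",   "dave",  "2024-05-01 21:05"),
]

tree = defaultdict(lambda: defaultdict(list))  # content -> parent -> [(child, time)]
per_hour = defaultdict(int)                    # temporal pattern: forwards per hour

for content, src, dst, ts in records:
    t = datetime.strptime(ts, "%Y-%m-%d %H:%M")
    tree[content][src].append((dst, t))
    per_hour[t.hour] += 1

print(dict(tree["video42"]))  # edges of the dissemination tree for one item
print(dict(per_hour))         # e.g., {8: 2, 21: 1}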
We further study how user behavior and the patterns of content generation, processing, and dissemination guide the resource requirements of IoT device application service deployment. Using the correspondence between actual resource consumption and user behavior observed in the data, we explore the correlation between user behavior patterns and the resource demand of service deployment, which forms the basis for edge cloud service deployment.
The resource requirement analysis for content processing and distribution on IoT device terminals is based on content processing rules and content distribution attribute data, such as the CPU time required per second of video during transcoding and the average bit rate of video distribution. From these data, the computational and network resource requirements for processing and distributing each piece of content are quantified, yielding a resource demand function for content processing and distribution. Combining user behavior patterns with this quantification, the resource demand of IoT device terminal service deployment is modeled using regression analysis and machine learning tools, producing a computational resource demand model and a network resource demand model. Because today’s IoT device terminal content is time-sensitive, with short dissemination windows, and because computation and network resources are the system performance bottlenecks in edge cloud deployment, this study concentrates first on optimizing computation and network resources; demand analysis for storage resources in IoT device terminal service deployment will be considered as the work progresses.
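To illustrate the kind of regression modeling described above, the sketch below fits linear models that map behavioral and content features to computational and network demand; the features, coefficients, and synthetic data are assumptions for illustration, not the paper’s dataset.

# Hedged sketch: regression-based resource demand models. All features and
# synthetic data are hypothetical placeholders.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
n = 200
X = np.column_stack([
    rng.integers(1, 100, n).astype(float),  # requests per hour
    rng.uniform(10, 300, n),                # mean video length (s)
    rng.uniform(0.5, 8.0, n),               # mean bit rate (Mbit/s)
])
# Hypothetical ground truth for CPU-seconds and network Mbit/s demand.
cpu_demand = 0.5 * X[:, 0] + 0.1 * X[:, 1] + rng.normal(0, 2, n)
net_demand = 0.8 * X[:, 0] + 2.0 * X[:, 2] + rng.normal(0, 1, n)

cpu_model = LinearRegression().fit(X, cpu_demand)  # computational demand model
net_model = LinearRegression().fit(X, net_demand)  # network demand model
print(cpu_model.predict([[50, 120, 4.0]]), net_model.predict([[50, 120, 4.0]]))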

2.2. Co-Migration Strategy between Application Side and Service Side of IoT Devices

Migration on the application side of IoT devices mainly involves switching different application modules between user devices (e.g., sensing devices) and edge cloud devices (e.g., smart routers), so as to increase computation speed and decrease the power consumption of user devices. Migration on the service side refers to switching the servers for IoT device applications from traditional data centers to edge cloud devices, so as to decrease user–server latency and increase the bandwidth between them [24,25,26,27].
Using individual users’ access patterns to content in IoT devices and applications, we study personalized deployment methods on the application side, so that users can obtain resources from the edge cloud with the best quality of service when actually accessing IoT device application content.
The migration of IoT device applications is based on user preferences, since different users have different degrees of preference for different IoT device end applications. Edge cloud services have obvious personalized characteristics; for example, smart router-based edge cloud services are usually located in a user’s home and serve only a limited number of family members. Using the perception of user preferences to achieve efficient migration and pre-allocation of IoT device application ends in the edge cloud is therefore an important means of edge cloud deployment for IoT device end-applications.
User preference matrix construction: Using user relationships, application similarity, and users’ visit histories to media applications, we form user relationship, application similarity, and user visit matrices for the most recent time window. Through matrix decomposition theory and its fast algorithms, we further study how to obtain user–application preference metrics (a code sketch follows this list).
Group Preference Synthesis: Through relationships, obtain user groups (e.g., a user and the friends who have most often used the same edge cloud hardware recently) and merge the application preferences of these user groups to obtain the group application preferences of that edge cloud device. As shown in Figure 1, techniques and methods related to group preferences and group content recommendation are utilized to address the preferences of group applications for that edge cloud device.
Optimal set generation: Due to the limited resources of the edge cloud device, the optimal set of applications for the device is selected by optimization theory and techniques based on the preferences of the edge cloud device for different applications. Meanwhile, the theoretical foundation of matrix decomposition and optimization will be extended in the research, which is expected to aid in the development of distributed optimization strategies and algorithms in the context of the edge cloud.
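To make the preference-matrix construction and optimal-set generation above concrete, the following sketch factorizes a hypothetical user–application visit matrix with non-negative matrix factorization and then greedily selects applications under an edge resource budget; the matrix, costs, and budget are invented for illustration.

# Hedged sketch: user-application preference via NMF, then greedy selection
# of an application set under an edge-device resource budget. All values
# are hypothetical.
import numpy as np
from sklearn.decomposition import NMF

# Rows: users in one group; columns: applications; entries: recent visits.
visits = np.array([
    [5.0, 0.0, 2.0, 0.0],
    [4.0, 1.0, 0.0, 0.0],
    [0.0, 3.0, 0.0, 4.0],
])
nmf = NMF(n_components=2, init="random", random_state=0, max_iter=500)
U = nmf.fit_transform(visits)   # user latent factors
A = nmf.components_             # application latent factors
pref = U @ A                    # dense user-application preference matrix

group_pref = pref.mean(axis=0)            # merged group preference per app
costs = np.array([2.0, 1.0, 3.0, 1.5])    # hypothetical resource cost per app
budget = 4.0                              # hypothetical edge resource budget

# Optimal set generation: greedy by preference-per-cost until budget is used.
chosen, used = [], 0.0
for app in np.argsort(-group_pref / costs):
    if used + costs[app] <= budget:
        chosen.append(int(app))
        used += costs[app]
print("applications deployed on the edge device:", chosen)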

3. LIME-Mine: An Explainable Machine Learning Approach

This section introduces LIME-Mine, an algorithm designed to extract and interpret user behavioral preferences from IoT data utilizing the Local Interpretable Model-Agnostic Explanations (LIME) technique. This approach facilitates the development of intelligent and user-centric resource allocation strategies within edge cloud environments for IoT applications.

3.1. LIME-Mine: An Explainable Machine Learning Approach for Mining User Behavioral Preferences in IoT Applications

The LIME-Mine algorithm leverages user behavioral data collected from IoT devices and applications. These data comprise a rich set of information capturing the dynamic interactions between users and the IoT ecosystem. The input data are structured as follows:
Timestamp: Each data entry includes a timestamp, indicating the precise time of the user’s interaction with the IoT device or application. This enables the analysis of temporal patterns in user behavior.
Location Data: The geographic location of the user at the time of interaction is recorded. These data are crucial for understanding the context of users’ actions and identifying location-specific preferences.
Device ID: The unique identifier of the IoT device involved in the interaction is recorded. This allows for tracking the frequency and patterns of device usage by individual users.
Function Usage Records: Detailed records of the specific functions or actions performed by the user on the IoT device or application are included. This provides a detailed understanding of the user’s interaction with the device and reveals the specific features or functionalities that are most relevant to them.
LIME-Mine generates a set of identified user behavioral patterns and preferences, providing insights into the factors influencing user actions. These insights are derived from the interpretable explanations provided by LIME for the predictions of the black-box model. The output of LIME-Mine can be summarized as follows:
Interpretable Feature Weights: LIME identifies the key features influencing user behavior by analyzing the feature weights of the local linear models generated for each user interaction. These weights indicate the relative importance of each feature in determining the predicted user action. For example, a high weight for “location” might suggest that a user’s location plays a significant role in their device usage patterns.
User Behavioral Patterns: By analyzing the feature weights and the specific values of those features associated with high weights, LIME-Mine can uncover recurring patterns in user behavior. These patterns may reveal, for example, that a user frequently uses a specific device at a particular time of day or when they are in a specific location.
User Preferences: LIME-Mine can identify the preferences of individual users by analyzing the features that consistently contribute to positive predictions. For instance, if a user consistently uses a specific application when they are at home, this suggests a preference for that application in a home setting.
The steps of the algorithm are as follows:
Step 1: Data Preprocessing: The initial step involves cleaning and preprocessing the collected user behavioral data. This process ensures data quality and consistency, addressing issues such as outlier removal, data normalization, and handling missing data points.
Step 2: Black-Box Model Training: A supervised multi-layer perceptron (MLP) neural network is trained to predict user behavior based on the preprocessed data. The MLP model used is a three-layer network with 128 neurons in the first hidden layer, 64 neurons in the second hidden layer, and an output layer with the number of neurons corresponding to the number of user actions to be predicted. The activation function used for the hidden layers is the ReLU function, while the output layer utilizes the softmax function for multi-class classification. The model is trained using the Adam optimizer with a learning rate of 0.001 and a batch size of 32. The model is trained for 50 epochs.
Step 3: Model Interpretation with LIME: The Local Interpretable Model-Agnostic Explanations (LIME) technique is employed to interpret the predictions made by the black-box model. For each user behavior data point, LIME generates a local linear model that approximates the predictions of the black-box model. This approach provides a simplified and interpretable explanation for each prediction, revealing the key factors driving the model’s output.
Step 4: Analyzing LIME Model Results: The feature weights of the LIME model are analyzed to identify the key features that influence user behavior. These features represent the most significant factors contributing to the predicted user actions, offering valuable insights into user preferences and decision-making processes.
Step 5: Application and Integration: The identified user behavioral preferences, user scheduling information, and other behavioral insights are integrated into the resource scheduling process for edge cloud platforms. This integration enables dynamic resource allocation based on user preferences, ensuring efficient resource utilization and enhancing user experience. A feedback mechanism is implemented to continuously fine-tune and optimize the scheduling process, adapting to evolving user behaviors and resource constraints. The pseudo-code form of the specific implementation of the LIME-Mine algorithm is shown in Algorithm 1.
Algorithm 1. LIME-Mine user behaviour analysis algorithm
# Data collection
user_behavior_data = collect_user_behavior_data()
# Data preprocessing
preprocessed_data = preprocess_data(user_behavior_data)
# LIME explainer training
lime_explainer = train_lime_explainer(preprocessed_data)
# Model training
model = train_model(preprocessed_data)
# User behavior analysis
predicted_behavior = model.predict(user_behavior_data)
lime_explanation = lime_explainer.explain_instance(user_behavior_data[0])
# Result interpretation and optimization
evaluate_model_performance(model, preprocessed_data, user_behavior_data)
optimize_model(model)
# Application deployment
deploy_model(model)

def collect_user_behavior_data():
    pass  # Implement data collection logic

def preprocess_data(data):
    pass  # Implement data preprocessing logic

def train_lime_explainer(data):
    pass  # Implement LIME explainer training logic

def train_model(data):
    pass  # Implement model training logic

def evaluate_model_performance(model, preprocessed_data, user_behavior_data):
    pass  # Implement model performance evaluation logic

def optimize_model(model):
    pass  # Implement model optimization logic

def deploy_model(model):
    pass  # Implement model deployment logic
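To show how Steps 2–4 of LIME-Mine might be realized in practice, the following is a minimal, self-contained Python sketch using scikit-learn and the lime package; the synthetic interactions, feature names, and action labels are invented for illustration, and scikit-learn’s MLPClassifier stands in for the MLP described above (128 and 64 hidden units, ReLU, softmax output, Adam with learning rate 0.001, batch size 32, 50 epochs).

# Hedged sketch of LIME-Mine Steps 2-4: train a black-box MLP, then explain
# one prediction with a LIME local linear model. All data are synthetic.
import numpy as np
from sklearn.neural_network import MLPClassifier
from lime.lime_tabular import LimeTabularExplainer

rng = np.random.default_rng(42)
feature_names = ["hour_of_day", "location_id", "device_id", "function_id"]
action_names = ["turn_on_light", "play_music", "adjust_thermostat"]

# Synthetic stand-in for preprocessed user behavior data.
X = np.column_stack([
    rng.integers(0, 24, 1000),  # hour of interaction
    rng.integers(0, 5, 1000),   # encoded location
    rng.integers(0, 10, 1000),  # encoded device ID
    rng.integers(0, 8, 1000),   # encoded function usage
]).astype(float)
y = rng.integers(0, len(action_names), 1000)

# Step 2: black-box MLP with the hyperparameters given in Section 3.1.
model = MLPClassifier(hidden_layer_sizes=(128, 64), activation="relu",
                      solver="adam", learning_rate_init=0.001,
                      batch_size=32, max_iter=50)
model.fit(X, y)

# Step 3: LIME builds a local linear approximation around one interaction.
explainer = LimeTabularExplainer(X, feature_names=feature_names,
                                 class_names=action_names,
                                 mode="classification")
exp = explainer.explain_instance(X[0], model.predict_proba, num_features=4)

# Step 4: interpretable feature weights for this single prediction.
for feature_rule, weight in exp.as_list():
    print(f"{feature_rule}: {weight:+.3f}")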

3.2. Server-Side Migration Mechanism for IoT Device Applications Based on Group Access Characteristics

This research proposes a server-side migration mechanism for IoT device applications, leveraging group access patterns to enhance user experience and ensure a consistent connection quality across diverse network environments.
The increasing prevalence of IoT devices and applications demands efficient resource allocation and optimized network performance. Traditional cloud-based deployments often suffer from high latency, especially in geographically dispersed scenarios. Server-side migration addresses this challenge by relocating services or specific service modules from centralized data centers to edge cloud platforms located closer to users. This approach minimizes latency and improves user responsiveness, crucial for real-time data processing in IoT applications.
This study investigates the feasibility and benefits of migrating server-side modules within IoT applications, considering the potential network overhead associated with relocation. A migration gain function is developed to quantify the performance improvements versus the associated costs of migrating modules from the cloud to the edge.
A migration gain function is defined to assess the suitability of migrating specific server-side modules. This function accounts for both performance gains, such as reduced latency, and the associated costs, including bandwidth consumption for data replication. The function reflects the difference between the performance gain of a module after migration and the migration cost, which is prototyped as in Equation (1).
M(a) = ΔP(a, c, c′) − X·ΔC(a, c, c′)  (1)
where M(a) represents the migration gain for application a; ΔP(a, c, c′) reflects the performance improvement gained by migrating the server side of IoT device application a from the central cloud platform c to the edge cloud c′; ΔC(a, c, c′) represents the cost incurred by this migration, including bandwidth consumption and the resource requirements of the migrated application a; and X is a weighting factor that balances the migration cost against the performance gain.
A higher value of M(a) indicates a greater potential benefit from migration, i.e., the application is more suitable for relocation to the edge cloud.
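As a small numerical illustration of Equation (1), the sketch below ranks hypothetical applications by migration gain; the gain, cost, and weighting values are invented and carry no units from the paper.

# Hedged sketch: ranking applications by the migration gain of Equation (1),
# M(a) = dP(a, c, c') - X * dC(a, c, c'). All values are hypothetical.
X_WEIGHT = 0.5  # weighting factor between migration cost and performance gain

apps = {
    # name: (performance gain dP, migration cost dC)
    "video_transcode": (8.0, 6.0),
    "sensor_dashboard": (3.0, 1.0),
    "voice_assistant": (5.0, 9.0),
}

def migration_gain(dp: float, dc: float) -> float:
    return dp - X_WEIGHT * dc

for name, (dp, dc) in sorted(apps.items(),
                             key=lambda kv: -migration_gain(*kv[1])):
    print(f"{name}: M = {migration_gain(dp, dc):.2f}")
# Applications with higher M(a) are better candidates for edge migration.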
Edge cloud resources are typically more limited than traditional cloud platforms. To address this, the research proposes a dynamic server management strategy that prioritizes migrating applications with high user group preferences to the edge cloud. This strategy optimizes resource utilization by allocating edge cloud resources to applications with a greater user base.
The research further explores a “replacement” mechanism for dynamically updating the set of applications hosted on the edge cloud based on changing user preferences and application popularity. This ensures that the edge cloud resources are always allocated to the most relevant and frequently used applications. Computing and network resources perform different tasks in media services, and the unified model will first characterise the impact of content processing and content distribution on the user’s QoS (e.g., the impact of bandwidth on video quality); further, a matching method will be designed to form a model for the joint impact of computing and network resources on QoS as follows:
P(a, xa) + β(a)P′(a, ya)
where P(a, xa) represents the QoS gain achieved by allocating computational resources xa to IoT application a. P′(a, ya) represents the QoS gain achieved by allocating network resources ya to application a. β(a) is a matching function that weighs the relative importance of computing and network resources for application a. The optimization problem for the edge cloud computing platform involves maximizing the overall QoS, subject to resource constraints:
max Σa [P(a, xa) + β(a)P′(a, ya)]
subject to Σa xa ≤ X and Σa ya ≤ Y
where X and Y represent the total allocation of computing and network resources, respectively. This optimization problem explores solutions through the investigation of Karush–Kuhn–Tucker (KKT) conditions and the development of heuristic algorithms. The research also analyzes the optimization model to accommodate heterogeneous application requirements, considering scenarios where some applications rely solely on network resources.
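Assuming concave (diminishing-returns) QoS functions, the joint allocation problem above can be sketched with a generic constrained solver, as below; the logarithmic utilities, β values, and capacities are illustrative stand-ins for P, P′, β, X, and Y rather than the paper’s models.

# Hedged sketch: maximize sum_a [P(a, x_a) + beta(a) * P'(a, y_a)] subject
# to sum_a x_a <= X and sum_a y_a <= Y, with hypothetical log utilities.
import numpy as np
from scipy.optimize import minimize

n_apps = 3
beta = np.array([1.0, 0.5, 2.0])   # relative weight of network QoS per app
X_total, Y_total = 10.0, 6.0       # total computing / network resources

def neg_qos(z):
    x, y = z[:n_apps], z[n_apps:]  # computing and network allocations
    return -np.sum(np.log1p(x) + beta * np.log1p(y))

constraints = [
    {"type": "ineq", "fun": lambda z: X_total - np.sum(z[:n_apps])},
    {"type": "ineq", "fun": lambda z: Y_total - np.sum(z[n_apps:])},
]
bounds = [(0.0, None)] * (2 * n_apps)

res = minimize(neg_qos, np.full(2 * n_apps, 1.0), method="SLSQP",
               bounds=bounds, constraints=constraints)
print("x* =", res.x[:n_apps].round(3), "y* =", res.x[n_apps:].round(3))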
When both application-side and service-side modules of an IoT device terminal application can be migrated to the edge cloud, new optimization opportunities arise. This research proposes two key techniques:
Application–Service “De-transmission”: by strategically selecting service-side and application-side modules for co-deployment on the edge cloud, it is possible to eliminate or significantly reduce the amount of data transmission between the two sides. For example, deploying a video compositing server on the edge cloud can minimize the transmission of large video files to a centralized server.
Content Processing “De-iteration”: many IoT device terminal applications involve iterative content processing, which can introduce latency. By co-deploying application-side and service-side modules with iterative computation modes on the edge cloud, the latency caused by data iteration can be substantially reduced.

4. Experimental Evaluation and Analysis

This section presents the experimental evaluation of the proposed LIME-Mine approach. The experiments focus on assessing the potential advantages of service localization for edge IoT users within enhanced broadband, low latency, and high-reliability scenarios. The evaluation considers the impact of integrating user preferences on key performance metrics of traditional wireless networks, including Upstream and Downstream Traffic Volumes, examining the influence of user preferences on the volume of data transmitted and received over the network; End-to-End Delay, analyzing the impact of service localization on the overall latency experienced by users in the network; and Upstream and Downstream Throughput, evaluating the effect of user preferences on the data transfer rates in both upload and download directions.
The experimental validation is conducted within a real-world indoor LTE network environment. Table 1 outlines the configuration of the main parameters used in the LTE network.

4.1. Explainable AI (XAI) Communication Carve-Outs for IoT User Behavior

Group recommendation plays a significant role in IoT device end-use systems, where users form relational groups to collectively receive multimedia content and communicate with each other. Traditional group recommendation methods face limitations:
Preference Inference: they often rely on historical behavior data to infer group member preferences, struggling to capture preferences from inactive users or sparse historical data.
Relationship Modeling: these methods typically neglect the relationships between group members, hindering the ability to accurately capture individual preferences within a group.
To address these limitations, this research proposes a group recommendation framework for IoT devices that jointly exploits interrelationships and user behaviors. This framework not only infers the preferences of IoT application groups but also models the tolerant and altruistic characteristics of group members, enhancing the accuracy and relevance of recommendations.

4.2. Experimental Evaluation of User Behavior Integration

To evaluate the effectiveness of integrating user behavior data through LIME-Mine, three scenarios were compared:
Scenario 1: Traditional Networks (Without MEC): baseline scenario without Multi-access Edge Computing (MEC) platforms.
Scenario 2: Traditional Networks (With MEC): traditional networks with MEC platforms deployed but without user preference considerations.
Scenario 3: Networks with User Behavior Data: traditional networks with MEC platforms and the integration of user behavioral data through LIME-Mine.
Experimental Methodology:
Service Testing: upstream and downstream FTP service tests were conducted in each scenario.
Location Control: terminal locations were fixed to ensure consistent wireless network environments and minimize the impact of sudden changes on test results.
Table 2 and Figure 2 present the results of the network throughput tests.
The test results indicate that integrating user behavior data through LIME-Mine has a minimal impact on network throughput. The MAC throughput for both uplink and downlink remains comparable across all scenarios. A paired t-test was conducted to assess the statistical significance of the difference in throughput between the public network and public network with LIME-Mine. The p-value for the downlink throughput was 0.87, and the p-value for the uplink throughput was 0.92. These results indicate that there is no statistically significant difference in throughput between the two scenarios, suggesting that incorporating user preferences does not significantly affect the overall network capacity. Furthermore, the results demonstrate the ability to seamlessly integrate local services without interfering with the performance of regular public network services. This observation highlights the flexibility of the LIME-Mine approach in supporting local connectivity without compromising network throughput.
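For reference, a paired t-test of this kind can be computed as in the brief sketch below; the throughput samples shown are hypothetical placeholders, not the measured values from Table 2.

# Hedged sketch: paired t-test comparing throughput with and without
# LIME-Mine. Sample values are hypothetical.
from scipy.stats import ttest_rel

public = [54.8, 54.6, 55.0, 54.7, 54.9]      # Mbit/s, baseline runs
with_lime = [54.8, 54.5, 55.1, 54.6, 54.9]   # Mbit/s, LIME-Mine runs

t_stat, p_value = ttest_rel(public, with_lime)
print(f"t = {t_stat:.3f}, p = {p_value:.3f}")
# A large p-value (e.g., > 0.05) indicates no statistically significant
# difference between the paired measurements.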

4.3. Experimental Testing of Network End-to-End Latency

To evaluate the impact of incorporating LIME-Mine user behavior data on network end-to-end latency, ping service tests were conducted using packet sizes of 32 bytes and 1500 bytes. Two scenarios were compared: a baseline scenario without user behavior data and a scenario with user behavior data integrated. Multiple IP addresses were tested to ensure comprehensive evaluation.
To assess the difference in network delay between local access and traditional circuitous access through the core network, the local server was assigned both a local IP address and a public IP address. These IP addresses represented the two access methods, respectively (Table 3).
The experimental results demonstrate the effectiveness of LIME-Mine in reducing network latency, particularly when leveraging local access and service deployment.
Public Network Services: The introduction of user behavior data results in a minimal increase in end-to-end delay (approximately 0.25 ms) primarily attributed to the processing of user data. This slight increase is negligible compared to the potential latency reduction achieved through service localization. A paired t-test was conducted to assess the statistical significance of the difference in latency between the public network and public network with LIME-Mine. The p-value for the 32-byte packet size was 0.21, and the p-value for the 1500-byte packet size was 0.38. These results indicate that there is no statistically significant difference in latency between the two scenarios, suggesting that the increase in latency introduced by LIME-Mine is minimal.
Service Localization: Compared to traditional access through the public network, localized service deployment significantly reduces network end-to-end latency, achieving a latency reduction of 60% to 91%. The specific latency saving is correlated with the distance between the public network service deployment location and the user access location. This observation highlights the significant benefits of service localization in reducing latency, especially for users located farther from the core network.
Local Access: Utilizing local access (local IP address) compared to circuitous access (public IP address) for local servers yields a network end-to-end delay reduction of 1.5 ms. This reduction highlights the efficiency of local access, with the core network processing delay contributing approximately 1.25 ms. This result underscores the effectiveness of minimizing network hops by accessing services directly from local edge devices.

4.4. A Data-Driven Content Distribution Strategy Based on LIME-Mine User Data

The effectiveness of IoT edge content delivery strategies is significantly influenced by a complex interplay of user behaviors, preferences, small group characteristics, and local content popularity. Unlike centralized content servers, edge devices serve a significantly smaller user base, leading to sparse user requests and data, resulting in high stochasticity and a prevalence of out-group statistics. Consequently, traditional content delivery strategies that rely on global popularity metrics are no longer suitable for effectively exploiting the distinct characteristics of edge networks.
To address this challenge, the LIME-Mine algorithm provides a foundation for edge content distribution. This algorithm enables real-time analysis of user behavior, content interaction patterns, small group dynamics, and local content popularity. By understanding these factors, the LIME-Mine algorithm can optimize content distribution to meet the unique needs of edge networks. The application of the algorithm to user preference content hotspot prediction is shown in Figure 3.
To address the overhead associated with content deployment in edge networks, we propose a strategy that leverages content similarity. This approach aims to mitigate the overhead incurred when user requests are redirected between hotspots with varying load levels. Specifically, we encourage a group of overloaded hotspots with high content similarity to redirect their requests to underutilized hotspots. This strategy is informed by migration and replacement policies, seeking to optimize resource allocation. Our experimental analysis demonstrates how the integration of user preferences through LIME-Mine can significantly enhance the efficiency of content distribution in edge networks, minimizing the overhead associated with network-wide content deployment.
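A minimal sketch of this similarity-based redirection follows: each overloaded hotspot hands requests to the most content-similar underutilized hotspot. The content vectors, load values, and thresholds are invented for illustration.

# Hedged sketch: redirect requests from overloaded hotspots to the most
# content-similar underutilized hotspot. All values are hypothetical.
import numpy as np

# Each hotspot: (content popularity vector, current load in [0, 1]).
hotspots = {
    "edge_A": (np.array([5.0, 1.0, 0.0, 2.0]), 0.95),  # overloaded
    "edge_B": (np.array([4.0, 2.0, 0.0, 1.0]), 0.40),
    "edge_C": (np.array([0.0, 0.0, 6.0, 1.0]), 0.30),
}
OVERLOAD, UNDERUSED = 0.9, 0.5

def cosine(u, v):
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

for name, (vec, load) in hotspots.items():
    if load < OVERLOAD:
        continue  # only overloaded hotspots redirect
    candidates = [(cosine(vec, v), other)
                  for other, (v, l) in hotspots.items()
                  if other != name and l < UNDERUSED]
    if candidates:
        sim, target = max(candidates)
        print(f"redirect from {name} to {target} (similarity {sim:.2f})")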

5. Discussion

User behavior in IoT device terminal applications is influenced by a complex interplay of factors, including network structure, user activities, location, and context. Content access within IoT systems often exhibits characteristics such as edge computing, coterie formation, and group dynamics. Analyzing user behavior patterns within IoT device terminal applications through Explainable Artificial Intelligence (XAI) methods and establishing a comprehensive mapping between user behavioral patterns and deployment resource requirements is crucial for achieving high-quality service deployment.
In collaborative edge cloud and cloud computing environments, the distributed computing of IoT device terminal applications presents two key characteristics:
Application End Module Migration: the ability to migrate application modules to edge cloud platforms.
Application Service End Migration: the ability to migrate service modules to edge cloud platforms.
The realization of high-quality edge cloud deployment for IoT multimedia access necessitates modeling different cloud computing resources and establishing a collaborative migration mechanism between application endpoints, service endpoints, and user equipment.

6. Conclusions

User behavior in IoT device terminal applications is influenced by a complex interplay of factors, including network structure, user activities, location, and context. Content access within IoT systems often exhibits characteristics such as edge computing, coterie formation, and group dynamics. Understanding and leveraging these user behavior patterns is crucial for achieving high-quality service deployment in IoT applications.
This research leverages Explainable Artificial Intelligence (XAI) methods to analyze user behavior patterns within IoT device terminal applications. The goal is to establish a comprehensive mapping between user behavioral patterns and deployment resource requirements, enabling the optimization of resource allocation strategies.
In collaborative edge cloud and cloud computing environments, the distributed computing of IoT device terminal applications presents two key aspects: Application End Module Migration, the ability to migrate application modules to edge cloud platforms, bringing computational resources closer to users and reducing latency; and Application Service End Migration, the ability to migrate service modules to edge cloud platforms, allowing for more efficient distribution of services and improved responsiveness.
Realizing high-quality edge cloud deployment for IoT multimedia access necessitates the modeling of different cloud computing resources and the establishment of a collaborative migration mechanism between application endpoints, service endpoints, and user equipment. This collaborative approach ensures a seamless and optimized experience for users, balancing resource utilization and performance.
IoT device terminal applications, particularly those involving rich media, have high demands for both computing resources for content processing and network resources for content distribution. The increasing need for real-time content processing and distribution in IoT applications further underscores the importance of optimizing the joint allocation of computing and network resources.
Efficiently allocating and coordinating computing and network resources in the edge cloud platform is crucial for ensuring the effective cooperation between content processing and content transmission. This optimization is critical for achieving high-quality mobile media applications and providing a theoretical foundation for the large-scale deployment of IoT device terminal services.

Author Contributions

X.Y. and J.Z. prepared the data layers, figures, and tables; K.H. and Y.Z. performed the experiments and analyses. X.C. supervised the research, finished the first draft of the manuscript, edited and reviewed the manuscript, and contributed to the model construction and verification. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by a grant from Key Laboratory for Crop Production and Smart Agriculture of Yunnan Province, Yunnan Provincial Agricultural Basic Research Joint Project (No. 202301BD070001-203), Yunnan Provincial Basic Research Project (No. 202101AT070267), the scientific research fund project of Kunming Metallurgy College (No. 2020XJZK01), the scientific research fund project of Yunnan Provincial Education Department (No. 2021J0943).

Data Availability Statement

The original contributions presented in the study are included in the article; further inquiries can be directed to the corresponding author.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Naha, R.K.; Garg, S. Multi-criteria-based Dynamic User Behaviour—Aware Resource Allocation in Fog Computing. ACM Trans. Internet Things 2021, 2, 1–31. [Google Scholar] [CrossRef]
  2. Irizar-Arrieta, A.; Casado-Mansilla, D.; Garaizar, P.; López-de-Ipiña, D.; Retegi, A. User Perspectives in the Design of Interactive Everyday Objects for Sustainable Behaviour. Int. J. Hum.-Comput. Stud. 2020, 137, 102393. [Google Scholar] [CrossRef]
  3. Sayakkara, A.P.; Le-Khac, N.A. Electromagnetic Side-Channel Analysis for IoT Forensics: Challenges, Framework, and Datasets. IEEE Access 2021, 9, 113585–113598. [Google Scholar] [CrossRef]
  4. Jamil, F.; Kahng, H.K.; Kim, S.; Kim, D.H. Towards Secure Fitness Framework Based on IoT-Enabled Blockchain Network Integrated with Machine Learning Algorithms. Sensors 2021, 21, 1640. [Google Scholar] [CrossRef] [PubMed]
  5. Dazeley, R.; Vamplew, P.; Cruz, F. Explainable reinforcement learning for broad-xai: A conceptual framework and survey. Neural Comput. Appl. 2023, 35, 16893–16916. [Google Scholar] [CrossRef]
  6. Nadeem, A. Understanding Adversary Behavior via XAI: Leveraging Sequence Clustering to Extract Threat Intelligence. Ph.D. Thesis, TU Delft, Delft, The Netherlands, 2024. [Google Scholar]
  7. Arrieta, A.B.; Díaz-Rodríguez, N.; Del Ser, J.; Bennetot, A.; Tabik, S.; Barbado, A.; García, S.; Gil-López, S.; Molina, D.; Benjamins, R.; et al. Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI. Inf. Fusion 2020, 58, 82–115. [Google Scholar] [CrossRef]
  8. Kozielski, M. Contextual Explanations for Decision Support in Predictive Maintenance. Appl. Sci. 2023, 13, 10068. [Google Scholar] [CrossRef]
  9. Wang, Q.; L’Yi, S.; Gehlenborg, N. DRAVA: Aligning Human Concepts with Machine Learning Latent Dimensions for the Visual Exploration of Small Multiples. In Proceedings of the 2023 CHI Conference on Human Factors in Computing Systems, Hamburg, Germany, 23–28 April 2023. [Google Scholar]
  10. Ghada, E.; Mervat, A.; Manfred, R. Explainability of Predictive Process Monitoring Results: Can You See My Data Issues? Appl. Sci. 2022, 12, 8192. [Google Scholar] [CrossRef]
  11. Patro, P.S.; Padhy, N. A Secure Remote Health Monitoring for Heart Disease Prediction Using Machine Learning and Deep Learning Techniques in Explainable Artificial Intelligence Framework. Eng. Proc. 2023, 58, 78. [Google Scholar] [CrossRef]
  12. Kenny, E.M.; Ford, C.; Quinn, M.; Keane, M.T. Explaining black-box classifiers using post-hoc explanations-by-example: The effect of explanations and error-rates in XAI user studies. Artif. Intell. 2021, 294, 103459. [Google Scholar] [CrossRef]
  13. Bernardo, E.; Seva, R. Affective Design Analysis of Explainable Artificial Intelligence (XAI): A User-Centric Perspective. Informatics 2023, 10, 32. [Google Scholar] [CrossRef]
  14. Mahmud, M.; Kaiser, M.S.; Rahman, M.A.; Wadhera, T.; Brown, D.J.; Shopland, N.; Burton, A.; Hughes-Roberts, T.; Mamun, S.A.; Ieracitano, C.; et al. Towards explainable and privacy-preserving artificial intelligence for personalisation in autism spectrum disorder. In International Conference on Human-Computer Interaction; Springer International Publishing: Cham, Switzerland, 2022. [Google Scholar]
  15. Mill, E.; Garn, W.; Ryman-Tubb, N.; Turner, C. Opportunities in real time fraud detection: An explainable artificial intelligence (XAI) Research Agenda. Int. J. Adv. Comput. Sci. Appl. 2023, 14, 1172–1186. [Google Scholar] [CrossRef]
  16. Ramon, Y.; Farrokhnia, R.A.; Matz, S.C.; Martens, D. Explainable AI for psychological profiling from behavioral data: An application to big five personality predictions from financial transaction records. Information 2021, 12, 518. [Google Scholar] [CrossRef]
  17. Rodriguez, S.; Thangarajah, J. Explainable Agents (XAg) by Design. In Proceedings of the 23rd International Conference on Autonomous Agents and Multiagent Systems, Auckland, New Zealand, 6–10 May 2024. [Google Scholar]
  18. Chamola, V.; Hassija, V.; Sulthana, A.R.; Ghosh, D.; Dhingra, D.; Sikdar, B. A review of trustworthy and explainable artificial intelligence (xai). IEEE Access 2023, 11, 78994–79015. [Google Scholar] [CrossRef]
  19. Wijekoon, A.; Wiratunga, N.; Martin, K.; Corsar, D.; Nkisi-Orji, I.; Palihawadana, C.; Bridge, D.; Pradeep, P.; Agudo, B.D.; Caro-Martínez, M. CBR Driven Interactive Explainable AI. In International Conference on Case-Based Reasoning; Springer Nature: Cham, Switzerland, 2023. [Google Scholar]
  20. Gyamfi, E.O.; Qin, Z.; Adu-Gyamfi, D.; Danso, J.M.; Browne, J.A.; Adom, D.K.; Botchey, F.E.; Opoku-Mensah, N. A Model-agnostic XAI Approach for Developing Low-cost IoT Intrusion Detection Dataset. J. Inf. Secur. Cybercrimes Res. 2023, 6, 74–88. [Google Scholar] [CrossRef]
  21. Kumar, J.N.; Stan, K. Current and Potential Applications of Ambient Artificial Intelligence. Mayo Clin. Proc. Digit. Health 2023, 1, 241–246. [Google Scholar]
  22. Alani, M.M.; Damiani, E. XRecon: An Explainbale IoT Reconnaissance Attack Detection System Based on Ensemble Learning. Sensors 2023, 23, 5298. [Google Scholar] [CrossRef] [PubMed]
  23. Algirdas, D.; Egidijus, K.; Laura, K. Building XAI-Based Agents for IoT Systems. Appl. Sci. 2023, 13, 4040. [Google Scholar] [CrossRef]
  24. Khan, W.Z.; Ahmed, E.; Hakak, S.; Yaqoob, I.; Ahmed, A. Edge computing: A survey. Future Gener. Comput. Syst. 2019, 97, 219–235. [Google Scholar] [CrossRef]
  25. Nisha Angeline, C.V.; Lavanya, R. Fog computing and its role in the Internet of Things. In Advancing Consumer-Centric Fog Computing Architectures; IGI Global: Hershey, PA, USA, 2019; pp. 63–71. [Google Scholar]
  26. Wang, S.; Xu, J.; Zhang, N.; Liu, Y. A survey on service migration in mobile edge computing. IEEE Access 2018, 6, 23511–23528. [Google Scholar] [CrossRef]
  27. Agrawal, N.; Saxena, A. Artificial Intelligence (AI) Equipped Edge Internet of Things (IoT) Devices in Security. In Advanced IoT Technologies and Applications in the Industry 4.0 Digital Economy; CRC Press: Boca Raton, FL, USA, 2024; pp. 296–308. [Google Scholar]
Figure 1. Matrix of the impact of IoT user interactions on user preferences.
Figure 2. MAC network throughput test.
Figure 3. User preference content hotspot prediction. (a) Day 1 to 5. (b) Day 5 to 10. (c) Day 10 to 15. (d) Day 15 to 20.
Table 1. Wireless network main parameter configuration.
Network Parameter | Value
System Bandwidth | 15 MHz
Number of PRBs | 75
Maximum Transmit Power of Terminal | 23 dBm
BTS Transmit Power | 15 dBm
Uplink centre frequency | 1927.5 MHz
Downlink centre frequency | 2117.5 MHz
Frequency multiplexing factor | 1
Test Service | FTP, Web, Ping
Table 2. Network throughput experimental test.
Classification | Configuration | RSRP/dBm | SINR/dB | MAC Throughput (Mbit/s)
Downlink throughput | Public network | −54.1 | 21.6 | 54.8
Downlink throughput | Public network (LIME-Mine) | −54.1 | 22.5 | 54.8
Downlink throughput | Intranet | −53.7 | 22.1 | 55.6
Uplink throughput | Public network | −53.3 | 23.3 | 35.8
Uplink throughput | Public network (LIME-Mine) | −53.9 | 23.9 | 35.5
Uplink throughput | Intranet | −53.4 | 23.0 | 36.6
Table 3. Network end-to-end latency testing results.
Classification | Configuration | www.test1.cn/ms | www.test2.cn/ms | www.test3.cn/ms | Latency Introduced by LIME-Mine/ms
32 bytes | Public network | 39.0 | 35.0 | 86.0 | −0.5
32 bytes | Public network (LIME-Mine) | 38.0 | 39.0 | 87.0 |
32 bytes | Discrepancy | 1.0 | −4.0 | −1.0 |
1500 bytes | Public network | 43.0 | 36.0 | 91.0 | 1.0
1500 bytes | Public network (LIME-Mine) | 40.0 | 38.0 | 88.0 |
1500 bytes | Discrepancy | 3.0 | −2.0 | 3.0 |

Share and Cite

Cai, X.; Zhang, J.; Zhang, Y.; Yang, X.; Han, K. LIME-Mine: Explainable Machine Learning for User Behavior Analysis in IoT Applications. Electronics 2024, 13, 3234. https://doi.org/10.3390/electronics13163234
