Multi-Network Latency Prediction for IoT and WSNs
Abstract
1. Introduction
2. Literature Review
2.1. Understanding Data Packets
2.2. Data Packet Prediction in Low-Rate and Low-Power Networks
- Energy Efficiency: WSNs are often deployed in resource-constrained environments, and power efficiency is a critical concern. Predicting data packets’ contents allows sensor nodes to make informed decisions about when and how to transmit data [18]. By transmitting only necessary or relevant information, nodes can conserve energy, thereby extending the network’s operational lifetime [19].
- Data Reduction: In WSNs, data transmission consumes a significant amount of energy. Predicting data packets’ contents can enable data reduction techniques [17]. For instance, if a sensor node can predict that its data will remain relatively constant over a certain period of time, it may send only occasional updates instead of repeatedly transmitting the same data.
- Quality of Service (QoS): Data packet prediction could help maintain or enhance QoS in WSNs. By predicting when important data will arrive and prioritizing its transmission, the network can ensure that critical information is delivered with lower latency and higher reliability.
- Routing Optimization: Data packet prediction can also play a role in optimizing routing algorithms within the network. If a node can predict that certain data packets are likely to be forwarded to a particular sink node, it can optimize its routing decisions accordingly, reducing unnecessary hops and improving network efficiency [20,21].
- Context-Aware Applications: WSNs are often used in applications where context-awareness is essential, such as environmental monitoring or healthcare. Data packet prediction can help in providing timely context updates to applications, allowing them to make informed decisions based on the predicted data.
- Data Analytics: Data packet prediction often involves the use of predictive tools or techniques, which can learn patterns and trends in datasets. This can be valuable not only for prediction but also for data analytics and anomaly detection. For example, if a sensor node predicts a certain data pattern but observes an anomaly, alerts can be triggered for further checks and resolution.
- Reduced Network Congestion: By predicting when and where data packets are likely to be generated, WSNs can avoid network congestion, which can occur when multiple nodes simultaneously transmit data. Predictive algorithms can help in scheduling data transmissions to minimize collisions and contention for network resources.
2.3. Existing Network Prediction Models
2.4. Understanding Network Prediction Models
- Linear Interpolation Model: This is a method used to estimate a value within a range based on the known values at the endpoints of that range. It involves constructing a straight line between two known data points and using that line to approximate the value at an intermediate point. Linear interpolation assumes that the relationship between the data points is linear, meaning that the change in the dependent variable y is constant for each unit change in the independent variable x. It provides a straightforward and relatively accurate approximation when the data points are well-behaved and follow a linear trend [36,37,38].
- Linear Extrapolation Model: This is a mathematical technique used to estimate values beyond the range of observed data points by assuming a linear relationship between the data points. In other words, it extends a straight line or linear trend that fits the observed data points into the future or past [39]. Consider a set of data points (x_i, y_i), where i = 1, 2, …, n. These data points are assumed to lie along a linear trend, and we want to predict the value of y at some point x beyond the range of our data. The linear extrapolation model can be expressed using the equation of a straight line, y = mx + b, where y is the predicted value, x is the value at which we want to make the prediction, m is the slope of the line, and b is the y-intercept of the line [40,41,42].
- Univariate Linear Regression Model: This is a statistical technique used for modelling the relationship between a single independent variable (x) and a dependent variable (y) [43]. The goal is to find a linear equation that best fits the data. A univariate linear regression is represented by the expression y = β0 + β1x + ε, where β0 is the intercept, β1 is the slope coefficient, and ε is the error term [43]. A minimal worked sketch of all three models is given after this list.
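To make the three models concrete, the Python sketch below applies each of them to a handful of illustrative (packet size, latency) points. The data values, variable names, and helper functions are placeholders for illustration only; they are not part of JosNet or the measured dataset.

```python
# Minimal sketch of the three prediction models discussed above.
# The (size, latency) pairs below are illustrative placeholders, not JosNet measurements.

def linear_interpolate(x, x0, y0, x1, y1):
    """Estimate y at x, with x0 <= x <= x1, assuming a straight line between the endpoints."""
    return y0 + (y1 - y0) * (x - x0) / (x1 - x0)

def linear_extrapolate(x, x0, y0, x1, y1):
    """Extend the same straight line beyond the observed range."""
    m = (y1 - y0) / (x1 - x0)   # slope
    b = y0 - m * x0             # y-intercept
    return m * x + b            # y = m*x + b

def univariate_regression(xs, ys):
    """Ordinary least-squares fit y = b0 + b1*x over all observed points."""
    n = len(xs)
    mean_x, mean_y = sum(xs) / n, sum(ys) / n
    b1 = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) \
         / sum((x - mean_x) ** 2 for x in xs)
    b0 = mean_y - b1 * mean_x
    return lambda x: b0 + b1 * x

sizes = [8, 16, 32, 64, 128, 256]                        # D, bytes (placeholder)
latencies = [0.028, 0.030, 0.035, 0.042, 0.050, 0.058]   # T, seconds (placeholder)

# Interpolation: query inside the observed range (between 32 and 64 bytes).
print(linear_interpolate(40, 32, 0.035, 64, 0.042))

# Extrapolation: query beyond the observed range (> 256 bytes).
print(linear_extrapolate(300, 128, 0.050, 256, 0.058))

# Regression: one line fitted to all observed points.
predict = univariate_regression(sizes, latencies)
print(predict(40), predict(300))
```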
2.5. Comparison: Interpolation, Extrapolation and Machine Learning for Predictions
3. Development Process
- Experimental Approach: Involves collection of data through experiments [51,52]. The experimental method aligns with RO1 and RO2. At this stage of the research, a considerable amount of time was spent capturing data (end-to-end latency time and data packet size) from the experimental hardware setup detailed in Section 3.1 below, and the data collected were then used for analysis and for training the prediction model discussed in Section 3.4 and Section 4.1. The experimental approach also involves setting up different network connection pairs to ensure a functional routing schedule while maintaining the interoperability of heterogeneous WSNs.
- Analytical Approach: Involves the use of computational or mathematical approaches to analyze data, which are then used to develop a model [53,54]. The analytical approach aligns with RO3 and RO4, which are designed to provide insight into data packet behavior and to analyze and evaluate the level of accuracy, respectively. The use of interpolation, extrapolation, univariate regression, and statistical models and/or algorithms to evaluate the outcome or performance of JosNet predictions is part of the analytical method of this research.
- Network Performance Analysis: This method cuts across all the research objectives and involves careful analysis of the network performance metrics to identify network drawbacks and provide credible solutions [55].
- Research and Investigation:
- Research Questions and Objectives Defined: Here, existing gaps are identified, and the research scope and objectives are defined as discussed in Section 1.
- Literature Review: Detailed study on what has been done, existing work, understanding the architecture and the availability of the source code for each network protocol implemented in this study.
- Hardware Acquisition: This process requires careful consideration of the specific parameters on each development board to ensure compatibility with the low-rate and low-power wireless network protocols or expected programming configuration and resources.
- Implementation and Experimentation: This stage required a combination of software and hardware technical trials, testing, and experimentation, as well as addressing different software and hardware compatibility concerns as the project progressed.
- JosNet Platform Development: Programming of the software integration platform using the development source code for each network technology (Bluetooth LE, Zigbee network protocol, Thread, and WirelessHART) and creating an interpretation mapping for each network.
- Phase 1: Heterogeneous Network Integration and Interoperability: Further details on this point can be found in [3].
- Testing, Results and Validation: For phase 1, the parameters tested were primarily throughput and end-to-end latency time. To ensure communication is up to standard, dedicated hardware devices were further acquired for various reasons.
- Phase 2: Prediction Model, Dataset Development, JosNet Controller and PAFP: As part of the implementation and experimentation, phase 2 focuses on data packet size and end-to-end latency time predictions as described in Section 4 of this paper. Ensuring that timestamps and related parameters (such as arrival time, destination and source network, and every other required parameter) are captured and stored in a local database, and then used by the model for prediction and routing, is critical to this study; further discussion is provided in Section 3.2.2 and Section 4 of this paper.
- Analytical and Performance Evaluation: After successful implementation, collecting, storing and evaluating the data is vital.
- Data capture and comparative analysis: The data were collected over an extended period, with more than 400 measurements for each connection pair. The average value is then used to perform interpolation or extrapolation; a small aggregation sketch is shown after this list. Section 4.2 and Section 4.3 discuss the comparative analysis using R-squared and other statistical correlations.
- Performance evaluation, results, and findings: Finally, we provide evidential discussions on the outcome and benefits of the findings from the study in Section 4.4 and Section 5.
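As noted in the data capture step above, each connection pair is measured repeatedly and the averaged latency is what feeds the interpolation/extrapolation step. The sketch below shows one way such aggregation could be done; the record layout, pair names, and values are illustrative assumptions, not the actual JosNet storage format or measurements.

```python
from collections import defaultdict
from statistics import mean

# Each record: (connection_pair, packet_size_bytes, end_to_end_latency_s).
# These rows are illustrative placeholders, not JosNet measurements.
records = [
    ("ZB-TH", 64, 0.071), ("ZB-TH", 64, 0.069), ("ZB-TH", 64, 0.073),
    ("BL-BL", 32, 0.031), ("BL-BL", 32, 0.029),
]

# Group latencies by (connection pair, packet size), then average each group.
groups = defaultdict(list)
for pair, size, latency in records:
    groups[(pair, size)].append(latency)

for (pair, size), values in sorted(groups.items()):
    print(f"{pair} @ {size} bytes: mean latency {mean(values):.4f} s over {len(values)} samples")
```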
3.1. Hardware
- i. For Bluetooth LE, the Core51822 module [56] has been used because it provides flexibility for developers and multiple connection interfaces.
- ii. For the Zigbee network in JosNet, we have utilized the CC2530 Zigbee module and the ZB502 development board. Both are developed by Waveshare and offer flexible connectivity options for the Zigbee network protocol.
- iii. The Thread network is represented using a low-cost IoT device, the nRF52840-MDK IoT development kit from Makerdiary [57]. It is versatile and compatible with a wide range of network protocols.
- iv. For WirelessHART, the WiHaRT WirelessHART development kit from Centero [58] has been used.
3.2. Process Flow of JosNet Brokerage Integration for Multi-Network Communication
3.2.1. Typical Devices on the Network Setup
- (a) ZBx represents a Zigbee external device, connected wirelessly to ZB1.
- (b) ZB1, BL1, and TH1 represent Zigbee node 1, Bluetooth LE node 1, and Thread node 1, respectively, all attached via USB to JosNet brokerage station 1.
- (c) ZB2, BL2, TH2, and WH1 represent Zigbee node 2, Bluetooth LE node 2, Thread node 2, and WirelessHART node 1, respectively, all attached via USB to JosNet brokerage station 2.
- (d) WHx represents a WirelessHART external device, connected wirelessly to WH1.
3.2.2. Process Flow for Multi-Network Data Packet Exchange and Integration
- ZBx initiates a request after connecting to the nearest Zigbee node (ZB1) attached to a JosNet brokerage station (on this occasion, station 1) about 20 m away. Initiating a request requires selecting the destination address and clicking “send”. Only external devices that are currently connected and visible appear in the selection list; devices that are not connected, or that are hidden, cannot be selected.
- ZB1 receives incoming packet messages and hands them over to JosNet at the Packet Arrival and Forwarding Point (PAFP). Note that network communications maintain the original network standard and are not altered in any way; however, the PAFP is designed to receive data packets, extract the payload information of the sender and intended destination, and pass the payload information to the brokerage station controller. The PAFP then receives a “modified version” of the same packet and forwards it to the required destination node.
- At JosNet Brokerage Station 1, the controller and routing table receive the data packet from the PAFP and then process the various information/commands attached within the packet (e.g., source protocol, destination address, destination protocol). The JosNet station controller also reads the routing table via the Serial Port Manager (SPM) for the destination device; an error message is sent if the destination cannot be found on any other station. If the network exists in the routing table and the “prediction model” is activated, the routing table uses this to modify the payload and provide a routing map, at the same time attaching a readable command for the next node (TH1). The appropriate COM port and network protocol are selected and handed over for the best routing (in this case, TH1 is selected). All COM ports receive the message if “Send Broadcast” is activated from the source (ZBx). A simplified routing sketch is given after this list.
- TH1 receives the packets from JosNet and forwards them to TH2. Again, the Thread network communication maintains the original network standard and is not altered in any way.
- TH2 hands over the received packets to JosNet brokerage station 2 at the PAFP.
- JosNet Brokerage Station 2 processes the packet again, as described for Brokerage Station 1 above, using the information received from the previous station (e.g., a routing map or prediction-model tag). When a tag is identified, the PAFP simply forwards the packet as described in the routing map (in this case, WH1 receives this packet).
- WH1 then forwards the message to WHx which is the intended destination connected wirelessly to the network.
- WHx receives the message packet from the source device (ZBx) in less than 150 milliseconds.
- Integration between Zigbee and WirelessHART is complete.
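To illustrate the brokerage logic walked through above, the following sketch mimics a station deciding where to forward a packet that arrives at its PAFP. The routing table contents, device names, and function are hypothetical simplifications for illustration and do not reproduce the actual JosNet controller implementation.

```python
# Hypothetical, simplified view of a JosNet brokerage station forwarding a packet.
# Device names mirror the example above (ZBx -> ZB1 -> TH1 -> TH2 -> WH1 -> WHx),
# but the data structures are illustrative, not the real controller code.

ROUTING_TABLE = {
    # destination device: (station it is reachable from, local node to hand the packet to)
    "WHx": ("station2", "WH1"),
    "ZBx": ("station1", "ZB1"),
}

def forward(packet, station):
    """Decide where a packet arriving at this station's PAFP should go next."""
    destination = packet["destination"]
    route = ROUTING_TABLE.get(destination)
    if route is None:
        return {"error": f"destination {destination} not found on any station"}
    next_station, next_node = route
    if next_station == station:
        # Destination is reachable from this station: hand over to the local node.
        return {"forward_to": next_node, "via": station}
    # Otherwise relay towards the other station (the Thread bridge in the example).
    return {"forward_to": "TH1" if station == "station1" else "TH2", "via": station}

packet = {"source": "ZBx", "destination": "WHx", "payload": b"temperature=21.5"}
print(forward(packet, "station1"))   # relayed via the Thread bridge
print(forward(packet, "station2"))   # handed to WH1 for wireless delivery to WHx
```

In the full system, the routing table is managed by the station controller and can additionally be informed by the predicted end-to-end latency described in Section 3.4.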
3.3. Time Synchronizing between Devices
3.4. Prediction Model
- (a) At Devices: Communication is initiated at the hardware devices through any available network protocol, and this occurs as described in Section 3.2, through the JosNet Brokerage Station.
- (b) At JosNet: A list of information is captured by JosNet for every communication. This includes the source and destination network, source and destination device ID, COM port, baud rate, packet size, packet departure time, etc. This is essential for prediction and network routing. Currently, the source and destination network names (ZB, BL, TH, WH), the actual (initial) data size, and time are the only input features for the prediction model. At JosNet, the controller, which is considered JosNet’s brain, manages all the activities, links all the resources, and processes the required information and commands for each wireless network.
- (c) Network Pairing (Filter): Packets are filtered and arranged according to wireless network pair. For example, if the source network name is captured as ZB and the intended destination is also captured as ZB, this represents Zigbee-Zigbee communication, and all further details about the communication are recorded for storage. This is vital because multiple networks pass through the JosNet stations; details about each communication pair must be recorded accurately to avoid misleading the routing table.
- (d) Local Storage: For prediction, three parameters are captured and stored: (i) data packet size, (ii) time at the source (when “send” is triggered), and (iii) arrival time at the destination (when the last data packet arrives at the destination). The difference between (ii) and (iii) is the “end-to-end latency time” [59,60]. Storage is achieved using the “EPPlus” library from the NuGet Package Manager in Visual Studio, which permits multiple inputs to be written into the same CSV/Excel file. For easy retrieval, files are stored according to connection pair. This means that data for ZB-TH is stored separately from BL-BL or TH-BL, etc.
- (e) Prediction Algorithm (Manual or Automatic): Next, the prediction tool reads data from the stored file directory and performs calculations based on the available data in the CSV file. Prediction can be triggered in two ways: (i) Manually, for results validation and authentication purposes; JosNet makes room for a manual prediction check, which is done by selecting a wireless network pair, followed by a graphical display of data packet size (in bytes) and predicted time at the point marked “4” in Figure 3; and (ii) Automatically, where the data packet arrival time can be requested by the routing table during communication between protocols to facilitate easy access to the destination node. A sketch of this storage-and-prediction pipeline follows this list.
- (f) Displayed Result: For manual predictions, results are displayed showing the prediction value, followed by the appearance of a point on the prediction graph.
- (g) Routing Table: For large-scale implementation, which involves two or more JosNet stations, the routing table, which is managed by the controller, holds the list of devices connected to the entire network, helping with data packet routing and hopping across multiple devices. Knowledge of the end-to-end data packet time gives the routing table efficient network routing management, and it can therefore be requested or triggered automatically.
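A compact way to picture steps (c)–(e) is the pipeline sketched below: each measurement is appended to a per-connection-pair file, and a prediction query either interpolates inside the stored packet-size range or extrapolates beyond it. The file names, CSV layout, and function names are assumptions for illustration; JosNet itself stores these records with the EPPlus library in a .NET environment.

```python
import csv
from pathlib import Path

DATA_DIR = Path("josnet_data")   # hypothetical local storage directory

def store_sample(pair, size_bytes, send_time, arrival_time):
    """Append one (packet size, end-to-end latency) row to the file for this connection pair."""
    DATA_DIR.mkdir(exist_ok=True)
    latency = arrival_time - send_time   # end-to-end latency time
    with open(DATA_DIR / f"{pair}.csv", "a", newline="") as f:
        csv.writer(f).writerow([size_bytes, f"{latency:.4f}"])

def predict_latency(pair, size_bytes):
    """Predict latency for a packet size: interpolate inside the stored range, else extrapolate."""
    with open(DATA_DIR / f"{pair}.csv", newline="") as f:
        points = sorted((int(s), float(t)) for s, t in csv.reader(f))
    if len(points) < 2:
        raise ValueError("need at least two stored samples for this pair")
    # Choose the bracketing segment for interpolation, or the nearest edge segment for extrapolation.
    if size_bytes <= points[0][0]:
        (x0, y0), (x1, y1) = points[0], points[1]
    elif size_bytes >= points[-1][0]:
        (x0, y0), (x1, y1) = points[-2], points[-1]
    else:
        for (x0, y0), (x1, y1) in zip(points, points[1:]):
            if x0 <= size_bytes <= x1:
                break
    return y0 + (y1 - y0) * (size_bytes - x0) / (x1 - x0)

# Example usage with placeholder measurements (not JosNet data):
store_sample("ZB-TH", 32, send_time=10.000, arrival_time=10.058)
store_sample("ZB-TH", 64, send_time=11.000, arrival_time=11.069)
print(f"Predicted latency for 48 bytes: {predict_latency('ZB-TH', 48):.4f} s")
```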
4. Implementation and Analysis
4.1. Prediction in JosNet
4.1.1. Time-Based Prediction
4.1.2. Data-Based Prediction
4.2. Comparative Analysis of JosNet Interpolation/Extrapolation Time-Based Algorithm and Univariate Regression Prediction Model
Graphical Representation of Time-Based Algorithm and Univariate Regression Prediction Model
4.3. Comparative Analysis of JosNet Interpolation/Extrapolation Latency Time-Based Algorithm and Univariate Regression Prediction Model
Graphical Representation of Data-Based Algorithm and Univariate Regression Prediction Model
4.4. Discussion of Findings
For the time-based predictions (Section 4.2):
- ZB-ZB: Recorded a mean of 0.0461 s for M1 and 0.0428 s for M2, with an R-squared (R2) value of 84.3%
- BL-BL: Recorded a mean of 0.03615 s for M1 and 0.03457 s for M2, with an R-squared (R2) value of 80.65%
- TH-TH: Recorded a mean of 0.0249 s for M1 and 0.02147 s for M2, with an R-squared (R2) value of 99.95%
- ZB-BL: Recorded a mean of 0.0827 s for M1 and 0.0773 s for M2, with an R-squared (R2) value of 96%
- ZB-TH: Recorded a mean of 0.0684 s for M1 and 0.0631 s for M2, with an R-squared (R2) value of 99%
- TH-ZB: Recorded a mean of 0.0856 s for M1 and 0.0793 s for M2, with an R-squared (R2) value of 99%
For the data-based predictions (Section 4.3):
- ZB-ZB: Recorded a mean of 122.61 bytes for M1 and 129.68 bytes for M2, with an R-squared (R2) value of 90%
- BL-BL: Recorded a mean of 103.41 bytes for M1 and 115.63 bytes for M2, with an R-squared (R2) value of 69.63%
- TH-TH: Recorded a mean of 54.12 bytes for M1 and 56.79 bytes for M2, with an R-squared (R2) value of 99.61%
- ZB-BL: Recorded a mean of 97.08 bytes for M1 and 122.30 bytes for M2, with an R-squared (R2) value of 99.25%
- ZB-TH: Recorded a mean of 87.34 bytes for M1 and 103.95 bytes for M2, with an R-squared (R2) value of 84.2%
- TH-ZB: Recorded a mean of 69.00 bytes for M1 and 74.94 bytes for M2, with an R-squared (R2) value of 99%
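For context, the sketch below shows how R-squared values and the two-tailed, independent two-sample t-tests reported in Section 4.2 and Section 4.3 (df = n1 + n2 − 2) could be reproduced. It uses SciPy and placeholder prediction vectors rather than the measured JosNet values.

```python
from statistics import mean
from scipy.stats import ttest_ind  # independent two-sample t-test, two-tailed by default

# Placeholder vectors, not the measured JosNet values.
observed = [0.030, 0.034, 0.041, 0.048, 0.055, 0.061]   # measured latencies (s)
m1_pred  = [0.029, 0.035, 0.040, 0.049, 0.054, 0.062]   # JosNet interpolation/extrapolation (M1)
m2_pred  = [0.031, 0.033, 0.042, 0.047, 0.056, 0.060]   # univariate regression (M2)

def r_squared(y_true, y_pred):
    """Coefficient of determination of a set of predictions against observed values."""
    ss_res = sum((t - p) ** 2 for t, p in zip(y_true, y_pred))
    ss_tot = sum((t - mean(y_true)) ** 2 for t in y_true)
    return 1.0 - ss_res / ss_tot

print("R2 for M1:", round(r_squared(observed, m1_pred), 4))
print("R2 for M2:", round(r_squared(observed, m2_pred), 4))

# H0: the mean of the M1 predictions is not significantly different from the mean of M2.
result = ttest_ind(m1_pred, m2_pred)
decision = "accept H0" if result.pvalue > 0.05 else "reject H0"
print(f"t = {result.statistic:.3f}, p-value = {result.pvalue:.4f} -> {decision}")
```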
5. Conclusions
- RO1 and RO2: The retention and utilization of historical data are fundamental in predicting and forecasting events in network communication. In this study, we have successfully used existing multi-network live end-to-end latency data to predict future arrival times. This was achieved through the application of linear interpolation and extrapolation algorithms, resulting in enhanced network performance, increased throughput, and improved response time.
- RO3: The research reveals that prediction accuracy is higher for interpolations and diminishes as the input value deviates further from the existing data size graph segment (8–256 bytes). From a practical perspective, network nodes maintaining a regular communication packet load range are likely to receive more accurate routing predictions compared to nodes with irregular patterns or fluctuating usage.
- RO4: The research surpasses anticipated benchmarks, with most connection pairs achieving over a 95% prediction accuracy marker, while others fall within the range of 70% to 95% prediction accuracy.
- Furthermore, our analysis underscores that the application of data packet prediction and end-to-end latency time prediction for heterogeneous low-rate and low-power Wireless Sensor Networks (WSN) using a localized dataset significantly enhances network performance and minimizes end-to-end latency time.
- The research contributes to the control and management of data packets for mesh network routing, leading to improved throughput and enhanced network efficiency within WSNs.
- An additional contribution lies in the establishment of a simplified and unambiguous approach for WSN prediction. This is achieved by applying linear interpolation and extrapolation algorithms, streamlining the prediction process for enhanced efficiency.
Author Contributions
Funding
Data Availability Statement
Conflicts of Interest
References
- Ajay, P.; Nagaraj, B.; Pillai, B.M.; Suthakorn, J.; Bradha, M. Intelligent Ecofriendly Transport Management System Based on IoT in Urban Areas. Environ. Dev. Sustain. 2022. [Google Scholar] [CrossRef]
- Alghofaili, Y.; Rassam, M.A. A Trust Management Model for IoT Devices and Services Based on the Multi-Criteria Decision-Making Approach and Deep Long Short-Term Memory Technique. Sensors 2022, 22, 634. [Google Scholar] [CrossRef] [PubMed]
- Balota, J.E.; Kor, A.-L. Brokerage System for Integration of LrWPAN Technologies. Sensors 2022, 22, 1733. [Google Scholar] [CrossRef] [PubMed]
- Shihao, W.; Qinzheng, Z.; Han, Y.; Qianm, L.; Yong, Q. A Network Traffic Prediction Method Based on LSTM. ZTE Commun. 2019, 17, 19–25. [Google Scholar] [CrossRef]
- Ma, Y.; Li, L.; Yin, Z.; Chai, A.; Li, M.; Bi, Z. Research and Application of Network Status Prediction Based on BP Neural Network for Intelligent Production Line. Procedia Comput. Sci. 2021, 183, 189–196. [Google Scholar] [CrossRef]
- Alaerjan, A. Towards Sustainable Distributed Sensor Networks: An Approach for Addressing Power Limitation Issues in WSNs. Sensors 2023, 23, 975. [Google Scholar] [CrossRef]
- Jecan, E.; Pop, C.; Ratiu, O.; Puschita, E. Predictive Energy-Aware Routing Solution for Industrial IoT Evaluated on a WSN Hardware Platform. Sensors 2022, 22, 2107. [Google Scholar] [CrossRef]
- Adu-Manu, K.S.; Engmann, F.; Sarfo-Kantanka, G.; Baiden, G.E.; Dulemordzi, B.A. WSN Protocols and Security Challenges for Environmental Monitoring Applications: A Survey. J. Sens. 2022, 2022, e1628537. [Google Scholar] [CrossRef]
- Chandnani, N.; Khairnar, C.N. An Analysis of Architecture, Framework, Security and Challenging Aspects for Data Aggregation and Routing Techniques in IoT WSNs. Theor. Comput. Sci. 2022, 929, 95–113. [Google Scholar] [CrossRef]
- El-Sayed, H.; Mellouk, A.; George, L.; Zeadally, S. Quality of Service Models for Heterogeneous Networks: Overview and Challenges. Ann. Telecommun. 2008, 63, 639–668. [Google Scholar] [CrossRef]
- Bello, O.; Zeadally, S.; Badra, M. Network Layer Inter-Operation of Device-to-Device Communication Technologies in Internet of Things (IoT). Ad Hoc Netw. 2017, 57, 52–62. [Google Scholar] [CrossRef]
- Cloudflare What Is a Packet?|Network Packet Definition. Available online: https://www.cloudflare.com/learning/network-layer/what-is-a-packet/ (accessed on 16 June 2023).
- Stallings, W.; Case, T.L. Business Data Communications—Infrastructure, Networking and Security; Department of Enterprise Systems and Analytics Faculty Publications, 7th ed.; Pearson: London, UK, 2012; ISBN 978-0-273-76916-3. [Google Scholar]
- Sikos, L.F. Packet Analysis for Network Forensics: A Comprehensive Survey. Forensic Sci. Int. Digit. Investig. 2020, 32, 200892. [Google Scholar] [CrossRef]
- Mazumdar, N.; Nag, A.; Nandi, S. HDDS: Hierarchical Data Dissemination Strategy for Energy Optimization in Dynamic Wireless Sensor Network under Harsh Environments. Ad Hoc Netw. 2021, 111, 102348. [Google Scholar] [CrossRef]
- Forero, F.; da Fonseca, N.L. Distribution of Multi-Hop Latency for Probabilistic Broadcasting Protocols in Grid-Based Wireless Sensor Networks. Ad Hoc Netw. 2022, 126, 102754. [Google Scholar] [CrossRef]
- Jain, K.; Agarwal, A.; Abraham, A. A Combinational Data Prediction Model for Data Transmission Reduction in Wireless Sensor Networks. IEEE Access 2022, 10, 53468–53480. [Google Scholar] [CrossRef]
- Narayan, V.; Daniel, A.K. Energy Efficient Protocol for Lifetime Prediction of Wireless Sensor Network Using Multivariate Polynomial Regression Model. J. Sci. Ind. Res. 2022, 81, 1297–1309. [Google Scholar] [CrossRef]
- Evangelakos, E.A.; Kandris, D.; Rountos, D.; Tselikis, G.; Anastasiadis, E. Energy Sustainability in Wireless Sensor Networks: An Analytical Survey. J. Low Power Electron. Appl. 2022, 12, 65. [Google Scholar] [CrossRef]
- Yan, B.; Liu, Q.; Shen, J.; Liang, D. Flowlet-Level Multipath Routing Based on Graph Neural Network in OpenFlow-Based SDN. Future Gener. Comput. Syst. 2022, 134, 140–153. [Google Scholar] [CrossRef]
- Zhang, M.; Dong, C.; Yang, P.; Tao, T.; Wu, Q.; Quek, T.Q.S. Adaptive Routing Design for Flying Ad Hoc Networks. IEEE Commun. Lett. 2022, 26, 1438–1442. [Google Scholar] [CrossRef]
- Wang, N.; Li, B.; Yang, M.; Yan, Z.; Wang, D. Traffic Arrival Prediction for WiFi Network: A Machine Learning Approach. In Proceedings of the IoT as a Service, Xi’an, China, 19–20 November 2020; Li, B., Zheng, J., Fang, Y., Yang, M., Yan, Z., Eds.; Springer International Publishing: Cham, Switzerland, 2020; pp. 480–488. [Google Scholar] [CrossRef]
- Liu, B.; Niu, D.; Li, Z.; Zhao, H.V. Network Latency Prediction for Personal Devices: Distance-Feature Decomposition from 3D Sampling. In Proceedings of the 2015 IEEE Conference on Computer Communications (INFOCOM), Hong Kong, 26 April–1 May 2015; pp. 307–315. [Google Scholar] [CrossRef]
- Choi, S.; Shin, K.; Kim, H. End-to-End Latency Prediction for General-Topology Cut-Through Switching Networks. IEEE Access 2020, 8, 13806–13820. [Google Scholar] [CrossRef]
- Natarajan, V.A.; Kumar, M.S. Improving QoS in Wireless Sensor Network Routing Using Machine Learning Techniques. In Proceedings of the 2023 International Conference on Networking and Communications (ICNWC), Chennai, India, 5–6 April 2023; pp. 1–5. [Google Scholar] [CrossRef]
- Ge, Z.; Hou, J.; Nayak, A. GNN-Based End-to-End Delay Prediction in Software Defined Networking. In Proceedings of the 2022 18th International Conference on Distributed Computing in Sensor Systems (DCOSS), Los Angeles, CA, USA, 30 May–1 June 2022; pp. 372–378. [Google Scholar] [CrossRef]
- Larrenie, P.; Bercher, J.-F.; Venard, O.; Lahsen-Cherif, I. Low Complexity Approaches for End-to-End Latency Prediction. In Proceedings of the 2022 13th International Conference on Computing Communication and Networking Technologies (ICCCNT), Virtual, 3–5 October 2022; pp. 1–6. [Google Scholar] [CrossRef]
- Despaux, F.; Song, Y.-Q.; Lahmadi, A. Combining Analytical and Simulation Approaches for Estimating End-to-End Delay in Multi-Hop Wireless Networks. In Proceedings of the 2012 IEEE 8th International Conference on Distributed Computing in Sensor Systems, Hangzhou, China, 16–18 May 2012; pp. 317–322. [Google Scholar] [CrossRef]
- He, Y.; Sun, Y.; Yang, Y.; Li, H.; Wu, X. End-to-End Latency Bottleneck Analysis for Multi-Class Traffic in Data Center Networks. In Proceedings of the 2015 Sixth International Conference on Intelligent Systems Design and Engineering Applications (ISDEA), Guiyang, China, 18–19 August 2015; pp. 366–371. [Google Scholar] [CrossRef]
- Guo, Y.; Hu, G.; Shao, D. Multi-Path Routing Algorithm for Wireless Sensor Network Based on Semi-Supervised Learning. Sensors 2022, 22, 7691. [Google Scholar] [CrossRef] [PubMed]
- Morales, C.R.; Rangel de Sousa, F.; Brusamarello, V.; Fernandes, N.C. Evaluation of Deep Learning Methods in a Dual Prediction Scheme to Reduce Transmission Data in a WSN. Sensors 2021, 21, 7375. [Google Scholar] [CrossRef] [PubMed]
- Rizky, R.; Mustafid; Mantoro, T. Improved Performance on Wireless Sensors Network Using Multi-Channel Clustering Hierarchy. J. Sens. Actuator Netw. 2022, 11, 73. [Google Scholar] [CrossRef]
- Kocian, A.; Chessa, S. Iterative Probabilistic Performance Prediction for Multiple IoT Applications in Contention. IEEE Internet Things J. 2022, 9, 13416–13424. [Google Scholar] [CrossRef]
- Zhong, L.; Liu, R.; Miao, X.; Chen, Y.; Li, S.; Ji, H. Compressor Performance Prediction Based on the Interpolation Method and Support Vector Machine. Aerospace 2023, 10, 558. [Google Scholar] [CrossRef]
- Cranmer, S.J.; Desmarais, B.A. What Can We Learn from Predictive Modeling? Political Anal. 2017, 25, 145–166. [Google Scholar] [CrossRef]
- Joarder, A.H. Six Ways to Look at Linear Interpolation. Int. J. Math. Educ. Sci. Technol. 2001, 32, 932–937. [Google Scholar] [CrossRef]
- Lepot, M.; Aubin, J.-B.; Clemens, F.H.L.R. Interpolation in Time Series: An Introductive Overview of Existing Methods, Their Performance Criteria and Uncertainty Assessment. Water 2017, 9, 796. [Google Scholar] [CrossRef]
- Wmcnamara Linear Interpolation Explained. Available online: https://gamedev.net/tutorials/programming/general-and-gameplay-programming/linear-interpolation-explained-r5892 (accessed on 3 October 2023).
- Said, N.; Fischer, H. Extrapolation Accuracy Underestimates Rule Learning: Evidence from the Function-Learning Paradigm. Acta Psychol. 2021, 218, 103356. [Google Scholar] [CrossRef]
- Chapman, T.; Larsson, E.; von Wrycza, P.; Dahlman, E.; Parkvall, S.; Sköld, J. Chapter 9—High-Speed Uplink Packet Access. In HSPA Evolution; Chapman, T., Larsson, E., von Wrycza, P., Dahlman, E., Parkvall, S., Sköld, J., Eds.; Academic Press: Oxford, UK, 2015; pp. 163–218. ISBN 978-0-08-099969-2. [Google Scholar]
- CK-12 Foundation FlexBooks. Linear Interpolation and Extrapolation|CK-12 Foundation. 2022. Available online: https://flexbooks.ck12.org/cbook/ck-12-conceptos-de-álgebra-nivel-básico-en-español/section/5.11/related/lesson/linear-interpolation-and-extrapolation-bsc-alg/ (accessed on 2 October 2023).
- x-engineer Linear Interpolation and Extrapolation with Calculator—X-Engineer.Org 2023. Available online: https://x-engineer.org/linear-interpolation-extrapolation-calculator/ (accessed on 2 October 2023).
- Schneider, A.; Hommel, G.; Blettner, M. Linear Regression Analysis. Dtsch. Arztebl. Int. 2010, 107, 776–782. [Google Scholar] [CrossRef]
- Kanade, V. What Is Linear Regression?—Spiceworks. Spiceworks 2023. Available online: https://www.spiceworks.com/tech/artificial-intelligence/articles/what-is-linear-regression/ (accessed on 3 October 2023).
- Prabhakaran, S. Linear Regression with R. Available online: http://r-statistics.co/Linear-Regression.html (accessed on 3 October 2023).
- Souza, F.; Araújo, R. Online Mixture of Univariate Linear Regression Models for Adaptive Soft Sensors. IEEE Trans. Ind. Inform. 2014, 10, 937–945. [Google Scholar] [CrossRef]
- Uyanık, G.K.; Güler, N. A Study on Multiple Linear Regression Analysis. Procedia—Soc. Behav. Sci. 2013, 106, 234–240. [Google Scholar] [CrossRef]
- Acharige, D.; Johlin, E. Machine Learning in Interpolation and Extrapolation for Nanophotonic Inverse Design. ACS Omega 2022, 7, 33537–33547. [Google Scholar] [CrossRef] [PubMed]
- An, J.; Wang, Z.-O.; Yang, Q.; Ma, Z. A SVM Function Approximation Approach with Good Performances in Interpolation and Extrapolation. In Proceedings of the 2005 International Conference on Machine Learning and Cybernetics, Guangzhou, China, 8–21 August 2005; Volume 3, pp. 1648–1653. [Google Scholar] [CrossRef]
- Cao, X.; Yousefzadeh, R. Extrapolation and AI Transparency: Why Machine Learning Models Should Reveal When They Make Decisions beyond Their Training. Big Data Soc. 2023, 10, 20539517231169731. [Google Scholar] [CrossRef]
- Bhattacherjee, A. Experimental Research. In Social Science Research: Principles, Methods and Practices (Revised Edition); University of Southern Queensland: Toowoomba, Australia, 2019; pp. 80–89. Available online: https://usq.pressbooks.pub/socialscienceresearch/chapter/chapter-10-experimental-research/ (accessed on 18 October 2023).
- Cash, P.; Stanković, T.; Štorga, M. An Introduction to Experimental Design Research. In Experimental Design Research: Approaches, Perspectives, Applications; Cash, P., Stanković, T., Štorga, M., Eds.; Springer International Publishing: Cham, Switzerland, 2016; pp. 3–12. ISBN 978-3-319-33781-4. [Google Scholar]
- Kipping, M.; Wadhwani, R.D.; Bucheli, M. Analyzing and Interpreting Historical Sources: A Basic Methodology. In Organizations in Time: History, Theory, Methods; Bucheli, M., Wadhwani, R.D., Eds.; Oxford University Press: Oxford, UK, 2013; ISBN 978-0-19-964689-0. [Google Scholar]
- Kuckartz, U. Qualitative Text Analysis: A Systematic Approach. In Compendium for Early Career Researchers in Mathematics Education; Kaiser, G., Presmeg, N., Eds.; ICME-13 Monographs; Springer International Publishing: Cham, Switzerland, 2019; pp. 181–197. ISBN 978-3-030-15636-7. [Google Scholar]
- Alazzawi, L.; Elkateeb, A. Performance Evaluation of the WSN Routing Protocols Scalability. J. Comput. Netw. Commun. 2009, 2008, e481046. [Google Scholar] [CrossRef]
- Waveshare NRF51822 Eval Kit BLE4.0 Bluetooth 2.4 G Development/Evaluation Kit Designed for nRF51822. Available online: https://www.waveshare.com/product/iot-communication/short-range-wireless/bluetooth/nrf51822-eval-kit.htm (accessed on 21 September 2021).
- Makerdiary nRF52840-MDK IoT Development Kit. Available online: https://makerdiary.com/products/nrf52840-mdk-iot-development-kit (accessed on 15 August 2020).
- Centero Tech WiHaRT WirelessHART™ Development Kit|Centero. Available online: https://centerotech.com/product/wihart-wirelesshart-development-kit/ (accessed on 4 October 2023).
- Singhal, S.; Zyda, M. Networked Virtual Environments: Design and Implementation; SIGGRAPH Series; Addison-Wesley: Reading, MA, USA, 1999; ISBN 978-0-201-32557-7. [Google Scholar]
- Smed, J.; Kaukoranta, T.; Hakonen, H. Aspects of Networking in Multiplayer Computer Games. In Proceedings of the Proceedings of the International Conference on Application and Development of Computer Games in the 21st Century (ADCOG), Hong Kong, 1 November 2001; pp. 74–81. Available online: https://www.researchgate.net/publication/269251176_Aspects_of_Networking_in_Multiplayer_Computer_Games (accessed on 2 February 2023).
- Lu, H.; Ma, X.; Ma, M.; Zhu, S. Energy Price Prediction Using Data-Driven Models: A Decade Review. Comput. Sci. Rev. 2021, 39, 100356. [Google Scholar] [CrossRef]
- Wynants, L.; Bouwmeester, W.; Moons, K.G.M.; Moerbeek, M.; Timmerman, D.; Van Huffel, S.; Van Calster, B.; Vergouwe, Y. A Simulation Study of Sample Size Demonstrated the Importance of the Number of Events per Variable to Develop Prediction Models in Clustered Data. J. Clin. Epidemiol. 2015, 68, 1406–1414. [Google Scholar] [CrossRef]
- Chugh, A. MAE, MSE, RMSE, Coefficient of Determination, Adjusted R Squared—Which Metric Is Better? Analytics Vidhya 2022. Available online: https://becominghuman.ai/univariate-linear-regression-clearly-explained-with-example-4164e83ca2ee (accessed on 3 October 2023).
- Yang, X.; Zheng, Y.; Zhang, Y.; Wong, D.S.-H.; Yang, W. Bearing Remaining Useful Life Prediction Based on Regression Shapalet and Graph Neural Network. IEEE Trans. Instrum. Meas. 2022, 71, 1–12. [Google Scholar] [CrossRef]
Connection Pairs | Time-Based Univariate Regression Equation |
---|---|
ZB-ZB | D + 0.0343 |
BL-BL | D + 0.0343 |
TH-TH | D + 0.0098 |
ZB-BL | D + 0.0098 |
ZB-TH | D + 0.0490 |
TH-ZB | D + 0.0490 |
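Regression equations of the form shown in the table above can be obtained by an ordinary least-squares fit of the averaged (packet size, latency) observations for a connection pair. The sketch below uses NumPy with placeholder values, not the JosNet dataset behind these equations.

```python
import numpy as np

# Placeholder averaged observations for one connection pair (not the JosNet dataset).
sizes = np.array([8, 16, 32, 64, 128, 256], dtype=float)           # D, bytes
latencies = np.array([0.035, 0.036, 0.037, 0.042, 0.048, 0.060])   # T, seconds

# Least-squares fit of a first-degree polynomial: T = slope * D + intercept.
slope, intercept = np.polyfit(sizes, latencies, deg=1)
print(f"T = {slope:.6f} * D + {intercept:.4f}")

# The fitted line can then answer time-based prediction queries, e.g. D = 300 bytes.
print(f"Predicted latency for 300 bytes: {slope * 300 + intercept:.4f} s")
```

The data-based equations in the later table are obtained the same way, with the roles of D and T swapped.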
Connection Pair | Graph Segment | Data Packet Size Inputs, D (Bytes) | Predicted Average End-to-End Latency, T (s) | Values | Conclusion for a 2-Tailed t-Test (α = 0.05) | |
---|---|---|---|---|---|---|
JosNet Interpolation/ Extrapolation Algorithm (M1) | Univariate Regression Model (M2) | |||||
ZB-ZB | <8 | 5 | 0.0279 | 0.0345 | n1 = 12, n2 = 12, M1 mean = 0.0461 M2 mean = 0.0428 R2 for M1 = 0.843 R2 for M2 = 1 df = 22 | p-value = 0.5055 Accept Ho mean for M1 is not significantly different from the mean for M2 |
8–16 | 10 | 0.0278 | 0.0355 | |||
8–16 | 15 | 0.0296 | 0.0360 | |||
16–32 | 20 | 0.0333 | 0.0365 | |||
16–32 | 30 | 0.0414 | 0.0375 | |||
32–64 | 40 | 0.0453 | 0.0385 | |||
32–64 | 50 | 0.0481 | 0.0395 | |||
64–128 | 80 | 0.0533 | 0.0425 | |||
64–128 | 100 | 0.0548 | 0.0445 | |||
128–256 | 140 | 0.0574 | 0.0485 | |||
128–256 | 200 | 0.0593 | 0.0545 | |||
>256 | 300 | 0.0749 | 0.0645 | |||
(a) | ||||||
Connection Pair | Graph Segment | Data Packet Size Inputs, D (Bytes) | Predicted Average End-to-End Latency, T (s) | Values | Conclusion for a 2-tailed t-test (α = 0.05) | |
JosNet Interpolation/ Extrapolation Algorithm (M1) | Univariate Regression Model (M2) | |||||
BL-BL | <8 | 5 | 0.0208 | 0.0270 | n1 = 12, n2 = 12, M1 mean = 0.03615 M2 mean = 0.03475 R2 for M1 = 0.8065 R2 for M2 = 1 df = 22 | p-value = 0.9019 Accept Ho mean for M1 is not significantly different from the mean for M2 |
8–16 | 10 | 0.0203 | 0.0275 | |||
8–16 | 15 | 0.0209 | 0.0280 | |||
16–32 | 20 | 0.0248 | 0.0285 | |||
16–32 | 30 | 0.0341 | 0.0295 | |||
32–64 | 40 | 0.0370 | 0.0305 | |||
32–64 | 50 | 0.0383 | 0.0315 | |||
64–128 | 80 | 0.0415 | 0.0345 | |||
64–128 | 100 | 0.0434 | 0.0365 | |||
128–256 | 140 | 0.0462 | 0.0405 | |||
128–256 | 200 | 0.0471 | 0.0465 | |||
>256 | 300 | 0.0594 | 0.0565 | |||
(b) | ||||||
Connection Pair | Graph Segment | Data Packet Size Inputs, D (Bytes) | Predicted Average End-to-End Latency time, T (s) | Values | Conclusion for a 2-tailed t-test (α = 0.05) | |
JosNet Interpolation/ Extrapolation Algorithm (M1) | Univariate Regression Model (M2) | |||||
TH-TH | <8 | 5 | 0.0116 | 0.0108 | n1 = 10, n2 = 10, M1 mean = 0.0249 M2 mean = 0.02147 R2 for M1 = 0.9995 R2 forM2 = 0.9886 df = 18 | p-value = 0.9293 Accept Ho mean for M1 is not significantly different from the mean for M2 |
8–16 | 10 | 0.0115 | 0.0118 | |||
8–16 | 15 | 0.0128 | 0.0128 | |||
16–32 | 20 | 0.0138 | 0.0138 | |||
16–32 | 30 | 0.0156 | 0.0158 | |||
32–64 | 40 | 0.0175 | 0.0178 | |||
32–64 | 50 | 0.0194 | 0.0198 | |||
64–128 | 80 | 0.0250 | 0.0258 | |||
64–128 | 100 | 0.0288 | 0.0298 | |||
>128 | 300 | 0.0797 | 0.0698 | |||
(c) | ||||||
Connection Pair | Graph Segment | Data Packet Size Inputs, D (Bytes) | Predicted Average End-to-End Latency time, T (s) | Values | Conclusion for a 2-tailed t-test (α = 0.05) | |
JosNet Interpolation/ Extrapolation Algorithm (M1) | Univariate Regression Model (M2) | |||||
ZB-BL | <8 | 5 | 0.0372 | 0.0463 | n1 = 12, n2 = 12, M1 mean = 0.0827 M2 mean = 0.0773 R2 for M1 = 0.9589 R2 for M2 = 1 df = 22 | p-value = 0.7452 Accept Ho mean for M1 is not significantly different from the mean for M2 |
8–16 | 10 | 0.0365 | 0.0483 | |||
8–16 | 15 | 0.0403 | 0.0503 | |||
16–32 | 20 | 0.0473 | 0.0523 | |||
16–32 | 30 | 0.0629 | 0.0563 | |||
32–64 | 40 | 0.0708 | 0.0603 | |||
32–64 | 50 | 0.0767 | 0.0643 | |||
64–128 | 80 | 0.0920 | 0.0763 | |||
64–128 | 100 | 0.1008 | 0.0843 | |||
128–256 | 140 | 0.1163 | 0.1003 | |||
128–256 | 200 | 0.1327 | 0.1243 | |||
>256 | 300 | 0.1789 | 0.1643 | |||
(d) | ||||||
Connection Pair | Graph Segment | Data Packet Size Inputs, D (Bytes) | Predicted Average End-to-End Latency time, T (s) | Values | Conclusion for a 2-tailed t-test (α = 0.05) | |
JosNet Interpolation/ Extrapolation Algorithm (M1) | Univariate Regression Model (M2) | |||||
ZB-TH | <8 | 5 | 0.0415 | 0.0451 | n1 = 10, n2 =10, M1 mean = 0.0684 M2 mean = 0.0631 R2 for M1 = 0.9865 R2 for M2 = 1 df = 18 | p-value = 0.7181 Accept Ho mean for M1 is not significantly different from the mean for M2 |
8–16 | 10 | 0.0410 | 0.0466 | |||
8–16 | 15 | 0.0435 | 0.0481 | |||
16–32 | 20 | 0.0480 | 0.0496 | |||
16–32 | 30 | 0.0580 | 0.0526 | |||
32–64 | 40 | 0.0638 | 0.0556 | |||
32–64 | 50 | 0.0684 | 0.0586 | |||
64–128 | 80 | 0.0760 | 0.0676 | |||
64–128 | 100 | 0.0773 | 0.0736 | |||
>128 | 300 | 0.1663 | 0.1336 | |||
(e) | ||||||
Connection Pair | Graph Segment | Data Packet Size Inputs, D (Bytes) | Predicted Average End-to-End Latency time, T (s) | Values | Conclusion for a 2-tailed t-test (α = 0.05) | |
JosNet Interpolation/ Extrapolation Algorithm (M1) | Univariate Regression Model (M2) | |||||
TH-ZB | <8 | 5 | 0.0510 | 0.0553 | n1 =10, n2 =10, M1 mean = 0.0853 M2 mean = 0.0793 R2 for M1 = 0.9889 R2 for M2 = 1 df = 18 | p-value = 0.7581 Accept Ho mean for M1 is not significantly different from the mean for M2 |
8–16 | 10 | 0.0508 | 0.0573 | |||
8–16 | 15 | 0.0551 | 0.0593 | |||
16–32 | 20 | 0.0608 | 0.0613 | |||
16–32 | 30 | 0.0726 | 0.0653 | |||
32–64 | 40 | 0.0783 | 0.0693 | |||
32–64 | 50 | 0.0823 | 0.0733 | |||
64–128 | 80 | 0.0913 | 0.0853 | |||
64–128 | 100 | 0.0953 | 0.0933 | |||
>128 | 300 | 0.2159 | 0.1733 | |||
(f) |
Connection Pair | Data-Based Univariate Regression Equation |
---|---|
ZB-ZB | T − 165.42 |
BL-BL | T − 136.83 |
TH-TH | T − 183.00 |
ZB-BL | T − 105.85 |
ZB-TH | T − 52.631 |
TH-ZB | T − 86.610 |
Connection Pair | Graph Segment | End-to-End Latency Time Inputs, T (s) | Predicted Data Packet Size, D (Bytes) | Values | Conclusion for a 2-Tail t-Test (α = 0.05) | |
---|---|---|---|---|---|---|
JosNet Interpolation/ Extrapolation Algorithm (M1) | Univariate Regression Model (M2) | |||||
ZB-ZB | <8 | 0.020 | 6 | - | n1=12, n2=9, M1 mean = 122.61 M2 mean = 129.68 R2 for M1 = 0.9005 R2 for M2 = 1 df = 16 | p-value = 0.4016 Accept Ho mean for M1 is not significantly different from the mean for M2 |
8–16 | 0.028 | 11 | - | |||
8–16 | 0.0295 | 15 | - | |||
16–32 | 0.035 | 22.15 | 28.641 | |||
16–32 | 0.040 | 28.31 | 56.364 | |||
32–64 | 0.045 | 39.11 | 84.087 | |||
32–64 | 0.050 | 56.89 | 111.81 | |||
64–128 | 0.054 | 89.6 | 133.9884 | |||
64–128 | 0.056 | 115.2 | 145.0776 | |||
128–256 | 0.059 | 192 | 161.7114 | |||
128–256 | 0.060 | 224 | 167.256 | |||
>256 | 0.080 | 336.2 | 278.148 | |||
(a) | ||||||
Connection Pair | Graph Segment | End-to-End Latency Time Inputs, T (s) | Predicted Data Packet Size, D (Bytes) | Values | Conclusion for a 2-tail t-test (α = 0.05) | |
JosNet Interpolation/ Extrapolation Algorithm (M1) | Univariate Regression Model (M2) | |||||
BL-BL | <8 | 0.013 | 5.00 | - | n1=12, n2=9, M1 mean = 103.41 M2 mean = 115.63 R2 for M1 = 0.6924 R2 for M2 = 0.9932 df = 16 | p-value = 0.2674 Accept Ho mean for M1 is not significantly different from the mean for M2 |
8–16 | 0.0205 | 12.00 | - | |||
8–16 | 0.0211 | 16.00 | - | |||
16–32 | 0.0233 | 18.00 | 9.692 | |||
16–32 | 0.034 | 30.00 | 76.979 | |||
32–64 | 0.038 | 48.00 | 102.133 | |||
32–64 | 0.039 | 56.00 | 108.422 | |||
64–128 | 0.042 | 85.33 | 114.71 | |||
64–128 | 0.044 | 106.67 | 139.864 | |||
128–256 | 0.046 | 128.00 | 152.441 | |||
128–256 | 0.047 | 192.00 | 158.795 | |||
>256 | 0.050 | 266.71 | 177.595 | |||
(b) | ||||||
Connection Pair | Graph Segment | End-to-End Latency Time Inputs, T (s) | Predicted Data Packet Size, D (Bytes) | Values | Conclusion for a 2-tail t-test (α = 0.05) | |
JosNet Interpolation/ Extrapolation Algorithm (M1) | Univariate Regression Model (M2) | |||||
TH-TH | <8 | 0.010 | 7.08 | 1.005 | n1 = 10, n2 = 10, M1 mean = 54.12 M2 mean = 56.79 R2 for M1 = 0.9961 R2 for M2 = 1 df = 18 | p-value = 0.9235 Accept Ho mean for M1 is not significantly different from the mean for M2 |
8–16 | 0.011 | 8.00 | 6.369 | |||
8–16 | 0.012 | 12.00 | 11.732 | |||
16–32 | 0.014 | 21.33 | 22.459 | |||
16–32 | 0.015 | 26.67 | 27.823 | |||
32–64 | 0.018 | 42.67 | 43.914 | |||
32–64 | 0.020 | 53.33 | 54.641 | |||
64–128 | 0.024 | 74.67 | 76.095 | |||
64–128 | 0.030 | 106.67 | 108.277 | |||
>128 | 0.050 | 188.77 | 215.549 | |||
(c) | ||||||
Connection Pair | Graph Segment | End-to-End Latency Time Inputs, T (s) | Predicted Data Packet Size, D (Bytes) | Values | Conclusion for a 2-tail t-test (α = 0.05) | |
JosNet Interpolation/ Extrapolation Algorithm (M1) | Univariate Regression Model (M2) | |||||
ZB-BL | <8 | 0.018 | 4.00 | - | n1 = 12, n2 = 9, M1 mean = 97.08 M2 mean = 122.30 R2 for M1 = 0.9925 R2 for M2 = 1 df = 19 | p-value = 0.9133 Accept Ho mean for M1 is not significantly different from the mean for M2 |
8–16 | 0.036 | 9.00 | - | |||
8–16 | 0.040 | 15.00 | - | |||
16–32 | 0.0462 | 19.00 | 10.422 | |||
16–32 | 0.061 | 29.00 | 41.506 | |||
32–64 | 0.068 | 35.37 | 56.208 | |||
32–64 | 0.080 | 55.58 | 81.412 | |||
64–128 | 0.090 | 75.43 | 102.415 | |||
64–128 | 0.100 | 98.29 | 123.418 | |||
128–256 | 0.120 | 136.60 | 165.424 | |||
128–256 | 0.130 | 147.17 | 186.427 | |||
>256 | 0.200 | 277.26 | 333.448 | |||
(d) | ||||||
Connection Pair | Graph Segment | End-to-End Latency Time Inputs, T (s) | Predicted Data Packet Size, D (Bytes) | Values | Conclusion for a 2-tail t-test (α = 0.05) | |
JosNet Interpolation/ Extrapolation Algorithm (M1) | Univariate Regression Model (M2) | |||||
ZB-TH | <8 | 0.0250 | 5.00 | - | n1 = 10, n2 = 7, M1 mean = 87.34 M2 mean = 103.95 R2 for M1 = 0.8420 R2 for M2 = 1 df = 12 | p-value = 0.3328 Accept Ho mean for M1 is not significantly different from the mean for M2 |
8–16 | 0.0438 | 15.60 | - | |||
8–16 | 0.0439 | 15.80 | - | |||
16–32 | 0.0480 | 20.00 | 14.98 | |||
16–32 | 0.0500 | 22.00 | 23.23 | |||
32–64 | 0.0650 | 42.67 | 85.099 | |||
32–64 | 0.0700 | 53.33 | 105.722 | |||
64–128 | 0.0760 | 80.00 | 130.469 | |||
64–128 | 0.0780 | 112.00 | 138.719 | |||
>128 | 0.1000 | 281.35 | 229.46 | |||
(e) | ||||||
Connection Pair | Graph Segment | End-to-End Latency Time Inputs, T (s) | Predicted Data Packet Size, D (Bytes) | Values | Conclusion for a 2-tail t-test (α = 0.05) | |
JosNet Interpolation/ Extrapolation Algorithm (M1) | Univariate Regression Model (M2) | |||||
TH-ZB | <8 | 0.031 | 5.00 | - | n1 = 10, n2 = 9, M1 mean = 69.00 M2 mean = 77.94 R2 for M1 = 0.9908 R2 for M2 = 1 df = 16 | p-value = 0.7084 Accept Ho mean for M1 is not significantly different from the mean for M2 |
8–16 | 0.054 | 13.71 | 8.079 | |||
8–16 | 0.055 | 14.86 | 10.189 | |||
16–32 | 0.060 | 19.37 | 20.738 | |||
16–32 | 0.065 | 23.58 | 31.287 | |||
32–64 | 0.080 | 44.31 | 62.934 | |||
32–64 | 0.085 | 56.62 | 73.483 | |||
64–128 | 0.090 | 73.85 | 84.032 | |||
64–128 | 0.095 | 98.46 | 94.581 | |||
>128 | 0.200 | 276.21 | 316.11 | |||
(f) |