The simulations compare the proposed methods under the following evaluations. The testing and evaluation experiments were performed in MATLAB 2020 to assess the validity and effectiveness of the proposed methods, as follows.
8.1. Performance of Swarm Algorithms
The swarm intelligence algorithms were evaluated for CH selection in an IoUT network with 50 underwater nodes over 200, 400, 600, 800, and 1000 simulated iterations. The data shown in Table 3 provide a comparison of how each optimization approach performs regarding CH selection.
In the case of 200 iterations, the PSO algorithm exhibits the highest mean performance and stability, making it a strong contender for CH selection in IoUT networks. The ACO algorithm selects a sizable number of CHs but performs with a lower mean and higher variability. The ABC and GA algorithms provide balanced performance with reasonable CH selection ratios; ABC in particular combines stability with a fair CH selection ratio. Although ABC also performs well, PSO's higher mean performance and lower standard deviation make it more appropriate for the CH selection optimization task in IoUT networks with 50 nodes and 200 iterations.
In the case of 400 iterations, the ABC algorithm maintains its strong mean performance and stability as the iteration count increases, and it continues to select a respectable number of CHs. The ACO algorithm remains reliable and keeps a high CH selection ratio, choosing the most CHs of all the algorithms. With the highest mean performance and smallest standard deviation, the PSO algorithm remains effective in CH selection while selecting a large number of CHs. The GA method maintains its average mean performance but displays more variability, and it also chooses a sizable number of CHs. PSO remains a strong candidate when energy stability and efficiency are the priority.
ACO is appropriate when increasing the number of CHs is crucial, and GA may be utilized in situations where a greater number of CHs is needed, while ABC offers a balanced approach. ABC keeps its mean performance high and its standard deviation low, demonstrating both stability and effectiveness in CH selection. With a CH selection ratio of 23%, it chooses a manageable number of CHs, which helps strike a balance between network coverage and energy efficiency. Across multiple iterations, ABC performs consistently well, showing its dependability in CH selection. ABC is therefore a good option when a compromise between CH selection efficiency, stability, and a moderate number of CHs is required, since it provides stable and consistent performance while regulating cluster sizes.
In the case of 600 iterations, the PSO algorithm shows the highest mean performance and steady behavior, and it keeps a moderate CH selection ratio, which is compatible with controlled cluster sizes and energy efficiency. ABC exhibits consistent performance with a moderate mean and selects a small number of CHs to maximize energy efficiency and manage cluster sizes. The ACO and GA algorithms continue to choose many CHs, which may lead to smaller clusters but more intra-cluster communication. ABC consistently picks only about 18% of nodes as CHs, which lowers the danger of cluster congestion by avoiding extensive intra-cluster communication and limiting cluster sizes. Despite its cautious CH choice, ABC retains a good mean performance of 0.81, showing that it successfully chooses CHs while striking a decent compromise between network performance and cluster size control. ABC stands out for its cautious CH selection, balanced performance, stability, and dependability when the network must regulate cluster sizes and avoid communication difficulties, making it ideal for situations where managing cluster sizes is a crucial factor.
In the case of 800 iterations, ABC retains a high mean performance, demonstrating persistent effectiveness. ACO remains very stable, although its mean performance is lower than that of the other algorithms, and GA continues to perform at a moderate mean level. The PSO algorithm emerges as the top performer with the highest mean performance and the steadiest behavior; it also keeps a cautious CH selection ratio of 15.98%, which aligns with energy efficiency and limited cluster sizes. ABC exhibits consistent performance with an average mean, and its controlled cluster sizes and moderate 22% CH selection ratio make it good for energy efficiency. The ACO and GA algorithms continue to choose many CHs, which risks smaller clusters and more intra-cluster communication and may conflict with the goal of preventing intense intra-cluster traffic. According to these data, PSO and ABC remain excellent choices for CH selection optimization: PSO excels in mean performance, whereas ABC strikes a balance between performance and prudent CH selection.
In the case of 1000 iterations, PSO performs best by consistently choosing a small number of CHs with a 15.98% selection ratio, in line with energy efficiency and managed cluster sizes. ABC delivers consistent performance with a reasonable mean; it chooses a moderate number of CHs with a 20% selection ratio, balancing network performance, cluster size control, and reliability. ACO continues to select a reasonable number of CHs but performs at a lower mean level and with more variability than PSO and ABC. GA chooses many CHs, with a 44% selection ratio, which can result in smaller clusters and more communication within them.
Overall, PSO and ABC both remain excellent choices for CH selection. PSO performs best at low iteration counts, while ABC balances performance and efficient CH selection. Although PSO gives better average performance and conservative CH selection at intermediate iteration counts, the ABC approach is more moderate and performs better at higher iteration counts in scenarios where the trade-off between performance and cluster size control is important.
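The comparison metrics used above (mean performance, standard deviation, and CH selection ratio) can be computed with a short sketch. The fitness values and CH count below are illustrative placeholders, not the paper's measurements; only the metric definitions follow the text.

```python
import statistics

def summarize(fitness_per_run, chs_selected, total_nodes):
    """Summary metrics used to compare CH-selection algorithms:
    mean fitness, its standard deviation, and the CH selection ratio
    (number of selected CHs over the total number of underwater nodes)."""
    return {
        "mean": statistics.mean(fitness_per_run),
        "std": statistics.stdev(fitness_per_run),
        "ch_ratio": chs_selected / total_nodes,
    }

# Illustrative placeholder values (not the paper's data):
# 9 CHs out of 50 nodes gives the ~18% ratio discussed for ABC.
abc = summarize([0.79, 0.81, 0.83, 0.81], chs_selected=9, total_nodes=50)
print(abc["ch_ratio"])  # 0.18
```

A lower `std` with a comparable `mean` is what the text treats as "stability", and a lower `ch_ratio` as "cautious" CH selection.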
Figure 4 shows the performance of the swarm intelligence algorithms in terms of the number of CH selections as the number of underwater nodes increases from 50 to 250. When managing cluster sizes, ABC frequently beats the other methods: it tends to choose fewer CHs than PSO. This cautious CH selection fits well with the goal of preventing intense intra-cluster communication, since smaller clusters reduce the risk of congestion within them, making ABC preferable for this particular requirement. PSO may outperform ABC in mean performance, but ABC consistently achieves a modest mean performance that is frequently adequate for real-world network operations. ABC focuses on striking a balance between controlled cluster sizes and network performance, aiming for a stable and reliable network; this balance can be more aligned with the requirements than PSO's potentially higher but less controlled performance.
Additionally, ABC is known for being reliable and performing well across iterations. Network stability is essential for communication to remain reliable in unpredictable underwater environments, and ABC contributes to this stability. By maintaining a reasonable minimum performance level, ABC keeps the network operational even in worst-case scenarios. This dependability is crucial for mission-critical applications, since network outages or performance deterioration can have serious consequences.
8.2. Performance of Improved ABC-QL Algorithm
This section discusses the performance evaluation of the ABC algorithm improved by Q-learning compared to the classical ABC algorithm. The analysis extracts the numerical performance of the proposed algorithm together with other metrics, such as the number of live and dead nodes, the number of best CH selections, and the total energy consumption. All underwater nodes are assumed to be stationary or subject only to slight movement caused by the vagaries of the underwater environment. The results were obtained in two scenarios: the first is a general evaluation of the proposed algorithm, and the second evaluates the density of the underwater nodes and its impact on the performance of the proposed algorithm.
8.2.1. Numerical and General Case Evaluation
We evaluate the proposed algorithm with 80 underwater nodes over iterations ranging from 200 to 1000. The underwater nodes are randomly distributed in an underwater environment with a maximum depth of 50 m. The packet size is 1024 bits, and the distance between nodes is 50 m. The fractions of residual energy levels are set to 0.5, 0.7, 0.8, and 0.9, with 0.5 the lowest and 0.9 the highest level. Under these simulation settings, the results are reported in Table 4, and Figure 5, Figure 6 and Figure 7 show the ABC algorithm improved by Q-learning optimization compared to conventional ABC, as follows.
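For reference, the stated simulation settings can be collected in one place. This is only a restatement of the parameters above as a sketch; the field names are our own, not identifiers from the paper's MATLAB code.

```python
# Simulation settings as stated in the text; key names are illustrative.
SIM_SETTINGS = {
    "num_nodes": 80,                    # underwater nodes, randomly deployed
    "max_depth_m": 50,                  # maximum deployment depth in meters
    "packet_size_bits": 1024,
    "node_distance_m": 50,              # distance between nodes
    "residual_energy_fractions": [0.5, 0.7, 0.8, 0.9],  # 0.5 lowest, 0.9 highest
    "iterations": [200, 400, 600, 800, 1000],
}

print(SIM_SETTINGS["num_nodes"])  # 80
```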
As shown in Table 4, in terms of standard deviation and minimum and mean fitness values, both classical ABC and improved ABC-QL achieved close minimum fitness values of around 0.03 to 0.05, with improved ABC-QL slightly higher than conventional ABC at low (200) and high (1000) iterations. This means that improved ABC-QL can find solutions with better fitness, indicating a potential advantage in selecting CHs that improve network performance. The mean fitness values for conventional ABC and improved ABC-QL are quite close, indicating that, on average, both algorithms perform similarly in selecting CHs with reasonable fitness values. However, at the higher iteration counts of 600, 800, and 1000, improved ABC-QL has slightly higher mean fitness values of 53, 55, and 49 compared to conventional ABC values of 50, 53, and 48, respectively. This indicates that, on average, improved ABC-QL tends to select CHs with slightly better fitness values, potentially improving network performance. For standard deviation, improved ABC-QL exhibits slightly lower values than conventional ABC, meaning its performance in the quality of selected CHs is more consistent.
Considering the CH selection ratios in Table 4 and the number of best selected CHs in Figure 5, the results show that improved ABC-QL selects a higher number of best CHs than conventional ABC at low iterations. ABC-QL selects 18% of the total number of underwater nodes to serve as CHs according to the adjusted selection parameters. This ratio remains low at higher iterations, thanks to optimization in selecting CHs with a stable cluster size: the best CH selection ratio decreases to 13.5% at high iterations, which means the algorithm has been fine-tuned to find the best balance between exploration and exploitation.
Q-learning enables the adjustment of the exploration and exploitation parameters based on the underwater nodes' state. It also helps the algorithm explore a broader solution space early in the process to identify potential CHs, while exploiting promising solutions and converging quickly. This implies that improved ABC-QL may offer better coverage or more efficient CH selection, potentially improving network performance. According to these metrics, improved ABC-QL maintains a better (lower) average best selection cost than conventional ABC, which benefits energy-efficient CH selection.
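The paper does not give the exact update rule, but the exploration/exploitation behavior described above matches standard Q-learning with an epsilon-greedy policy, which can be sketched as follows. The two-action encoding, the state representation, and all parameter values here are assumptions for illustration, not the paper's implementation.

```python
import random

# Hypothetical two-action space: 0 = keep current CH, 1 = promote candidate node.
ACTIONS = (0, 1)

def q_update(Q, state, action, reward, next_state, alpha=0.1, gamma=0.9):
    """Standard Q-learning update:
    Q(s,a) <- Q(s,a) + alpha * (r + gamma * max_a' Q(s',a') - Q(s,a))."""
    best_next = max(Q.get((next_state, a), 0.0) for a in ACTIONS)
    old = Q.get((state, action), 0.0)
    Q[(state, action)] = old + alpha * (reward + gamma * best_next - old)

def choose_action(Q, state, epsilon):
    """Epsilon-greedy policy: with probability epsilon explore a random
    action (broad search early on); otherwise exploit the best-known action."""
    if random.random() < epsilon:
        return random.choice(ACTIONS)
    return max(ACTIONS, key=lambda a: Q.get((state, a), 0.0))
```

Decaying `epsilon` over iterations (e.g. starting near 0.9 and shrinking toward 0.05) reproduces the described shift from broad exploration early on to quick convergence on promising CHs later; the decay schedule is likewise an assumption.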
Figure 6 displays the efficiency of the improved ABC-QL algorithm in terms of exhausted (dead) underwater nodes. The results show that improved ABC-QL outperforms conventional ABC by having significantly fewer average exhausted (avg. exh.) nodes in the initial 200 and 400 iterations, namely 18.5% fewer. This demonstrates that improved ABC-QL initially attained a more stable network setup. While improved ABC-QL continued to improve slightly, conventional ABC performed marginally better, with fewer dead nodes, through iterations up to 600; even so, despite the low numbers of dead nodes, there remained a clear disparity in how well the two methods performed. Improved ABC-QL reduced the number of exhausted nodes at iterations 800 and 1000 by 27.3%.
These results show that improved ABC-QL performed better in terms of the number of exhausted nodes in the initial iterations, demonstrating its capacity to construct a more stable network configuration. It also offers superior network stability and resource utilization for CH selection optimization in IoUT applications, especially in the early and later iterations. Improved ABC-QL thus gives clear advantages in configuring a stable IoUT network: it focuses on the early iterations by adjusting algorithm parameters, incorporating heuristics, and prioritizing stability in the initial stages, and it can adapt over time by dynamically re-adjusting parameters so that its advantages are maintained and built upon as the optimization progresses.
The energy consumption of the proposed algorithm is reviewed in Figure 7. The results demonstrate that the enhanced ABC-QL uses substantially less energy, between 1121 and 1524 Joules, than conventional ABC, which uses 2471 to 2485 Joules, between 200 and 1000 iterations. This marked drop in energy consumption shows that improved ABC-QL has a clear energy efficiency advantage, a crucial characteristic for any network, increasing network lifetime and reducing energy costs. The traditional ABC algorithm shows relatively stable energy consumption, indicating that it has reached a certain level of convergence in terms of energy consumption, with a convergence percentage of −0.56% across all iterations. In contrast, ABC-QL shows a significant reduction in energy consumption, with −36.01% convergence, indicating that it keeps converging toward more energy-efficient solutions as iterations proceed.
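The quoted convergence percentages appear consistent with measuring the relative change in energy consumption from the first to the last iteration; the formula below is our reading of the reported figures, since the paper does not state it explicitly.

```python
def convergence_pct(e_initial, e_final):
    """Relative change in energy consumption, expressed as a percentage
    of the final value. Negative values mean consumption decreased over
    the iterations. (Our interpretation of the reported figures; the
    paper does not give the formula.)"""
    return (e_final - e_initial) / e_final * 100.0

# With the reported consumption endpoints, this reading lands near the
# quoted convergence values:
print(round(convergence_pct(2485, 2471), 2))  # close to the reported -0.56%
print(round(convergence_pct(1524, 1121), 2))  # close to the reported -36.01%
```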
Comparing the results across all iterations, modified ABC-QL consistently achieved considerably lower energy usage than conventional ABC, indicating a clear benefit in energy efficiency. This is a crucial consideration in IoUT applications, where energy supplies are scarce. As demonstrated in Table 4, the enhanced ABC-QL also tended to select CHs with marginally higher mean fitness values. Occasionally, conventional ABC chose more of the best CHs, indicating better coverage or more effective CH selection in some iterations; this benefit, however, was not present in all iterations. The consistently lower CH selection ratio displayed by the enhanced ABC-QL suggests that it was more discriminating when selecting CHs, which helps with energy conservation and network optimization by lowering the number of dead nodes. The improved ABC-QL is also preferred for CH selection in IoUT applications because of its clear benefit in energy efficiency, which is essential for prolonging network lifetime, keeping individual nodes active for a long time, and lowering operational expenses.
The modified ABC-QL algorithm outperforms standard ABC in terms of energy efficiency in underwater conditions, and the incorporation of Q-learning is crucial to this success. Q-learning enables ABC-QL to strike the best balance between exploration and exploitation, guaranteeing that it rapidly converges on energy-efficient solutions; this dynamic adaptation and iterative learning process greatly improves its energy-saving capacity. Another notable feature is ABC-QL's ability to pick energy-efficient CHs, which reduces energy usage during the selection process. Notably, this improved energy efficiency is not a one-time event: ABC-QL continues to learn and adapt, ensuring that its higher performance endures in the face of changing environmental conditions. As a result, ABC-QL increases network lifetime, reduces operational expenses, and shows great promise for sustaining IoUT networks in resource-limited underwater environments.
Overall, the improved ABC-QL surpasses conventional ABC in several important respects, including selection cost, fitness values, standard deviation, the number of best-selected CHs, and energy consumption. These results show that improved ABC-QL enhances CH selection effectiveness, especially when network longevity and energy efficiency are priorities.
8.2.2. Evaluation Based on Underwater Node Density
The performance of the enhanced ABC-QL method is assessed in this section based on the density of the underwater nodes. The results show how additional underwater nodes affect the performance of both the enhanced ABC-QL and the traditional ABC algorithm. The simulated IoUT network scenarios vary between 50 and 250 nodes to evaluate the impact of various network sizes on the proposed algorithm.
Figure 8 shows the percentage of exhausted underwater nodes, i.e., the number of dead nodes with respect to network size, for both improved ABC-QL and conventional ABC. Both algorithms see an increase in dead nodes as the network size grows; however, improved ABC-QL consistently shows fewer exhausted nodes than conventional ABC across the various network sizes. This indicates that improved ABC-QL tends to deliver a more optimized CH selection, resulting in a more stable network with fewer dead nodes. With 50 nodes in a small IoUT network, the two algorithms have a similar number of exhausted nodes, but as the network size grows, the modified ABC-QL continues to outperform conventional ABC, with a 33% reduction in dead nodes.
In the proposed algorithm, the use of Q-learning enables a more efficient exploration of different CH selection options: the algorithm can focus on actions that have yielded fewer exhausted nodes and avoid actions that led to higher dead node counts. This ability to adapt and learn from past iterations improves network stability, making the improved ABC algorithm better at choosing CHs that maintain network connectivity and minimize node failures.
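One way to make the learner favor selections that leave fewer exhausted nodes, as described above, is to fold the dead-node count into the reward signal. The reward form and weights below are assumptions for illustration, not the paper's reward function.

```python
def ch_reward(dead_nodes, total_nodes, mean_residual_energy,
              w_dead=1.0, w_energy=1.0):
    """Hypothetical reward for a CH-selection action: penalize the fraction
    of exhausted nodes and reward high mean residual energy in the cluster
    (both terms lie in [0, 1] before weighting)."""
    dead_fraction = dead_nodes / total_nodes
    return w_energy * mean_residual_energy - w_dead * dead_fraction

# A round with no exhausted nodes scores higher than one that kills nodes:
print(ch_reward(0, 80, 0.8))   # 0.8
print(ch_reward(10, 80, 0.8))  # lower, since 10 of 80 nodes died
```

Feeding such a reward into the Q-learning update steers the policy away from CH choices that historically increased the dead-node count, which is the behavior the density results exhibit.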
According to Figure 9, which plots the number of best CH selections against network size for both improved ABC-QL and conventional ABC, the number of best CH selections increases for both algorithms as the network size grows. However, the number of best CH selections of improved ABC-QL is typically significantly lower than that of conventional ABC. This suggests that improved ABC-QL may be more selective when choosing CHs, concentrating on the most stable nodes as the best CHs. This selectivity is probably due to the inclusion of Q-learning, which lets the algorithm learn and modify its CH selection method, emphasizing the nodes with the greatest potential to improve network performance.
The more effective and selective CH selection procedure of improved ABC-QL is a good indicator of the improvement brought by Q-learning. By taking into account previous nodes' experiences and the state of the network, the proposed algorithm can better decide which nodes to choose as CHs. As a result, CHs are distributed more optimally, potentially enhancing network efficiency and stability while avoiding an unwanted increase in the number of selected CHs. The integration of Q-learning explains the ability of improved ABC-QL to be more selective when choosing CHs than conventional ABC.
The Q-learning approach allows the algorithm to learn from previous iterations and make more informed CH selection decisions. It focuses on the nodes that contribute most to network performance and gives the proposed algorithm the capacity to change its CH selection method depending on historical data and network conditions, resulting in a more optimal CH distribution. This selective strategy may improve network stability and efficiency, because it reduces the number of selected CHs while maintaining a strong network.
Figure 10 illustrates the energy consumption results, which reveal some interesting behaviors of improved ABC-QL and conventional ABC across various network sizes. Improved ABC-QL consistently shows lower energy consumption for the smaller network sizes of 50 and 100 nodes, indicating greater energy efficiency in these smaller-scale scenarios. As the network grows to 150 nodes, improved ABC-QL retains its energy-efficient profile, consuming 3500 Joules versus 4000 Joules for conventional ABC. Incorporating Q-learning into improved ABC-QL likely contributes to this efficiency by selecting CHs with the lowest energy consumption.
In the case of 200 nodes, improved ABC-QL consumes 5800 Joules, somewhat less than the 6500 Joules consumed by conventional ABC. Remarkably, with 250 nodes, improved ABC-QL widens its energy efficiency advantage, consuming 4500 Joules compared to 7900 Joules for conventional ABC. These results indicate that improved ABC-QL can adapt to larger network sizes in this scenario. The introduction of Q-learning into improved ABC-QL is most likely responsible for its improved energy efficiency at the smaller network sizes as well.
Q-learning enables the ABC algorithm to learn and change its CH selection method, emphasizing the selection of energy-efficient nodes as CHs. With the aid of Q-learning, the ABC algorithm can adapt to changes in network conditions. It can modify its CH selection approach to maintain performance in response to network size or topology changes. A more effective CH selection process is achieved by integrating Q-learning into the ABC algorithm, which adds adaptive learning and optimization capabilities. The major advancement resides in the algorithm’s capacity to keep as many active nodes as feasible for a long time, contributing to network stability and energy efficiency.
The improved ABC with Q-learning consistently outperforms the conventional ABC algorithm regarding CH selection, network lifetime, and total network energy consumption, which makes it especially valuable in dynamic and evolving network environments. Overall, integrating Q-learning with the ABC algorithm for CH selection optimization in IoUT shows promise in improving energy efficiency, network stability, and adaptability. This approach can be a valuable tool for optimizing underwater communication networks, especially in scenarios with limited energy resources and dynamic conditions.