Article

UNet-INSN: Self-Supervised Algorithm for Impulsive Noise Suppression in Power Line Communication

Enguo Zhu, Yi Ren, Ran Li, Shuiqing Ouyang, Yang Ma, Ximin Yang and Guojin Liu
1 China Electric Power Research Institute, Beijing 100192, China
2 School of Microelectronics and Communication Engineering, Chongqing University, Chongqing 400044, China
* Author to whom correspondence should be addressed.
Appl. Sci. 2025, 15(16), 9101; https://doi.org/10.3390/app15169101
Submission received: 10 July 2025 / Revised: 1 August 2025 / Accepted: 8 August 2025 / Published: 19 August 2025
(This article belongs to the Special Issue Advanced Communication and Networking Technology for Smart Grid)

Abstract

Impulsive noise suppression plays a crucial role in enhancing the reliability of power line communication (PLC). With the rapid advancement of deep learning, studies on deep learning-based impulsive noise suppression have recently garnered extensive attention. Nevertheless, on one hand, training deep learning-based impulsive noise suppression models relies on large amounts of labeled data, which are costly to acquire. On the other hand, existing models struggle to adapt to dynamic variations in the impulsive noise distribution. To address these two issues, this paper proposes a UNet-based self-supervised learning model for impulsive noise suppression (UNet-INSN). First, through the designed global mask mapper, UNet-INSN can use the entire noisy signal for model training, resolving the information loss caused by partial signal masking in traditional mask-driven algorithms. Second, a reproducibility loss function is introduced to effectively prevent the model from degenerating into an identity mapping, thereby enhancing its denoising performance. Simulation results show that the SNRs required for the proposed algorithm to reach a bit error rate of 10^{-6} under ideal and non-ideal channel estimation are 12 dB and 26 dB, respectively, significantly outperforming the comparison methods. Moreover, the algorithm retains excellent robustness and generalization capabilities when the impulsive noise distribution changes dynamically.

1. Introduction

Power line communication (PLC) is a technology that uses existing power cables as the medium for data transmission [1]. Because it directly leverages existing power infrastructure without requiring additional communication lines, it has attracted extensive attention and in-depth research and has been applied in smart homes, new-energy grid-connected monitoring, and the Industrial Internet of Things. Moreover, to further advance the standardization and large-scale adoption of related technologies, a series of PLC standards have been established, including G3-PLC by Maxim Integrated, HomePlug AV by the HomePlug Powerline Alliance, IEEE 1901.1 by the Institute of Electrical and Electronics Engineers (IEEE), and ITU-T G.hn by the International Telecommunication Union (ITU) [2,3]. However, power cables are not dedicated communication lines; in actual operation they generate random, sudden, and high-amplitude impulsive noise, which seriously degrades the communication reliability and efficiency of PLC systems. Therefore, effectively suppressing impulsive noise in PLC systems has become a critical and urgent problem for the further development and ubiquitous application of PLC technology.
Researchers have conducted systematic studies on noise suppression in PLC systems and have proposed a series of methods, which can be roughly divided into four categories: suppression methods based on error handling mechanisms, nonlinear preprocessing-based methods, sparse reconstruction-based methods, and deep learning-based methods. Error handling mechanism-based methods counteract impulsive noise by introducing channel error-correction coding/decoding and interleaving/deinterleaving at the transmitter and receiver, respectively, and include methods based on Turbo codes [4], LDPC codes [5], Polar codes [6], and Bit-Interleaved Coded Modulation (BICM) [7]. However, owing to the suddenness of impulsive noise, the error-correction capability of channel coding is relatively limited. Nonlinear preprocessing methods suppress noise mainly through the blanking, clipping, and deep clipping [8] of impulsive noise-contaminated signals at the receiver; although simple to implement, these methods lack adaptability to changing noise characteristics. Sparse reconstruction-based algorithms, such as compressive sensing [9], sparse Bayesian methods [10], and sparse iterative covariance estimation [11], can achieve effective noise suppression and signal reconstruction, but their requirements for additional null-subcarrier bandwidth and strict noise sparsity limit their practicality. Meanwhile, with the rapid development of deep learning theory in recent years, the design of PLC systems and impulsive noise suppression based on deep neural networks have attracted increasing attention in both industry and academia [12,13,14,15,16].
Specifically, for noise suppression in PLC systems based on deep neural networks, researchers have carried out studies and proposed model methods from different perspectives, such as channel estimation and signal reconstruction [12], noise modeling [13,14], and denoising network design [15]. For example, Reference [12] uses deep learning to optimize channel estimation and integrates compressive sensing to reduce channel measurement redundancy, thereby improving signal reconstruction accuracy and noise suppression. Reference [13] employs Deep Convolutional Generative Adversarial Networks (GANs) to train and obtain a PLC noise model, enabling the generated noise model to match the statistical distribution of real-world noise and thus achieving noise suppression. Reference [14] also proposes a GAN-based noise signal model, which introduces mixed training data containing measurement noise to train the model, so as to obtain high-fidelity noise samples and enhance sample data and system robustness. Essentially, noise suppression is a denoising task, and given that CNNs have demonstrated excellent performance in image denoising [15,17,18], CNN-based PLC noise suppression algorithms have also attracted researchers’ attention. Reference [15] proposes an MFSDF-Net denoising algorithm which effectively alleviates the impulsive noise problem in MIMO-PLC systems through multi-convolution kernel feature extraction and continuous convolution fusion via skip connections.
From the above discussion, it can be seen that although researchers have carried out preliminary studies on deep neural network-based PLC noise suppression and the proposed models have strong noise feature learning capabilities, training existing models relies on large amounts of labeled data, namely, noisy–clean signal pairs. In PLC systems, however, acquiring noisy–clean training samples is costly and time-consuming, and the impulsive noise usually exhibits time-varying characteristics to which existing methods adapt poorly. In view of this, this paper investigates noise suppression methods for PLC systems in dynamic impulsive noise environments. Building on classic self-supervised learning models such as Noise2Void [18] and Zero-Shot Noise2Noise [19], we propose a UNet-based Impulsive Noise Suppression Network (UNet-INSN) that achieves good denoising performance using only a small number of noisy signals, without requiring noisy–clean signal pairs for training. Specifically, the main contributions of this paper include the following two aspects:
(1)
A self-supervised learning-based algorithm, UNet-INSN (UNet Impulsive Noise Suppression Network), is proposed. Through the designed global mask generation and global mask mapper, the algorithm not only effectively utilizes the internal structural information of the signal but also avoids the dependence on external clean signals, enabling the model to be trained using only noisy signals. Additionally, a reproducibility loss function is introduced to enable the model to directly recover the masked information from the original noisy signal, thus avoiding the performance degradation caused by the model degenerating into an identity mapping.
(2)
A PLC simulation system was built to verify the performance of the proposed model. The results show that the signal-to-noise ratios (SNRs) required for the proposed algorithm to achieve a bit error rate (BER) of 10^{-6} under ideal and non-ideal conditions are 12 dB and 26 dB, respectively, which are significantly lower than those of traditional methods and the comparison self-supervised learning methods. Moreover, the results also demonstrate that the proposed method exhibits better robustness and generalization capabilities in dynamic impulsive noise environments.
The remainder of this paper is organized as follows: Section 2 introduces the PLC system, signal, and noise models. Section 3 first presents the framework of the proposed algorithm, UNet-INSN, then elaborates on the design of its key functional modules, including the denoising network and the global mask generator and mapper, defines the model training loss function, and describes the model training method. Section 4 presents and analyzes the model training results and conducts simulation verification and analysis of the denoising performance of the proposed model in the PLC system. Finally, Section 5 concludes the paper.

2. System and Signal Model

2.1. System Model

This paper considers a power line carrier communication (PLC) system as shown in Figure 1. At the transmitter, the original bit stream undergoes Turbo coding and QPSK modulation to obtain a sequence of modulated symbols. The symbols are then subjected to OFDM multicarrier modulation to generate time-domain signals for carrier communication. After adding a cyclic prefix, the carrier signals are transmitted over the power line channel via a coupler. During signal transmission, the signals are affected by impulsive noise and other interferences. To enhance the communication reliability of the PLC system in complex impulsive noise environments, this paper designs a novel receiver impulsive noise suppression method, namely, UNet-INSN. Therefore, the receiver workflow is as follows: First, the receiver performs synchronization adjustment on the received signals to correct time–frequency offsets and remove the cyclic prefix. The UNet-INSN module then suppresses noise in the received signals. Subsequently, OFDM demodulation, symbol inverse mapping (QPSK demodulation), and Turbo decoding are performed to obtain the decoded bit stream. This section first establishes the transmission and reception signal model of the system, followed by an introduction to the noise model. The design of the proposed noise suppression method, UNet-INSN, will be elaborated in the next section.
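To make the transceiver chain above concrete, the following NumPy sketch walks one OFDM symbol through QPSK mapping, IFFT-based modulation, cyclic-prefix insertion, a toy multipath channel with background noise, and the corresponding receiver steps with ideal channel knowledge. Turbo coding/decoding, synchronization, and the coupler are omitted, and the FFT size, CP length, and channel taps are illustrative assumptions rather than the paper's settings; the UNet-INSN denoiser of Section 3 would operate on the time-domain samples after CP removal.

```python
import numpy as np

rng = np.random.default_rng(0)
N_FFT, N_CP = 1024, 64                      # illustrative sizes, not the paper's exact settings

# --- Transmitter: bits -> QPSK -> OFDM symbol with cyclic prefix ---
bits = rng.integers(0, 2, size=2 * N_FFT)
qpsk = ((1 - 2 * bits[0::2]) + 1j * (1 - 2 * bits[1::2])) / np.sqrt(2)   # QPSK symbol mapping
tx_time = np.fft.ifft(qpsk) * np.sqrt(N_FFT)                             # OFDM multicarrier modulation
tx_cp = np.concatenate([tx_time[-N_CP:], tx_time])                       # add cyclic prefix

# --- Channel: short multipath FIR plus background (AWGN) noise ---
h = np.array([1.0, 0.4 + 0.2j, 0.1])                                     # toy multipath channel
rx_cp = np.convolve(tx_cp, h)[: len(tx_cp)] + 0.01 * (
    rng.standard_normal(len(tx_cp)) + 1j * rng.standard_normal(len(tx_cp)))

# --- Receiver: remove CP, FFT, one-tap equalization, QPSK demapping ---
# (The UNet-INSN module would act on the time-domain samples rx_time here.)
rx_time = rx_cp[N_CP:]
rx_freq = np.fft.fft(rx_time) / np.sqrt(N_FFT)
H = np.fft.fft(h, N_FFT)                                                  # ideal channel knowledge
eq = rx_freq / H
bits_hat = np.empty(2 * N_FFT, dtype=int)
bits_hat[0::2] = (eq.real < 0).astype(int)
bits_hat[1::2] = (eq.imag < 0).astype(int)
print("bit errors:", np.count_nonzero(bits_hat != bits))
```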

2.2. Signal and Noise Models

From the above, we define the PLC multipath channel matrix shown in Figure 1 as H; then, the received signal r at the PLC receiver is
r = Hs + g + i ,
where s represents the transmitted signal vector from the transmitter, g represents the background noise vector in the power line channel, and i represents the impulsive noise vector in the power line channel.
For the background noise, only a relatively narrow bandpass frequency band is considered here. The power spectral density of the background noise changes smoothly, which is similar to the characteristics of additive white Gaussian noise (AWGN). Therefore, AWGN is used to simulate the background noise in this paper. For impulsive noise in carrier communication, researchers have proposed impulsive noise models such as the Bernoulli–Gaussian (BG) model [20], the Middleton Class A model, and the Alpha-stable distribution model. Herein, the BG model is adopted to model impulsive noise due to its broader applicability.
Since the total noise at the receiver can be expressed as the superposition of background noise and impulsive noise, and the background noise is modeled as AWGN, the total noise is a weighted mixture of two stationary Gaussian random processes. The probability density function (PDF) of the total noise model is
f(x) = \frac{1-p}{\sqrt{2\pi\sigma_g^2}} \, e^{-\frac{x^2}{2\sigma_g^2}} + \frac{p}{\sqrt{2\pi\left(\sigma_g^2+\sigma_i^2\right)}} \, e^{-\frac{x^2}{2\left(\sigma_g^2+\sigma_i^2\right)}} ,
where p denotes the occurrence probability of impulsive noise, σ_g² is the power of the background noise, and σ_i² is the power of the impulsive noise.
Thus, the signal-to-noise ratio (SNR), defined as the ratio of signal power to background noise power, is given by
\mathrm{SNR} = 10 \log_{10} \frac{\sigma_s^2}{\sigma_g^2} .
The signal-to-impulsive noise ratio (SINR) is defined as
\mathrm{SINR} = 10 \log_{10} \frac{\sigma_s^2}{\sigma_i^2} .
Based on the aforementioned background noise and impulsive noise models, when SNR = 20 dB, SINR = −15 dB, and p = 0.01, the resulting time-domain waveform of the impulsive noise is shown in Figure 2. As evident from the figure, impulsive noise exhibits sparsity and randomness in the time domain, characterized by burst occurrences and very large pulse amplitudes. Consequently, it can severely compromise signal integrity within short durations, degrading system performance; this leads to a sharp increase in the bit error rate (BER) and reduced communication efficiency. Therefore, suppressing impulsive noise is of critical importance for enhancing PLC system performance. To address this challenge, the following section introduces the proposed UNet-based self-supervised learning method for impulsive noise suppression in PLC systems.
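The Bernoulli–Gaussian mixture above is straightforward to sample. The sketch below draws total noise (background plus impulsive bursts) with the powers σ_g² and σ_i² set from target SNR and SINR values; assuming unit signal power, the example call uses the parameters quoted for Figure 2, and these assumptions are for illustration only.

```python
import numpy as np

def bg_noise(n: int, p: float, snr_db: float, sinr_db: float,
             sig_power: float = 1.0, rng=None) -> np.ndarray:
    """Sample n points of background + Bernoulli-Gaussian impulsive noise.

    sigma_g^2 and sigma_i^2 are derived from SNR = 10*log10(sig_power / sigma_g^2)
    and SINR = 10*log10(sig_power / sigma_i^2), matching the definitions above.
    """
    rng = rng or np.random.default_rng()
    sigma_g2 = sig_power / 10 ** (snr_db / 10)      # background (AWGN) power
    sigma_i2 = sig_power / 10 ** (sinr_db / 10)     # impulsive noise power
    g = rng.normal(0.0, np.sqrt(sigma_g2), n)       # always-on background noise
    b = rng.random(n) < p                           # Bernoulli occurrence indicator
    i = b * rng.normal(0.0, np.sqrt(sigma_i2), n)   # impulsive bursts added on top
    return g + i

# Example matching the settings used for Figure 2 (unit signal power assumed):
noise = bg_noise(n=4096, p=0.01, snr_db=20.0, sinr_db=-15.0)
```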

3. Self-Supervised Learning Algorithm Based on UNet-INSN

As mentioned earlier, most existing deep learning-based approaches to PLC noise suppression rely on noisy–clean signal pairs for model training. However, in practical applications, the cost of collecting such sample data is high, and the performance of existing methods deteriorates severely when the statistical characteristics of the noise change. Inspired by the Noise2Void method proposed in [18], which trains directly on noisy images by introducing random masks that occlude partial pixels and constructs training inputs by replacing the target pixels with values from surrounding pixels, thereby reducing the dependence on clean signals, we propose a self-supervised learning-based approach for impulsive noise suppression, namely, the UNet-INSN model. Unlike Noise2Void, which employs random masking to obscure part of the signal and restricts the objective function's focus to the masked regions, UNet-INSN adopts a joint mechanism of a global mask generator and a global mask mapper. The mask generator produces a masking pattern applied to the entire signal, enabling comprehensive coverage of the whole signal space, and the mask mapper reassembles the denoised signal according to the masking pattern after processing by the denoising network. This mechanism ensures that the full signal participates in training while enhancing information flow within the masked regions, significantly improving denoising performance. In this section, we first present the model framework, then elaborate on the design of each functional module and the definition of the loss function, and finally present the model training method.

3.1. Algorithm Framework

The framework of the UNet-INSN model is illustrated in Figure 3, where Figure 3a corresponds to the model training phase, primarily consisting of four components: a global mask generator, a denoising neural network, a global mask mapper, and a Loss Function Calculation Unit. Figure 3b corresponds to the inference phase, which includes only the denoising neural network. During the training phase, for the noisy signal y received by the PLC receiver, the global mask generator first creates mask containers to obtain a masked noisy signal group Ω_y. This group is then fed into the denoising neural network to generate an output signal f_θ(Ω_y). Next, the global mask mapper samples the denoised output f_θ(Ω_y) to produce a reconstructed output C(f_θ(Ω_y)). Additionally, the denoising neural network f_θ(·) processes the original received signal y to generate a denoised signal f̂_θ(y). Finally, both C(f_θ(Ω_y)) and f̂_θ(y) are input into the Loss Function Calculation Unit to compute the loss, which is used to optimize the network parameters via gradient-based methods. After training is complete, the inference phase begins, where the trained denoising network processes the noisy input signal to generate the denoised output f_θ(y). For deployment, the trained denoising network f_θ(·) is embedded into the PLC receiver, as illustrated in Figure 1, enabling the processing of received noisy signals and thereby achieving noise suppression. The following sections detail the design of each functional module, the definition of the loss function, and the model training methodology.

3.2. Denoising Neural Network

The denoising neural network is the core of UNet-INSN, as it learns the characteristics of impulsive noise and performs denoising accordingly. Specifically, for PLC systems and denoising network design, on one hand, the time-domain features of impulsive noise can be learned by the denoising neural network; on the other hand, the input and output signals of the denoising network should have the same structure. Given that UNet exhibits excellent feature extraction capabilities for multi-dimensional data and its input–output signals share the same structure, this paper designs a denoising neural network model that integrates 1D convolution with the UNet architecture. Its overall structure is shown in Figure 4. This network consists of two symmetrical paths: The left side is the downsampling path, primarily for feature extraction, composed of four downsampling modules. Each module contains two 1D convolutional layers followed by a 1D max-pooling layer; the fifth layer is a transition layer consisting of two 1D convolutional layers. The right side is the upsampling path, responsible for reconstructing and restoring detailed information of the data, corresponding to the downsampling path. Each layer includes a deconvolutional layer and two 1D convolutional layers. Except for the max-pooling layer, all other convolutional operations adopt boundary padding strategies to ensure that the input and output lengths remain consistent. Finally, a convolutional layer with a 1 × 1 convolution kernel and 2 output channels realizes the point-wise mapping of the input signal by the network, with the output in the form of a 1D sequence. Considering that the experimental data used are 1D time-series signals, the convolutional layers, pooling layers, and sampling modules in the network have been adapted and adjusted according to the characteristics of 1D data. The parameter settings for each layer of UNet are detailed in Table 1.
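The following PyTorch sketch is a compact 1D UNet of the kind described above: four double-convolution encoder stages with 1 × 2 max-pooling, a transition stage, four decoder stages with transposed-convolution upsampling and skip connections, and a final 1 × 1 point-wise output convolution. Channel widths follow Table 1 (2 → 64 → 128 → 256 → 512 → 1024 and back), but details such as the exact upsampling operator and the absence of normalization layers are simplifying assumptions rather than the authors' released implementation.

```python
import torch
import torch.nn as nn

def double_conv(c_in, c_out):
    """Two 3-tap 1D convolutions with padding so the signal length is preserved."""
    return nn.Sequential(
        nn.Conv1d(c_in, c_out, kernel_size=3, padding=1), nn.ReLU(inplace=True),
        nn.Conv1d(c_out, c_out, kernel_size=3, padding=1), nn.ReLU(inplace=True),
    )

class UNet1D(nn.Module):
    """1D UNet denoiser: 4 downsampling stages, a transition stage, 4 upsampling stages.
    The input length must be divisible by 2**4 so pooling and upsampling align."""

    def __init__(self, in_ch=2, out_ch=2, widths=(64, 128, 256, 512, 1024)):
        super().__init__()
        self.downs = nn.ModuleList()
        c = in_ch
        for w in widths[:-1]:                        # encoder: double conv then 1x2 max-pool
            self.downs.append(double_conv(c, w))
            c = w
        self.pool = nn.MaxPool1d(kernel_size=2)
        self.bottom = double_conv(widths[-2], widths[-1])        # transition layer
        self.ups, self.up_convs = nn.ModuleList(), nn.ModuleList()
        for w_hi, w_lo in zip(widths[::-1][:-1], widths[::-1][1:]):
            self.ups.append(nn.ConvTranspose1d(w_hi, w_lo, kernel_size=2, stride=2))
            self.up_convs.append(double_conv(2 * w_lo, w_lo))    # concat skip connection
        self.head = nn.Conv1d(widths[0], out_ch, kernel_size=1)  # point-wise output mapping

    def forward(self, x):
        skips = []
        for down in self.downs:
            x = down(x)
            skips.append(x)
            x = self.pool(x)
        x = self.bottom(x)
        for up, conv, skip in zip(self.ups, self.up_convs, reversed(skips)):
            x = up(x)
            x = conv(torch.cat([x, skip], dim=1))
        return self.head(x)

# Example: a batch of 8 complex signals stored as 2 real channels of length 1024.
y = torch.randn(8, 2, 1024)
denoised = UNet1D()(y)          # same shape as the input: (8, 2, 1024)
```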

3.3. Global Mask Generator and Mapper

The global mask generator and global mask mapper form the foundation of the proposed self-supervised learning model. For signal denoising applications, a typical self-supervised learning method is Noise2Void [18], which masks some pixels of an image by introducing random masks and replaces the target pixels with surrounding pixel values to achieve image denoising. However, the random masking of Noise2Void covers only part of the signal during training, causing the objective function to focus solely on the masked regions, which reduces accuracy and slows convergence. To address this issue, this paper proposes a global mask generator and mapper that generate global masks over the entire signal group and perform global denoising on the blind spots. This approach makes fuller use of the global information of the signal, promotes information exchange across all masked regions, and improves denoising accuracy. Additionally, the global mask mapper enables faster training than the random masking scheme.
Specifically, for a given set of noisy signal sequences y with width W and height H, the workflow of the global mask mapper C(f_θ(Ω_y)) is shown in Figure 5: First, the original noisy signal y is divided into (H/h) × (W/w) units, each of size h × w. Then, for each pair (i, j) with i ∈ {0, …, h−1} and j ∈ {0, …, w−1}, the sample in the i-th row and j-th column of every unit is set to zero, yielding the masked signal Ω_y^{ij}; this produces a total of h × w masked signal sequences. These masked signals are stacked to form the mask container Ω_y. Next, the mask container Ω_y is fed into the denoising network f_θ(·) to obtain the denoised output f_θ(Ω_y). Finally, the mask mapper C(·) samples the output f_θ(Ω_y) at the positions that were masked in the corresponding input and places these samples back at the same positions, ultimately yielding the globally denoised signal C(f_θ(Ω_y)).
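A minimal sketch of the global mask generator and mapper is given below, assuming the noisy signal group is arranged as a tensor whose last two dimensions form the H × W grid described above and that H and W are divisible by h and w. Each of the h × w copies blanks one position per h × w unit; the mapper then keeps, from every denoised copy, exactly the samples that were blanked in its input.

```python
import torch

def global_mask(y: torch.Tensor, h: int = 2, w: int = 2) -> torch.Tensor:
    """Mask container Omega_y: h*w copies of y, where copy (i, j) has the (i, j)-th
    sample of every h-by-w unit set to zero.  Operates on the last two dimensions."""
    copies = []
    for i in range(h):
        for j in range(w):
            masked = y.clone()
            masked[..., i::h, j::w] = 0.0         # blank one position per unit
            copies.append(masked)
    return torch.stack(copies)                     # shape (h*w, *y.shape)

def global_map(outputs: torch.Tensor, h: int = 2, w: int = 2) -> torch.Tensor:
    """Mapper C(.): recombine the denoised copies so that every output sample is taken
    from the copy in which that sample was masked (its blind spot)."""
    recon = torch.zeros_like(outputs[0])
    k = 0
    for i in range(h):
        for j in range(w):
            keep = torch.zeros_like(outputs[0])
            keep[..., i::h, j::w] = 1.0            # positions blanked in copy k
            recon = recon + outputs[k] * keep
            k += 1
    return recon

# Example: a group of signals arranged as 2 channels over a 4 x 1024 grid.
y = torch.randn(2, 4, 1024)
omega_y = global_mask(y)                           # (4, 2, 4, 1024)
# recon = global_map(f_theta(omega_y))             # same shape as y after the denoiser
```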

3.4. Loss Function

In deep networks, the reasonable definition of the loss function is crucial to guiding model training and achieving the desired effect. For the model established in this paper, the loss function for model training is defined as
\mathcal{L} = \mathcal{L}_{\mathrm{mse}} + \eta \, \mathcal{L}_{\mathrm{reg}} .
Here, L_mse and L_reg denote the mean squared error (MSE) term and the regularization loss term, respectively, and η is a tunable hyperparameter. These two terms are defined as
\mathcal{L}_{\mathrm{mse}} = \left\| C\left( f_\theta(\Omega_y) \right) - y + \lambda \left( \hat{f}_\theta(y) - y \right) \right\|_2^2 ,
\mathcal{L}_{\mathrm{reg}} = \left\| C\left( f_\theta(\Omega_y) \right) - y \right\|_2^2 .
In these expressions, Ω_y denotes the masked signal container (mask codeword), f_θ(·) the denoising network, C(·) the global mapping and recombination operation, and f̂_θ(y) the output branch of the denoising network with gradient updates disabled. Specifically, the MSE loss term consists of an invisible component C(f_θ(Ω_y)) − y and a visible component λ(f̂_θ(y) − y). The invisible component prevents the neural network from merely learning the identity mapping, while the visible component compensates for the information loss incurred by the invisible component. Notably, gradient updates are disabled when computing the visible component. Additionally, to prevent error accumulation in the invisible component from exacerbating training instability, the regularization term L_reg is introduced for correction.
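In PyTorch, this loss can be computed as in the sketch below; whether the squared L2 norm is summed or averaged over the batch is not specified in the text, so the plain sum is an assumption, and the default λ = 2, η = 1 follow the values selected in Section 4.

```python
import torch

def unet_insn_loss(recon: torch.Tensor, y: torch.Tensor, denoised_no_grad: torch.Tensor,
                   lam: float = 2.0, eta: float = 1.0) -> torch.Tensor:
    """Loss L = L_mse + eta * L_reg of Section 3.4.

    recon            : C(f_theta(Omega_y)), the re-mapped blind-spot output
    y                : the original noisy signal
    denoised_no_grad : f_theta(y) evaluated under torch.no_grad() (visible branch)
    """
    l_mse = torch.sum((recon - y + lam * (denoised_no_grad - y)) ** 2)  # invisible + visible parts
    l_reg = torch.sum((recon - y) ** 2)                                 # stabilizing correction term
    return l_mse + eta * l_reg
```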

3.5. Model Training

Based on the established UNet-INSN self-supervised learning model, its training process is shown in Algorithm 1 and Figure 6. For the received noisy signal sequence Y = {y_i}_{i=1}^{n}, the input signal y is first sampled and a global mask generator Ω(·) is created according to the scale of the input signal. Subsequently, the input signal y passes through the global mask generator to construct a masked signal container Ω_y. Next, the masked signal container Ω_y is fed into the denoising network f_θ(·) to obtain the network output f_θ(Ω_y). After that, the global mask mapper C(·) samples and combines the masked regions to generate the masked denoised signal C(f_θ(Ω_y)). Simultaneously, the original noisy signal y is processed directly by the denoising network to produce another output f̂_θ(y). Finally, the network parameters θ are updated by minimizing the loss function L until convergence. This algorithm effectively leverages the internal structural information of the signal through a self-supervised masking–inpainting mechanism, eliminating the dependence on external clean data.
Algorithm 1 Self-supervised learning based on UNet-INSN
Input: Noisy signal sequence Y = {y_i}_{i=1}^{n};
            Denoising neural network f_θ(·);
            Hyperparameters λ, η;
1:   while the network has not converged do:
2:      Sample the noisy signal sequence to obtain y;
3:      Generate a global mask generator Ω(·) based on the data scale of the sampled signal;
4:      Feed y into the global mask generator to obtain the masked signal container Ω_y;
5:      Input Ω_y into the denoising neural network f_θ(·) to obtain the output f_θ(Ω_y);
6:      The global mask mapper C(·) samples and combines the masked regions of f_θ(Ω_y) to generate the masked denoised signal C(f_θ(Ω_y));
7:      Compute the denoised signal f̂_θ(y) of the original noisy signal y without gradient updates;
8:      Compute the MSE term L_mse = ||C(f_θ(Ω_y)) − y + λ(f̂_θ(y) − y)||_2^2;
9:      Compute the regularization loss term L_reg = ||C(f_θ(Ω_y)) − y||_2^2;
10:     Update the network parameters θ by minimizing the loss function L = L_mse + η L_reg;
11:  end while
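Reusing the UNet1D, global_mask/global_map, and unet_insn_loss sketches from earlier in this section, Algorithm 1 can be exercised with the training loop below. Iterating over a plain list of noisy batch tensors, masking along the time axis only (h = 1), and omitting the learning-rate schedule and validation loop are simplifications for illustration, not the authors' exact procedure.

```python
import torch

def train_unet_insn(model, noisy_signals, epochs: int = 40, lr: float = 2e-4,
                    lam: float = 2.0, eta: float = 1.0, w: int = 2):
    """Self-supervised training over noisy signals only (no clean references).
    Each element of noisy_signals is a batch tensor of shape (B, C, L)."""
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    for _ in range(epochs):
        for y in noisy_signals:
            omega_y = global_mask(y, h=1, w=w)                  # masked container, (w, B, C, L)
            out = model(omega_y.flatten(0, 1))                  # f_theta on every masked copy
            recon = global_map(out.view_as(omega_y), h=1, w=w)  # C(f_theta(Omega_y))
            with torch.no_grad():
                denoised = model(y)                             # visible branch, gradients disabled
            loss = unet_insn_loss(recon, y, denoised, lam, eta)
            opt.zero_grad()
            loss.backward()
            opt.step()
    return model

# Example call on purely illustrative data:
# model = train_unet_insn(UNet1D(), [torch.randn(8, 2, 1024) for _ in range(10)])
```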

4. Simulation Results and Performance Analysis

In this section, we conduct a simulation analysis of the performance of the proposed model. First, we introduce the model training dataset, training environment, and hyperparameter settings, and analyze the model training results. Subsequently, based on the established MIMO-PLC simulation system, we perform simulation verification and analysis of the system performance of the model.

4.1. Analysis of Model Training Results

(1)
Training dataset: Since self-supervised learning does not require clean signals, only a noisy dataset needs to be constructed. Considering the absence of a publicly available MIMO-PLC signal dataset, this paper used MATLAB (R2024b) to build a PLC system and impulsive noise signal model and generated the corresponding noisy signal dataset accordingly. The relevant parameter settings of the system and simulation model are shown in Table 2. It can be seen that the PLC channel model proposed by Tonello et al. [21] and the BG impulsive noise model were used in dataset generation. In addition, the signal-to-impulsive noise ratio (SINR) of the PLC signal was −15 dB, while the signal-to-background noise ratio (SNR) of the PLC signal ranged from 0 dB to 40 dB in steps of 2 dB. A total of 21 groups of data were generated, with 200 symbols in each group, where 80% were used for training and 20% for validation.
(2)
Model training environment and hyperparameter settings: For model training, this paper used a server equipped with an NVIDIA RTX 4080 GPU as the training platform. The model was implemented using the PyTorch 1.13 deep learning framework, and development was conducted in Python 3.6 under the PyCharm 2024 integrated development environment. The hyperparameters of the model were set as follows: the optimizer was Adam, the activation function was ReLU, the training batch size was 50, and the initial learning rate was 2 × 10^{-4}, which was gradually decreased to 1 × 10^{-4} in steps of 5 × 10^{-5} (a minimal sketch of this optimizer set-up, under stated assumptions, follows this list).
(3)
Model training results: Based on the generated training dataset, the constructed training environment, and the corresponding hyperparameter settings, the model was trained. The loss curves for the training and validation sets are shown in Figure 7. It can be observed that as the number of training epochs increases, the training loss decreases rapidly, indicating that the model quickly captures the key features of impulsive noise. As training continues (Epoch > 5), the decline in training loss gradually slows and stabilizes, and the model reaches a convergent state. The validation loss follows a trend generally consistent with the training loss, but after the 37th epoch the validation loss exceeds the training loss and the model begins to overfit. Therefore, training was stopped after 37 epochs, and model performance verification was carried out accordingly.
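Returning to the optimizer settings in item (2), the sketch below shows one way to realize the reported Adam configuration and the step decay from 2 × 10^{-4} to 1 × 10^{-4} in decrements of 5 × 10^{-5}. The epochs at which the rate is lowered are not stated in the paper, so the milestone values here are assumptions.

```python
import torch

# Stand-in parameter so the snippet is self-contained; in practice this would be
# the parameters of the UNet denoiser sketched in Section 3.2.
params = [torch.nn.Parameter(torch.zeros(1))]
opt = torch.optim.Adam(params, lr=2e-4)

def adjust_lr(optimizer, epoch, milestones=(10, 20)):
    """Step the learning rate down by 5e-5 at each (assumed) milestone epoch,
    from the initial 2e-4 to the final 1e-4."""
    lr = max(2e-4 - 5e-5 * sum(epoch >= m for m in milestones), 1e-4)
    for group in optimizer.param_groups:
        group["lr"] = lr

for epoch in range(37):           # training was stopped after 37 epochs (Figure 7)
    adjust_lr(opt, epoch)
    # ... run one training epoch as in Section 3.5 ...
```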

4.2. Model System Performance Analysis

This section conducts a simulation analysis of the system performance of the trained model. Specifically, the trained model parameters were first saved and imported into the MATLAB environment for simulation analysis, where the model was applied to the previously established MIMO-PLC system to evaluate its impact on the system’s bit error rate (BER) performance. To comprehensively verify the performance of the proposed model method, this paper analyzes the performance of the proposed algorithm in ideal and non-ideal channel estimation scenarios for different impulsive noise occurrence probabilities and different signal-to-impulsive noise ratios (SINR). Additionally, two self-supervised benchmark methods, i.e., Noise2Void (labeled N2V [18] in the figures) and Zero-Shot Noise2Noise (labeled ZSN2N in the figures) [19], and one traditional denoising benchmark method, that is, DeepClipping, are introduced for performance comparison. Herein, Noise2Void achieves image denoising by introducing random masks to occlude partial pixels of the image and randomly replacing the target pixels with surrounding pixel values. Zero-Shot Noise2Noise decomposes an image into a pair of downsampled images and trains a network model to map the source downsampled image to the target downsampled image for denoising. DeepClipping achieves signal denoising by using a piecewise linear function to constrain the amplitude of the original signal sequence. Furthermore, to more thoroughly validate the model’s performance and ensure the reliability of the results, we conducted 10,000,000 simulations under varying parameter configurations and averaged the outcomes. Thus, each data point in the following figures represents the mean of 10,000,000 simulation runs. The specific simulation results and analysis are as follows.
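For reference, the DeepClipping baseline applies a memoryless piecewise-linear amplitude constraint to the received samples. The sketch below uses one common two-threshold form (pass-through below T1, linear ramp down between T1 and T2, blanking above T2) with thresholds tied to the background-noise standard deviation; the exact curve and threshold rule of [8] may differ, so this is an illustrative assumption only.

```python
import numpy as np

def deep_clipping(r: np.ndarray, t1: float, t2: float) -> np.ndarray:
    """Piecewise-linear amplitude constraint: samples below t1 pass unchanged, samples
    between t1 and t2 are ramped down linearly, and samples above t2 are blanked.
    The phase of complex samples is preserved."""
    mag = np.abs(r)
    unit = np.where(mag > 0, r / np.maximum(mag, 1e-12), 0.0)   # unit-magnitude phase factor
    ramp = t1 * (t2 - mag) / (t2 - t1)                          # descending segment
    out_mag = np.where(mag <= t1, mag, np.where(mag <= t2, ramp, 0.0))
    return unit * out_mag

# Example: thresholds tied to the background-noise standard deviation (3-sigma rule),
# echoing the fixed-threshold behaviour discussed in the results below.
sigma_g = 0.1
cleaned = deep_clipping(np.array([0.05, 0.45, 1.5]) + 0j, t1=3 * sigma_g, t2=6 * sigma_g)
```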
First, we analyze the influence of the hyperparameters λ and η in the loss function on system performance. Specifically, for both ideal and non-ideal channel estimation scenarios, simulation experiments were conducted with η = 1 and λ 1 ,   2 ,   5 , as well as with λ = 2 and η 0 ,   1 ,   2 . The results are shown in Figure 8 and Figure 9, respectively. It can be observed that different hyperparameter settings affect system performance in both ideal and non-ideal channel estimation scenarios, with relatively more significant impacts under non-ideal conditions. Additionally, based on the selected three groups of parameter settings, the optimal hyperparameter configuration is determined to be λ = 2 and η = 1 . Therefore, subsequent simulations adopted these settings for performance evaluation.
Secondly, under ideal channel estimation and different impulsive noise occurrence probabilities, the influence of the signal-to-background noise ratio (SNR) on model performance is analyzed, and the results are shown in Figure 10. It can be seen that, for every impulsive noise occurrence probability, the BER performance of all methods improves significantly as the SNR increases. Nevertheless, UNet-INSN achieves better performance, requiring a lower SNR to reach the same bit error rate. Specifically, to achieve a BER of 10^{-6}, UNet-INSN requires an SNR that is 3 dB lower than that of Noise2Void and 4 dB lower than that of Zero-Shot Noise2Noise. In addition, when the impulsive noise occurrence probability increases from 0.01 to 0.05, the performance of the three self-supervised learning algorithms is relatively little affected, with the SNR required to achieve the target BER increasing by about 1 dB. However, for DeepClipping, which uses a fixed threshold for impulsive noise removal, a change in the impulsive noise occurrence probability greatly increases the probability of missed detection, leading to significant performance degradation.
Next, under ideal channel estimation and different signal-to-impulsive noise ratios (SINRs), the influence of the SNR on model performance is also analyzed. Here, the impulsive noise occurrence probability p is set to 0.01, and the simulation results are shown in Figure 11. As expected, the bit error rate (BER) performance of all models improves with increasing SNR. Nevertheless, UNet-INSN still achieves the best BER performance, demonstrating approximately 3 dB and 4 dB gains over Noise2Void and Zero-Shot Noise2Noise, respectively, and outperforming DeepClipping by about 7 dB. Additionally, changes in the SINR have relatively little impact on the performance of the different methods, with BER variations of less than 1 dB. Taken together with Figure 10, the three self-supervised learning algorithms exhibit stable performance under changes in impulsive noise probability and SINR, in marked contrast to DeepClipping. The pronounced performance variation of DeepClipping is attributed to its fixed threshold, set according to the 3σ rule of the Gaussian distribution: when the impulsive noise probability changes, missed detection occurs to a certain extent, whereas changes in SINR do not affect the impulsive noise probability and therefore cause minimal performance difference.
Finally, considering that PLC systems in practical applications often face non-ideal channel estimation, this paper further validates the performance of the proposed model method under non-ideal channel estimation conditions. The relevant parameter settings are consistent with those in Figure 10 and Figure 11, and the simulation results are shown in Figure 12 and Figure 13. As expected, the performance of all model methods degrades under non-ideal channel estimation. Specifically, the performance of the UNet-INSN algorithm drops by 12 dB compared with that under ideal channel estimation. Nevertheless, it is evident that the proposed algorithm, UNet-INSN, still significantly outperforms the comparison algorithms. Similarly, under different scenario conditions, the DeepClipping method exhibits relatively poor adaptability to time-varying conditions, primarily due to its fixed threshold design. Summarizing all simulation results, it can be seen that the proposed algorithm, UNet-INSN, demonstrates good performance under different conditions, indicating its robustness and excellent generalization ability.

5. Conclusions

This paper studies impulsive noise suppression in PLC systems and proposes an impulsive noise suppression method based on self-supervised learning: UNet-INSN. First, by using the designed global mask mapper, UNet-INSN can use the complete noisy signal for model training, addressing the information loss caused by local signal masking in traditional mask-driven algorithms. Second, a reproducibility loss function is proposed to effectively prevent the model from degenerating into an identity mapping, which would otherwise degrade its noise suppression performance. Simulation results show that the SNRs required for the proposed algorithm to achieve a bit error rate of 10^{-6} under ideal and non-ideal conditions are 12 dB and 26 dB, respectively, significantly lower than those of the comparison methods. In addition, the results show that the proposed method has better robustness and generalization ability in dynamic impulsive noise environments. Nevertheless, from a practical application perspective, the proposed model still has some limitations. First, UNet-INSN adopts a fixed masking strategy in self-supervised learning, whereas impulsive noise in the time domain may exhibit dynamic changes in time scale. Future research should therefore explore dynamic parameter update methods for the denoising network based on time-varying noise parameter estimation to further enhance impulsive noise suppression performance. Second, the model still requires offline training followed by online deployment; if the data distribution of the offline training samples deviates significantly from the real-world distribution, model performance will degrade. Consequently, further exploration of lightweight, online-trainable models remains necessary.

Author Contributions

Conceptualization, E.Z., Y.R. and R.L.; methodology, E.Z., Y.R. and R.L.; software, E.Z., Y.R. and R.L.; validation, R.L. and S.O.; formal analysis, S.O. and Y.M.; investigation, Y.M. and X.Y.; resources, E.Z., Y.R. and R.L.; data curation, Y.R. and R.L.; writing—original draft preparation, E.Z. and Y.R.; writing—review and editing, R.L., S.O., G.L. and Y.M.; visualization, G.L. and X.Y.; supervision, E.Z. All authors have read and agreed to the published version of the manuscript.

Funding

This research study was funded by the 2024 State Grid Corporation of China Science and Technology Program, grant number 5700-202455278A-1-1-ZN.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The raw data supporting the conclusions of this article will be made available by the authors upon request.

Conflicts of Interest

Authors Enguo Zhu, Yi Ren and Ran Li were employed by the China Electric Power Research Institute. The remaining authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest. The authors declare that this study received funding from State Grid Corporation of China. The funder was not involved in the study design, collection, analysis, interpretation of data, the writing of this article or the decision to submit it for publication.

References

  1. Aderibole, A.O.; Saathoff, E.K.; Kircher, K.J.; Leeb, S.B.; Norford, L.K. Power line communication for low-bandwidth control and sensing. IEEE Trans. Power Deliv. 2021, 37, 2172–2181. [Google Scholar] [CrossRef]
  2. Cano, C.; Pittolo, A.; Malone, D.; Lampe, L.; Tonello, A.M.; Dabak, A.G. State of the Art in Power Line Communications: From the Applications to the Medium. IEEE J. Sel. Areas Commun. 2016, 34, 1935–1952. [Google Scholar] [CrossRef]
  3. Ndjiongue, A.; Ferreira, H. Power line communications (PLC) technology: More than 20 years of intense research. Trans. Emerg. Telecommun. Technol. 2019, 30, e3575. [Google Scholar] [CrossRef]
  4. Berrou, C.; Glavieux, A.; Thitimajshima, P. Near Shannon limit error-correcting coding and decoding: Turbo-codes. 1. In Proceedings of the ICC ’93—IEEE International Conference on Communications, Geneva, Switzerland, 23–26 May 1993; Volume 2, pp. 1064–1070. [Google Scholar] [CrossRef]
  5. Gallager, R. Low-density parity-check codes. IRE Trans. Inf. Theory 1962, 8, 21–28. [Google Scholar] [CrossRef]
  6. Arikan, E. Channel polarization: A method for constructing capacity-achieving codes for symmetric binary-input memoryless channels. IEEE Trans. Inf. Theory 2009, 55, 3051–3073. [Google Scholar] [CrossRef]
  7. Caire, G.; Taricco, G.; Biglieri, E. Bit-interleaved coded modulation. IEEE Trans. Inf. Theory 1998, 44, 927–946. [Google Scholar] [CrossRef]
  8. Juwono, F.H.; Guo, Q.; Huang, D.; Wong, K.P. Deep clipping for impulsive noise mitigation in OFDM-based power-line communications. IEEE Trans. Power Deliv. 2014, 29, 1335–1343. [Google Scholar] [CrossRef]
  9. Caire, G.; Al-Naffouri, T.Y.; Narayanan, A.K. Impulse noise cancellation in OFDM: An application of compressed sensing. In Proceedings of the IEEE International Symposium on Information Theory, Toronto, ON, Canada, 6–11 July 2008; pp. 1293–1297. [Google Scholar] [CrossRef]
  10. Lin, J.; Nassar, M.; Evans, B.L. Impulsive noise mitigation in powerline communications using sparse Bayesian learning. IEEE J. Sel. Areas Commun. 2013, 31, 1172–1183. [Google Scholar] [CrossRef]
  11. Liu, Z.; Li, Y.; Shi, S. Impulsive noise suppressing method in power line communication system using sparse iterative covariance estimation. Radio Sci. 2022, 57, e2022RS007424. [Google Scholar] [CrossRef]
  12. Sadeghi, N.; Azghani, M. Deep learning based channel estimation in PLC systems. Ann. Telecommun. 2024, 80, 581–590. [Google Scholar] [CrossRef]
  13. Letizia, N.A.; Tonello, A.M.; Righini, D. Learning to synthesize noise: The multiple conductor power line case. In Proceedings of the 2020 IEEE International Symposium on Power Line Communications and Its Applications (ISPLC), Malaga, Spain, 11–13 May 2020; pp. 1–6. [Google Scholar] [CrossRef]
  14. Chien, Y.R.; Chou, P.H.; Peng, Y.J.; Huang, C.Y.; Tsao, H.W.; Tsao, Y. NGGAN: Noise generation GAN based on the practical measurement dataset for narrowband powerline communications. IEEE Trans. Instrum. Meas. 2024, 74, 2505415. [Google Scholar] [CrossRef]
  15. Ouyang, S.; Liu, G.; Huang, T.; Liu, Y.; Xu, W.; Wu, Y. Impulsive Noise Suppression Network for Power Line Communication. IEEE Commun. Lett. 2024, 28, 2628–2632. [Google Scholar] [CrossRef]
  16. Tang, Q.; Qu, S.; Zheng, W.; Tu, Z. Fast finite-time quantized control of multi-layer networks and its applications in secure communication. Neural Netw. 2025, 185, 107225. [Google Scholar] [CrossRef]
  17. Mohsin, M.; Rovetta, S.; Masulli, F.; Greco, D.; Cabri, A. Deep Learning-Powered Computer Vision System for Selective Disassembly of Waste Printed Circuit Boards. In Proceedings of the 2024 IEEE 8th Forum on Research and Technologies for Society and Industry Innovation (RTSI), Milano, Italy, 18–20 September 2024; pp. 115–119. [Google Scholar] [CrossRef]
  18. Krull, A.; Buchholz, T.O.; Jug, F. Noise2void-learning denoising from single noisy images. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Long Beach, CA, USA, 16–20 June 2019; pp. 2129–2137. [Google Scholar]
  19. Mansour, Y.; Heckel, R. Zero-shot noise2noise: Efficient image denoising without any data. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Vancouver, BC, Canada, 17–24 June 2023; pp. 14018–14027. [Google Scholar]
  20. Kim, I.H.; Varadarajan, B.; Dabak, A. Performance analysis and enhancements of narrowband OFDM powerline communication systems. In Proceedings of the 2010 First IEEE International Conference on Smart Grid Communications, Gaithersburg, MD, USA, 4–6 October 2010; pp. 362–367. [Google Scholar] [CrossRef]
  21. Tonello, A.M.; Versolatto, F.; Béjar, B.; Zazo, S. A fitting algorithm for random modeling the PLC channel. IEEE Trans. Power Deliv. 2012, 27, 1477–1484. [Google Scholar] [CrossRef]
Figure 1. Power line carrier communication system model.
Figure 2. Impulsive noise sample.
Figure 3. UNet-INSN self-supervised learning framework.
Figure 4. The network structure of UNet-INSN.
Figure 5. Global mask generation and mapping.
Figure 6. The flow diagram of the proposed algorithm, UNet-INSN.
Figure 7. Loss function curves of UNet-INSN training and validation.
Figure 8. Comparison of bit error rate curves for different λ.
Figure 9. Comparison of bit error rate curves for different η.
Figure 10. Comparison of bit error rate curves for different impulsive noise occurrence probabilities under ideal channel estimation.
Figure 11. Comparison of bit error rate curves under different SINR conditions and ideal channel estimation.
Figure 12. Comparison of bit error rate curves for different impulsive noise occurrence probabilities under non-ideal channel estimation.
Figure 13. Comparison of bit error rate curves under different SINR conditions and non-ideal channel estimation.
Table 1. UNet-INSN parameters.

| Area | Layer | Name | Kernel Size | Number of Channels | Stride | Padding | Activation Function |
|---|---|---|---|---|---|---|---|
| Downsampling | 1 | Conv | 3 × 3 | 2/64 | 1 | 1 | ReLU |
| | | Conv | 3 × 3 | 64 | 1 | 1 | ReLU |
| | | Maxpool | 1 × 2 | 64/128 | (1,2) | - | - |
| | 2 | Conv | 3 × 3 | 128 | 1 | 1 | ReLU |
| | | Conv | 3 × 3 | 128 | 1 | 1 | ReLU |
| | | Maxpool | 1 × 2 | 128/256 | (1,2) | - | - |
| | 3 | Conv | 3 × 3 | 256 | 1 | 1 | ReLU |
| | | Conv | 3 × 3 | 256 | 1 | 1 | ReLU |
| | | Maxpool | 1 × 2 | 256/512 | (1,2) | - | - |
| | 4 | Conv | 3 × 3 | 512 | 1 | 1 | ReLU |
| | | Conv | 3 × 3 | 512 | 1 | 1 | ReLU |
| | | Maxpool | 1 × 2 | 512/1024 | (1,2) | - | - |
| Transition | 5 | Conv | 3 × 3 | 1024 | 1 | 1 | ReLU |
| | | Conv | 3 × 3 | 1024 | 1 | 1 | ReLU |
| Upsampling | 6 | Up | 1 × 2 | 1024 | 2 | 1 | ReLU |
| | | Conv | 3 × 3 | 1024/512 | 1 | 1 | ReLU |
| | | Conv | 3 × 3 | 512 | 1 | 1 | ReLU |
| | 7 | Up | 1 × 2 | 512 | 2 | 1 | ReLU |
| | | Conv | 3 × 3 | 512/256 | 1 | 1 | ReLU |
| | | Conv | 3 × 3 | 256 | 1 | 1 | ReLU |
| | 8 | Up | 1 × 2 | 256 | 2 | 1 | ReLU |
| | | Conv | 3 × 3 | 256/128 | 1 | 1 | ReLU |
| | | Conv | 3 × 3 | 128 | 1 | 1 | ReLU |
| | 9 | Up | 1 × 2 | 128 | 2 | 1 | ReLU |
| | | Conv | 3 × 3 | 128/64 | 1 | 1 | ReLU |
| | | Conv | 3 × 3 | 64 | 1 | 1 | ReLU |
| Output | 10 | Conv | 1 × 1 | 64/2 | 1 | - | ReLU |
Table 2. Simulation parameters for MIMO-PLC system based on UNet-INSN.

| Simulation Parameter | Setting |
|---|---|
| Simulation Platform | MIMO-PLC System |
| Port Configuration | 2 × 2 |
| Sampling Frequency | 25 MHz |
| OFDM Length | 40.96 μs |
| FFT/IFFT Length | 1024 |
| Noise Model | BG Model |
| p | {0.01, 0.05} |
| SINR | {−10, −15} dB |
| Channel Environment | MIMO Multipath Channel Extension Model |
| Modulation Scheme | QPSK |
| Channel Coding | 1/3-Turbo Code |
| Number of Effective Subcarriers | 512 |
| Channel Estimation | Ideal/Non-Ideal |
| Number of Symbols | 21 × 200 |
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
