Article

A Temporal Methodology for Assessing the Performance of Concatenated Codes in OFDM Systems for 4K-UHD Video Transmission

by
Thiago de A. Costa *, Alex S. Macedo *, Edemir M. C. Matos *, Bruno S. L. Castro *, Fabricio de S. Farias *, Caio M. M. Cardoso *, Gervásio P. dos S. Cavalcante * and Fabricio J. B. Barros *

Computer and Telecommunications Laboratory (LCT), Institute of Technology (ITEC), Federal University of Pará (UFPA), Belém 66075-110, Brazil

* Authors to whom correspondence should be addressed.
Appl. Sci. 2024, 14(9), 3581; https://doi.org/10.3390/app14093581
Submission received: 24 January 2024 / Revised: 5 April 2024 / Accepted: 5 April 2024 / Published: 24 April 2024
(This article belongs to the Section Electrical, Electronics and Communications Engineering)

Abstract

In 4K ultra-resolution video transmission, the communication channel is a critical point of information degradation, inevitably leading to errors at reception. To preserve transmission fidelity, advanced technologies such as digital video broadcasting terrestrial (DVB-T) and its evolutionary successor, digital video broadcasting terrestrial second generation (DVB-T2), are used to mitigate the effects of data transmission errors. Within this scenario, this research presents an innovative methodology for the temporal analysis of 4K ultra-resolution video quality under the influence of additive white Gaussian noise (AWGN) and Rayleigh channels. The analysis is carried out with concatenated coding schemes, specifically the Bose–Chaudhuri–Hocquenghem concatenated with low-density parity check (BCH-LDPC) and Reed–Solomon concatenated with convolutional (RS-CONV) coders. A more comprehensive understanding of video quality can be attained by considering its temporal variations, a crucial aspect of the ongoing evolution of technological paradigms. In this study, the Structural Similarity Index (SSIM) serves as the main quality metric during the simulations, and the simulated Peak Signal-to-Noise Ratio (PSNR) values corroborate the SSIM-based evaluations. Under the 64-QAM modulation scheme, BCH-LDPC significantly outperforms RS-CONV, yielding video quality levels that approach or surpass those achieved by RS-CONV under QPSK (Quadrature Phase Shift Keying) modulation and thereby increasing spectral efficiency. This enhancement is evidenced by SSIM gains exceeding 78% on average. Computing the average gains between distinct technologies in video quality analysis furnishes a robust and comprehensive evaluation framework, empowering stakeholders to make informed decisions within this domain.

1. Introduction

During video transmission, the channel introduces noise and degrades the transmitted information. To minimize these negative impacts, systems employ a specialized category of channel coding techniques known as forward error correction (FEC). These strategies incorporate codes with error-correcting capabilities, including Reed–Solomon, convolutional, Bose–Chaudhuri–Hocquenghem (BCH), and low-density parity-check (LDPC) codes [1], and have proven highly effective in reducing transmission errors.
The main advantage of channel coding lies in enhancing the system’s performance over an uncoded transmission. Certain digital systems, exemplified by the digital video broadcasting terrestrial (DVB-T) system, employ concatenated code pairs, which use two levels of coding: an internal and an external code. With the escalating demand for broadcasting services, a more efficient standard became necessary [2]. This necessity led to the inception of DVB-T2 (digital video broadcasting—second-generation terrestrial), which offers advanced technology and augmented capacity across diverse terrestrial domains. Several notable enhancements were introduced, with the evolution of the FEC codes standing out as one of the most prominent. Assessing this transition is essential to gauge the efficacy of the implemented improvements and to understand their ramifications.
Although the DVB-T and DVB-T2 systems prove effective in error correction, it is essential to acknowledge the gap in current research. Many studies comparing or evaluating their performances primarily concentrate on technical aspects such as bit error rate and error correction capability, often neglecting user experience quality, particularly video quality [3]. Given the importance of video degradation in Quality of Experience (QoE), a methodology has been devised to evaluate video quality temporally.

2. Related Work

The performance and parameters of DVB-T/T2 systems have undergone extensive evaluation across diverse research endeavors conducted in recent years [4,5,6,7,8,9,10,11]. For instance, in [4], evaluation of the DVB-T/T2 Lite system’s performance utilizing the multiple-input single-output (MISO) transmission technique was conducted. In [5], an enhanced receiver for DVB-T systems was proposed to mitigate the impacts induced by channel imperfections, particularly in phase and quadrature. Similarly, Ref. [6] delved into the effects of time and frequency deviation on radar performance within the DVB-T system. Moreover, Ref. [7] simulated a multiple-input multiple-output (MIMO) scenario over DVB-T2 and LDPC channel coding, employing the maximum likelihood estimation technique. Additionally, Ref. [8] focused on analyzing the performance degradation attributed to phase and quadrature (IQ) imperfections in the orthogonal frequency division multiplexing (OFDM) modulator/demodulator of DVB-T and DVB-T2 Lite systems.
The quality of DVB-T2 transmission was analyzed in [9] under fixed reception conditions to monitor the transition process from analog to digital terrestrial TV in Indonesia. Furthermore, Refs. [10,11] proposed the adoption of flexible waveform techniques such as the universal filtered multicarrier (UFMC) and filter bank multicarrier (FBMC) techniques for 5G networks to enhance the spectral efficiency of DVB-T2. Despite these advancements, the extant literature evaluating the performance of DVB-T/T2 systems overlooks crucial aspects for end users, notably, video quality. With the scaling of new streaming platforms and ascending demand for high-quality video services, a pressing need arises to improve user experience, particularly by considering human perception [12].
The introduction of DVB-T2 heralded critical technological changes compared to its predecessor, notably impacting system performance. One of the main disparities lies in the error correction coding schemes employed, with DVB-T standardizing the concatenated pair as Reed–Solomon and convolutional (RS-CONV) for external and internal error correction, respectively. Conversely, DVB-T2 incorporates the BCH code concatenated with LDPC.
Despite the advantages presented by DVB-T2 over DVB-T, both technologies persist in numerous countries. For instance, DVB-T2 is the dominant technology in Europe, while the RS-CONV pair remains in use in countries like Brazil and Japan, which adopted the integrated-services digital broadcasting terrestrial (ISDB-T) system (https://www.dibeg.org/world/ (accessed on 4 April 2024)). Evaluating the improvements obtained by exchanging concatenated code pairs in this context is therefore imperative. Consequently, several studies have explored the performance of FEC codes [13,14] and analyzed the performance gains achieved with rotated constellation techniques within the DVB-T2 system [2]. These studies are often assessed with the bit error rate (BER) metric, widely utilized in digital systems [15,16,17,18].
In addition to system metrics, numerous studies have evaluated various user metrics concerning video quality within the realm of Quality of Experience (QoE) [19]. Objective metrics have also been employed to analyze video quality in multimedia systems [20,21,22]. However, these studies typically overlook the temporal variations that influence QoE, a critical gap identified in [23,24,25,26]. Temporal fluctuations in video quality can significantly impair QoE [26], necessitating a quantitative analysis of video quality related to QoE. Accordingly, this study proposes a novel methodology for temporal (frame-by-frame) analysis of 4K ultra-resolution video quality.
The evaluation of video quality employs the SSIM/PSNR metrics. Although the SSIM serves as the primary quality evaluation metric in this study, the PSNR is also considered as an additional measure. Notably, the SSIM provides a measure closely aligned with human perception, as it assesses the quality of digital images relative to the original image by considering factors such as luminance, contrast, and structure [27,28].
The SSIM/PSNR values are obtained through a new methodology in which the set of frames representing the video is subjected to varying noise levels, simulating fluctuations in channel conditions during video transmission. This methodology also yields the average percentage gain in the SSIM of one encoder relative to the other.
In summary, this article’s contributions include:
  • Utilization of the SSIM for temporal video quality evaluation, aligning closely with human perception;
  • Development of a novel methodology assessing SSIM/PSNR values through frame-by-frame analysis under varying noise levels, simulating channel condition fluctuations during video transmission;
  • Consideration of temporal variations enabling the generation of quantitative data for more accurate analysis of technology performance regarding video quality, aiding professionals and researchers in technology selection;
  • Identification of the most efficient techniques in reducing quality degradation, facilitating prediction and optimization of video quality, particularly for streaming ultra-resolution videos.
The subsequent sections of this document are structured as follows: Section 3 elucidates the methodology and metrics employed to derive the results, Section 4 showcases the outcomes attained in this investigation, and Section 5 deliberates upon the findings. The final insights are encapsulated in Section 6.

3. Methodology

In this section, we delineate the methodology employed for temporal evaluation, as depicted in Figure 1. The approach encompasses several stages, each elucidated in detail below. Our objective is to provide a comprehensive explanation of our evaluation process and ensure methodological transparency.
Four selected frames from the Cross video are showcased in Figure 2 to further illustrate the methodology. The noise level visibly escalates in each frame relative to its predecessor. This gradual amplification follows the methodology described above, in which the noise intensity systematically increases as the video progresses. These controlled alterations in channel conditions allow us to assess the efficacy of the various techniques amidst fluctuating noise levels and, ultimately, to gain valuable insights into the robustness and adaptability of the video transmission system.

3.1. Video Encoding/Decoding

In the Video Encoding block, the original YUV file is encoded to the H.264 standard with a frame rate of 50 FPS and a maximum duration of 10 s using FFmpeg (FFmpeg is a command line tool used to convert multimedia files between formats [29]). YUV is a raw format commonly used in video compression studies [30]. The videos used in the simulations are Cross, Crowd, Duck, Tree, and Park, which were obtained from Xiph.org (https://media.xiph.org/video/derf/ (accessed on 4 April 2024)). The resolution, frame rate, number of frames, length of GOP, B-frames per GOP, and Quantization Parameter (QP) are considered as video codification parameters, as presented in Table 1. The Video Decoding block transforms the received H.264 video into YUV format, allowing for further processing and analysis.
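As a concrete illustration of this step, the sketch below drives the encoding and decoding described above from Python using FFmpeg with the codification parameters of Table 1; the file names are placeholders, and the exact FFmpeg invocation used by the authors is not specified in the paper.

```python
import subprocess

# Encode a raw 4K YUV 4:2:0 source to H.264 with the Table 1 parameters
# (50 FPS, 10 s maximum, GOP = 20, 3 B-frames per GOP, QP = 37).
encode_cmd = [
    "ffmpeg", "-y",
    "-f", "rawvideo", "-pix_fmt", "yuv420p",   # raw YUV input format
    "-s", "3840x2160", "-r", "50",             # 4K resolution, 50 FPS
    "-i", "park.yuv",                          # placeholder input file
    "-c:v", "libx264", "-qp", "37",            # constant quantization parameter
    "-g", "20", "-bf", "3",                    # GOP length and B-frames per GOP
    "-t", "10",                                # cap the duration at 10 s
    "park.h264",
]
subprocess.run(encode_cmd, check=True)

# Decode the received bitstream back to raw YUV for SSIM/PSNR analysis.
decode_cmd = [
    "ffmpeg", "-y", "-i", "park_received.h264",
    "-f", "rawvideo", "-pix_fmt", "yuv420p", "park_received.yuv",
]
subprocess.run(decode_cmd, check=True)
```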

3.2. Channel Coding/Decoding

In the Channel Coding block, the video is converted into binary data. Channel coding then adds redundant bits to the original data to enhance the robustness of the transmission, and two concatenated code pairs, RS-CONV and BCH-LDPC, are applied sequentially. Initially, an external encoder (RS or BCH) adds redundancy to the binary information; subsequently, a second layer of redundancy is added by the internal encoder (CONV or LDPC, respectively), as presented in Figure 3. The Channel Decoding block decodes the received information by performing the inverse operations of RS-CONV and BCH-LDPC, thus recovering the binary information corresponding to the received video.
A brief description of the codes is presented below.
  • BCH (Bose–Chaudhuri–Hocquenghem): BCH codes represent block codes that function on multiple bits instead of individual ones. Employing a BCH(n, k) code enables the encoding of k message bits and the generation of an encoded n-bit codeword, where n = 2^m − 1 with m ≥ 3 [31];
  • LDPC (low-density parity check): LDPC codes utilize a sparse binary parity-check matrix, in which the 1 entries are few relative to the 0 entries. LDPC coding encompasses diverse construction methodologies, including matrix-based and graph-based implementations [32];
  • RS (Reed–Solomon): RS codes are systematic cyclic linear block codes that operate on symbols of width m bits, where m > 2. RS codes are designed in such a way that every possible m-bit word is a valid symbol [33];
  • CONV (convolutional codes): Unlike block encoding, the output of the convolutional encoder is not in block format but in the form of a coded sequence generated from an input information sequence. The encoder generates redundancy through convolutions. The decoder utilizes the redundancy in the coded sequence to determine which message sequences are sent through an error correction action. Thus, in this type of error-correcting code, a set of m symbols is transformed into a set of n symbols [34].
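A minimal sketch of the two-stage concatenation described above follows; the encoder and decoder arguments are hypothetical placeholders (the actual RS, convolutional, BCH, and LDPC implementations follow the parameters in Table 2), and only the ordering of the outer and inner stages is illustrated.

```python
import numpy as np
from typing import Callable

def concatenated_encode(info_bits: np.ndarray,
                        outer_encode: Callable[[np.ndarray], np.ndarray],
                        inner_encode: Callable[[np.ndarray], np.ndarray]) -> np.ndarray:
    # The outer code (RS or BCH) adds the first layer of redundancy, then the
    # inner code (convolutional or LDPC) protects the outer codeword (Figure 3).
    return inner_encode(outer_encode(info_bits))

def concatenated_decode(rx_bits: np.ndarray,
                        inner_decode: Callable[[np.ndarray], np.ndarray],
                        outer_decode: Callable[[np.ndarray], np.ndarray]) -> np.ndarray:
    # Decoding reverses the order: the inner decoder runs first and the
    # outer decoder cleans up the residual errors.
    return outer_decode(inner_decode(rx_bits))
```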
Table 2 lists the parameters of the channel encoders and the transmission and reception process of the OFDM systems. These parameters are based on the standards defined in [35,36].

3.3. Orthogonal Frequency Division Multiplexing Symbol TX/RX

The OFDM symbol TX stage refers to the transmission of symbols in an OFDM (orthogonal frequency division multiplexing) communication system. Figure 4 presents a basic diagram of the steps required to create OFDM symbols. The first step converts the binary information into complex symbols generated from modulation schemes such as PSK (Phase Shift Keying) or QAM (Quadrature Amplitude Modulation). Next, the Serial-to-Parallel (S/P) block divides the serial symbol stream into subgroups, and these subgroups are modulated onto orthogonal subcarriers.
OFDM symbol RX refers to the reception process of OFDM symbols in a communication system. After dividing the received signal into individual subcarriers through FFT, channel estimation and equalization occur, using the characteristics of the channel to equalize the received signal. Next, symbols on each subcarrier are demodulated using appropriate modulation schemes. Subsequently, the demodulated symbols are mapped back to their original bit sequences.
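The sketch below illustrates the OFDM symbol generation and recovery just described, using the FFT length, number of subcarriers, and cyclic prefix size from Table 2; the simplified subcarrier mapping and the one-tap zero-forcing equalizer are assumptions rather than the exact DVB-T/T2 processing chain.

```python
import numpy as np

N_FFT = 2048     # FFT length (Table 2)
N_SC = 1705      # number of active subcarriers (Table 2)
CP_LEN = 512     # cyclic prefix length (Table 2)

def ofdm_tx(symbols: np.ndarray) -> np.ndarray:
    """Map one block of N_SC complex PSK/QAM symbols to a time-domain OFDM symbol."""
    freq = np.zeros(N_FFT, dtype=complex)
    freq[:N_SC] = symbols                          # simplified subcarrier mapping
    time = np.fft.ifft(freq) * np.sqrt(N_FFT)      # IFFT with power normalization
    return np.concatenate([time[-CP_LEN:], time])  # prepend the cyclic prefix

def ofdm_rx(rx: np.ndarray, channel_freq=None) -> np.ndarray:
    """Remove the cyclic prefix, apply the FFT, and optionally equalize."""
    time = rx[CP_LEN:CP_LEN + N_FFT]
    freq = np.fft.fft(time) / np.sqrt(N_FFT)
    if channel_freq is not None:
        freq = freq / channel_freq                 # one-tap zero-forcing equalization
    return freq[:N_SC]                             # demap the active subcarriers
```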

3.4. Channel Additive White Gaussian Noise/Rayleigh

For reasons of simplification and processing, the simulations use baseband signal transmissions; a passband signal can be represented by an equivalent complex baseband signal. The noise variation is applied to the AWGN and Rayleigh channels, subjecting different video segments to varying channel conditions. The noise level increases gradually as the video frame sequence progresses, which is achieved by adjusting the Signal-to-Noise Ratio (SNR). The relationship between the SNR and the noise variance is defined by Equation (1) [37].
$\mathrm{SNR}_{\mathrm{dB}} = 10 \log_{10}\!\left(\dfrac{S}{\sigma^{2}}\right)$  (1)
where the following abbreviations apply:
  • SNR_dB is the SNR in decibels;
  • S is the signal power;
  • σ² is the noise variance.
The methodology employed results in a significant loss of quality in the information, as depicted in Figure 5.
The simulations use the Rayleigh channel gain and delay values obtained from field tests performed by the Brazilian Association of Radio and Television Broadcasters (ABERT) and Mackenzie University [38]. Table 3 presents the gains and delays.
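The following sketch shows how the noise level can be scheduled to grow along the frame sequence and how the AWGN variance follows from Equation (1); the linear SNR schedule and its endpoint values are illustrative assumptions, and the multipath taps reproduce the gains and delays of Table 3.

```python
import numpy as np

rng = np.random.default_rng(0)

def add_awgn(signal: np.ndarray, snr_db: float) -> np.ndarray:
    """Add complex AWGN with a variance derived from Equation (1)."""
    sig_power = np.mean(np.abs(signal) ** 2)
    noise_var = sig_power / (10.0 ** (snr_db / 10.0))     # sigma^2 = S / 10^(SNR_dB/10)
    noise = np.sqrt(noise_var / 2.0) * (rng.standard_normal(signal.shape)
                                        + 1j * rng.standard_normal(signal.shape))
    return signal + noise

# Gradually degrade the channel as the video advances: each segment of frames is
# transmitted at a lower SNR than the previous one (endpoint values are illustrative).
num_segments = 10
snr_schedule_db = np.linspace(30.0, 5.0, num_segments)

# Rayleigh multipath profile from Table 3 (delays in seconds, gains in dB), which can
# be applied as a tapped-delay-line filter before the AWGN stage.
tap_delays_s = np.array([0.0, 0.15e-6, 2.22e-6, 3.05e-6, 5.86e-6, 5.93e-6])
tap_gains_db = np.array([0.0, -13.8, -16.2, -14.9, -13.6, -16.4])
```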

3.5. Calculation of Structural Similarity Index/Peak Signal-to-Noise Ratio

With access to both the original and received videos, an objective evaluation of transmission quality can be obtained by computing the SSIM/PSNR between them.

3.5.1. Structural Similarity Index

The SSIM is responsible for comparing each frame of the original video and degraded video sequences to quantify the video quality. The SSIM is based on the idea that natural images are highly structured; their pixels have a strong dependency, particularly when they are spatially close. Thus, a strong dependency returns an index close to 1 (higher quality), whereas a weak dependency returns an index close to 0 (lower quality) [39]. The SSIM is given by Equation (2).
$\mathrm{SSIM}(x, y) = \dfrac{(2\mu_{x}\mu_{y} + c_{1})(2\sigma_{xy} + c_{2})}{(\mu_{x}^{2} + \mu_{y}^{2} + c_{1})(\sigma_{x}^{2} + \sigma_{y}^{2} + c_{2})}$  (2)
where the following abbreviations apply:
  • x and y are the original and degraded frames (or corresponding windows) being compared;
  • μ_x and μ_y are the means of x and y, respectively;
  • σ_x² and σ_y² are the variances of x and y, respectively;
  • σ_xy is the covariance of x and y;
  • c_1 and c_2 are constants that stabilize the division when the denominator is close to zero.
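A compact sketch of Equation (2) is shown below, computed from whole-frame statistics for simplicity; practical evaluations typically use a windowed implementation (e.g., scikit-image's structural_similarity), and the stabilizing constants follow the common choice c1 = (0.01·L)² and c2 = (0.03·L)², which is an assumption rather than a detail stated in the paper.

```python
import numpy as np

def ssim_global(x: np.ndarray, y: np.ndarray, data_range: float = 255.0) -> float:
    """Equation (2) evaluated with whole-frame statistics; x and y are the
    original and degraded luma planes of one frame."""
    x = x.astype(np.float64)
    y = y.astype(np.float64)
    c1 = (0.01 * data_range) ** 2          # stabilizing constants (assumed values)
    c2 = (0.03 * data_range) ** 2
    mu_x, mu_y = x.mean(), y.mean()
    var_x, var_y = x.var(), y.var()
    cov_xy = np.mean((x - mu_x) * (y - mu_y))
    return float(((2 * mu_x * mu_y + c1) * (2 * cov_xy + c2))
                 / ((mu_x ** 2 + mu_y ** 2 + c1) * (var_x + var_y + c2)))
```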

3.5.2. Peak Signal-to-Noise Ratio

The PSNR value is calculated as:
$\mathrm{PSNR} = 10 \cdot \log_{10}\!\left(\dfrac{\mathrm{MAX}^{2}}{\mathrm{MSE}}\right)$
where MAX is the maximum possible pixel intensity (which, in 8-bit images, is 255), and MSE is the mean square error between the reference image pixel value and the compressed image pixel value [27].
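A direct translation of the PSNR definition above into Python, assuming 8-bit frames:

```python
import numpy as np

def psnr(reference: np.ndarray, degraded: np.ndarray, max_val: float = 255.0) -> float:
    """Peak Signal-to-Noise Ratio in dB between two frames of equal shape."""
    mse = np.mean((reference.astype(np.float64) - degraded.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")                 # identical frames
    return 10.0 * np.log10(max_val ** 2 / mse)
```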

4. Results

This section presents the temporal analysis of the SSIM/PSNR performance of the RS-CONV and BCH-LDPC concatenated pairs over AWGN and Rayleigh channels with QPSK-, 16-QAM-, and 64-QAM-modulated signals. The proposed methodology was used to calculate the SSIM/PSNR values of the original and resulting videos obtained from the simulations. Each video was simulated over two channels, three modulation schemes, and two concatenated code pairs, with each configuration repeated 33 times to account for statistical variability, totaling 990 simulations. Table 4 provides comprehensive details of all performed simulations.
Figure 6 and Figure 7 serve as examples of how the data generated by the methodology behave; due to space constraints, not all figures are shown. In Figure 6 and Figure 7, the colored curves are the SSIM values extracted from the videos resulting from the simulations, and the reference curves are the SSIM values of the original videos. The colored points mark where quality losses occur; as the frame sequence advances, the loss of quality increases, and it is most severe for RS-CONV/64-QAM.
This increase in losses follows the adopted methodology, in which the noise inserted into the fragments that compose the video is gradually increased, raising the BER and producing sequential frame losses. In practice, this emulates the changes in channel conditions that may occur while the video is transmitted to the user. It can be concluded that techniques with fewer erroneous frames are more robust and tend to provide more consistent video quality.

5. Discussion

An important robustness parameter is the number of frames transmitted without error, which directly affects playback and video quality. From the extracted SSIM/PSNR values, it is possible to calculate, for all the videos and defined scenarios, the proportion of frames transmitted without losses; the mean values obtained are presented in Table 5. Frames lost during transmission can result in visual artifacts such as blurring, jumps, or distortions in the image, as well as abrupt audio cuts and loss of synchronization between image and sound. These problems can impair the end user’s QoE and compromise the understanding of the transmitted content, making it essential to maintain a stable, high-speed connection during transmission and to use appropriate, up-to-date equipment.
Examining the values in Table 5, it becomes evident that BCH-LDPC with the QPSK modulation scheme presents the best performance, obtaining values above 70% for most videos. Considering the number of frames transmitted without error, the mean values indicated in Table 6 and Table 7 can be obtained. The SSIM results show that, for all the videos simulated with BCH-LDPC/64-QAM, the values are close to or higher than those simulated with RS-CONV/QPSK. This results in SSIM gains close to or above 78% on average for BCH-LDPC in relation to RS-CONV. As an additional metric, the PSNR reinforces these findings, exhibiting similar behavior: the BCH-LDPC scheme demonstrates greater gains as conditions become more severe, with the largest gains observed for Rayleigh/64-QAM.
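As an illustration of how the figures in Table 5, Table 6 and Table 7 can be derived from the per-frame measurements, the sketch below computes the loss-free frame fraction, the percentage SSIM gain, and the PSNR gain in dB; the exact aggregation used by the authors is not detailed in the text, so these definitions are assumptions.

```python
import numpy as np

def lossless_fraction(ssim_frames: np.ndarray, ref_frames: np.ndarray,
                      tol: float = 1e-6) -> float:
    """Fraction of frames received without quality loss (Table 5-style figure):
    frames whose SSIM matches the reference video's per-frame SSIM within a tolerance."""
    return float(np.mean(np.abs(ssim_frames - ref_frames) <= tol))

def mean_ssim_gain_percent(ssim_bch_ldpc: np.ndarray, ssim_rs_conv: np.ndarray) -> float:
    """Assumed gain definition: relative improvement of the mean SSIM of BCH-LDPC
    over RS-CONV, expressed as a percentage (Table 6-style figure)."""
    return 100.0 * (ssim_bch_ldpc.mean() - ssim_rs_conv.mean()) / ssim_rs_conv.mean()

def mean_psnr_gain_db(psnr_bch_ldpc: np.ndarray, psnr_rs_conv: np.ndarray) -> float:
    """PSNR gain in dB as the difference of the mean per-frame PSNR values
    (Table 7-style figure)."""
    return float(psnr_bch_ldpc.mean() - psnr_rs_conv.mean())
```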
The values obtained indicate that the BCH-LDPC/64-QAM system can achieve video quality levels close to or higher than those of RS-CONV/QPSK, enabling the use of ultra-resolution videos while maintaining acceptable quality levels. Thus, even in adverse channel conditions, the BCH-LDPC pair significantly improves the video quality delivered to the user. In addition, it can satisfactorily meet the current demand for videos with increasingly high resolutions, underscoring the value of adopting BCH-LDPC in systems that currently use RS-CONV, such as the Japanese ISDB-T standard. These observations highlight the importance of the proposed methodology, which makes it possible to quantitatively determine the average performance gain between the DVB-T2/BCH-LDPC and DVB-T/RS-CONV systems in terms of objective metrics of the video quality delivered to the end user. In contrast, metrics limited to the physical layer, such as the BER, do not account for important QoE parameters.
In our future work, we plan to use the proposed methodology to assess other systems based on video quality, such as the techniques used by 5G, including FBMC and UFMC. These methods have been suggested as alternatives to the traditional OFDM transmission technique used in DVB-T2 systems, as mentioned in [10,11].

6. Conclusions

This paper presents a methodology for temporal analysis of 4K ultra-resolution video quality, allowing quantitative comparisons regarding the objective video quality metric of the SSIM. The results of using BCH-LDPC and RS-CONV encoders in different scenarios were analyzed and compared, considering the H.264 digital compression standard. These results contribute to research on variations in image quality during video transmission.
The simulation results indicate that the BCH-LDPC encoder performed better on both AWGN and Rayleigh channels, demonstrating greater robustness in multipath environments. The BCH-LDPC/64-QAM system, with its average gain of over 78% in the SSIM metric compared to that of RS-CONV, has proven its adaptability even under adverse channel conditions and with higher spectral efficiency. These values provide clear quantitative evidence that the BCH-LDPC/64-QAM system can achieve video quality levels that are comparable or even superior to those of RS-CONV/QPSK. This robust performance under challenging conditions further supports the argument for considering the BCH-LDPC/64-QAM system for adoption in other existing systems, such as the Japanese ISDB-T standard.
However, it is important to acknowledge that the proposed methodology has some disadvantages. For example, certain channel conditions or specific scenarios may have limitations that were not addressed in this study. Additionally, there may be additional costs associated with implementing the BCH-LDPC system compared to RS-CONV. Such considerations should be taken into account when evaluating the feasibility and practical applicability of this approach.
The guidelines for future work need to be expanded to consider the disadvantages identified earlier. It is recommended that we explore the limitations of the proposed methodology in different application scenarios. Additionally, it would be interesting to investigate ways to mitigate or overcome these limitations, either by adapting the methodology or developing complementary techniques. These efforts can help enhance the applicability and performance of the proposed system in various practical situations.

Author Contributions

Conceptualization, T.d.A.C. and F.d.S.F.; methodology, T.d.A.C., A.S.M. and B.S.L.C.; software, A.S.M., E.M.C.M. and C.M.M.C.; validation, F.d.S.F., B.S.L.C. and F.J.B.B.; formal analysis, G.P.d.S.C.; investigation, T.d.A.C.; resources, T.d.A.C.; data curation, T.d.A.C. and A.S.M.; writing—original draft preparation, T.d.A.C.; writing—review and editing, T.d.A.C. and A.S.M.; visualization, T.d.A.C.; supervision, F.J.B.B.; project administration, F.J.B.B.; funding acquisition, T.d.A.C. All authors have read and agreed to the published version of the manuscript.

Funding

This study’s publication fee was funded by PROPESP/UFPA and CAPES.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The data presented in this study are available on request from the corresponding author.

Acknowledgments

The authors thank the members of the Computer and Telecommunications Laboratory (LCT) at the Federal University of Pará (UFPA) for all the support provided.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Panda, C.; Bhanja, U. Energy Efficiency and BER analysis of Concatenated FEC Coded MIMO-OFDM-FSO System. In Proceedings of the 2022 IEEE Fourth International Conference on Advances in Electronics, Computers and Communications (ICAECC), Bengaluru, India, 10–11 January 2022; pp. 1–5.
  2. Ghayyib, H.S.; Mohammed, S.J. Performance Enhancement of FEC Code For DVB-T2 System by Using Rotated Constellations. In Proceedings of the 2021 1st Babylon International Conference on Information Technology and Science (BICITS), Babil, Iraq, 28–29 April 2021; pp. 234–238.
  3. Seufert, M.; Egger, S.; Slanina, M.; Zinner, T.; Hoßfeld, T.; Tran-Gia, P. A survey on quality of experience of HTTP adaptive streaming. IEEE Commun. Surv. Tutor. 2014, 17, 469–492.
  4. Polak, L.; Sotner, R.; Kufa, J.; Kratochvil, T. DVB-T2/T2-Lite using MISO Principle for Portable and Mobile Transmission Scenarios. In Proceedings of the 2021 44th International Conference on Telecommunications and Signal Processing (TSP), Brno, Czech Republic, 26–28 July 2021; pp. 34–37.
  5. Mohajeran, S.A.; Sadough, S.M.S. On the Interaction Between Joint Tx/Rx IQI and Channel Estimation Errors in DVB-T Systems. IEEE Syst. J. 2018, 12, 3271–3278.
  6. Bournaka, G.; Ummenhofer, M.; Cristallini, D.; Palmer, J.; Summers, A. Experimental Study for Transmitter Imperfections in DVB-T Based Passive Radar. IEEE Trans. Aerosp. Electron. Syst. 2018, 54, 1341–1354.
  7. Chakiki, M.A.F.; Astawa, I.G.P.; Budikarso, A. Performance Analysis of DVB-T2 System Based on MIMO Using Low Density Parity Check (LDPC) Code Technique and Maximum Likelihood (ML) Detection. In Proceedings of the 2020 International Electronics Symposium (IES), Surabaya, Indonesia, 29–30 September 2020; pp. 169–173.
  8. Polak, L.; Kratochvil, T. Measurement and evaluation of IQ-Imbalances in DVB-T and DVB-T2-Lite OFDM modulators. In Proceedings of the 2017 40th International Conference on Telecommunications and Signal Processing (TSP), Barcelona, Spain, 5–7 July 2017; pp. 555–558.
  9. Julianawati, L.; A’yun, Q.; Anggraeni, M.E.; Faradisa, R. Performance Evaluation of DVB-T2 TV Broadcast For Fixed Reception. In Proceedings of the 2019 International Electronics Symposium (IES), Surabaya, Indonesia, 27–28 September 2019.
  10. Honfoga, A.C.; Dossou, M.; Moeyaert, V. Performance comparison of new waveforms applied to DVB-T2 transmissions. In Proceedings of the 2020 IEEE International Symposium on Broadband Multimedia Systems and Broadcasting (BMSB), Paris, France, 27–29 October 2020.
  11. Honfoga, A.C.; Nguyen, T.T.; Dossou, M.; Moeyaert, V. Application of FBMC to DVB-T2: A Comparison vs Classical OFDM Transmissions. In Proceedings of the 2019 IEEE Global Conference on Signal and Information Processing (GlobalSIP), Ottawa, ON, Canada, 11–14 November 2019.
  12. Ksentini, A.; Taleb, T. QoE-oriented adaptive SVC decoding in DVB-T2. IEEE Trans. Broadcast. 2013, 59, 251–264.
  13. Cho, H.; Kim, I.; Song, H.; Lim, D.W. Concatenated schemes of Reed-Solomon and convolutional codes for GNSS. In Proceedings of the 2019 International Conference on Information and Communication Technology Convergence, ICTC 2019, Jeju Island, Republic of Korea, 16–18 October 2019; IEEE: Piscataway, NJ, USA, 2019; pp. 338–339.
  14. Kumar, P.S.; Raju, M.; Iqbul, M.A. Serial Concatenated Convolution Codes for Coded OFDM in Digital Audio Broadcasting Environment. In Proceedings of the 2019 International Conference on Intelligent Sustainable Systems (ICISS), Palladam, India, 21–22 February 2019; pp. 553–558.
  15. Shaheen, F.; Butt, M.F.U.; Agha, S.; Ng, S.X.; Maunder, R.G. Performance Analysis of High Throughput MAP Decoder for Turbo Codes and Self Concatenated Convolutional Codes. IEEE Access 2019, 7, 138079–138093.
  16. Sadkhan, S.B. Performance Evaluation of Concatenated Codes applied in Wireless Channels. In Proceedings of the 2019 1st AL-Noor International Conference for Science and Technology (NICST), Sulimanyiah, Iraq, 25–29 October 2019.
  17. Khalifa, O.O.; Ahmed, Z.; Esgiar, A.N.; Saeed, R.A.; Abdalla, A.H. Performance Analysis of LTE Codes System Using Various Modulation Techniques. In Proceedings of the 2019 1st AL-Noor International Conference for Science and Technology (NICST), Sulimanyiah, Iraq, 23 March 2020.
  18. Prihantio, R.W.; Anisah, I.; Wijayanti, A.; Anggraeni, M.E. Bit Error Rate Evaluation of Digital Terrestrial TV Broadcast Based on Field Measurement in Urban Area. In Proceedings of the 2020 International Electronics Symposium (IES), Surabaya, Indonesia, 29–30 September 2020.
  19. Vijayalakshmi, M.; Kulkarni, L. Analysis of Quality of Experience (QoE) in Video Streaming Over Wi-Fi in Real Time; Springer: Singapore, 2021.
  20. Sujak, B.A.; Murdaningtyas, C.D.; Anggraeni, M.E.; Sukaridhoto, S. Comparison of Video IPTV and Digital TV DVB-T2 Quality for Indonesia TV Broadcast. In Proceedings of the 2019 International Electronics Symposium (IES), Surabaya, Indonesia, 27–28 September 2019.
  21. Laniewski, D.; Schütz, B.; Aschenbruck, N. On the Impact of Burst Loss for QoE-Based Performance Evaluations for Video Streaming. In Proceedings of the 2020 IEEE 45th LCN Symposium on Emerging Topics in Networking (LCN Symposium), Sydney, Australia, 16–19 November 2020; pp. 78–87.
  22. Trioux, A.; Coudoux, F.X.; Corlay, P.; Gharbi, M. Performance Assessment of the Adaptive GoP-size extension of the Wireless SoftCast Video Scheme. In Proceedings of the 2020 10th International Symposium on Signal, Image, Video and Communications (ISIVC), Saint-Etienne, France, 7–9 April 2021.
  23. Trioux, A.; Coudoux, F.X.; Corlay, P.; Gharbi, M. Temporal information based GoP adaptation for linear video delivery schemes. Signal Process. Image Commun. 2020, 82, 115734.
  24. Chung, B.; Yim, C. Bi-Sequential Video Error Concealment Method Using Adaptive Homography-Based Registration. IEEE Trans. Circuits Syst. Video Technol. 2020, 30, 1535–1549.
  25. Kazemi, M.; Ghanbari, M.; Shirmohammadi, S. The Performance of Quality Metrics in Assessing Error-Concealed Video Quality. IEEE Trans. Image Process. 2020, 29, 5937–5952.
  26. Yim, C.; Bovik, A.C. Evaluation of temporal variation of video quality in packet loss networks. Signal Process. Image Commun. 2011, 26, 24–38.
  27. Benjak, J.; Hofman, D.; Knezović, J.; Žagar, M. Performance Comparison of H.264 and H.265 Encoders in a 4K FPV Drone Piloting System. Appl. Sci. 2022, 12, 6386.
  28. Setiadi, D.R.I.M. PSNR vs SSIM: Imperceptibility quality assessment for image steganography. Multimed. Tools Appl. 2021, 80, 8423–8444.
  29. Cheng, Y.; Liu, Q.; Zhao, C.; Zhu, X.; Zhang, G. Design and implementation of mediaplayer based on FFmpeg. In Software Engineering and Knowledge Engineering: Theory and Practice; Springer: Berlin/Heidelberg, Germany, 2012; Volume 2, pp. 867–874.
  30. Stankowski, J.; Dziembowski, A. IV-PSNR: Software for immersive video objective quality evaluation. SoftwareX 2023, 24, 101592.
  31. Yavasoglu, O.; Akcam, N.; Okan, T. Performance analysis of concatenated BCH and convolutional coded OFDM system. Int. J. Electron. 2020, 107, 1574–1587.
  32. Nursiaga, R.; Alaydrus, M. Efficiency of Satellite Transponder Using Paired Carrier Multiple Access with Low Density Parity Check. In Proceedings of the 2019 International Conference on Radar, Antenna, Microwave, Electronics, and Telecommunications (ICRAMET), Tangerang, Indonesia, 23–24 October 2019.
  33. Khan, M.; Afzal, S.; Manzoor, R. Hardware implementation of shortened (48,38) Reed Solomon forward error correcting code. In Proceedings of the 7th International Multi Topic Conference, 2003. INMIC 2003, Islamabad, Pakistan, 8–9 December 2003; pp. 90–95.
  34. Moreira, J.C.; Farrell, P.G. Essentials of Error-Control Coding; John Wiley & Sons: Hoboken, NJ, USA, 2006.
  35. ETSI EN 300 744; Digital Video Broadcasting (DVB): Frame Structure, Channel Coding and Modulation for Digital Terrestrial Television (DVB-T). European Standard: Antibes, France, 1997.
  36. Eizmendi, I.; Velez, M.; Gómez-Barquero, D.; Morgade, J.; Baena-Lecuyer, V.; Slimani, M.; Zoellner, J. DVB-T2: The Second Generation of Terrestrial Digital Video Broadcasting System. IEEE Trans. Broadcast. 2014, 60, 258–271.
  37. Dobrian, F.; Sekar, V.; Awan, A.; Stoica, I.; Joseph, D.; Ganjam, A.; Zhan, J.; Zhang, H. Understanding the impact of video quality on user engagement. ACM SIGCOMM Comput. Commun. Rev. 2011, 41, 362–373.
  38. Fructuosos, C.A.; Younis, C.; Conti, D.J.S.; Ito, J.Y.; Cacheiro, M.; ABERT/SET (Brazilian Association of Radio and Television Broadcasters/Brazilian Society of Television Engineering). Digital Television Systems—Brazilian Tests—Final Report Part 1, Report of SET/ABERT Group Tests; ANATEL: Sao Paulo, Brazil, 2000.
  39. Chen, G.H.; Yang, C.L.; Xie, S.L. Gradient-Based Structural Similarity for Image Quality Assessment. In Proceedings of the 2006 International Conference on Image Processing, Atlanta, GA, USA, 8–11 October 2006; pp. 2929–2932.
Figure 1. Simulation of video transmission.
Figure 2. Noise variation in four selected frames.
Figure 3. Concatenated BCH-LDPC coding scheme.
Figure 4. OFDM symbol generation.
Figure 5. BER × Eb/N0 for RS-CONV over the AWGN channel.
Figure 6. SSIM × frames for Park over AWGN.
Figure 7. SSIM × frames for Park over Rayleigh.
Table 1. Parameters of codification.

Parameter                            Value
Resolution                           4K (3840 × 2160)
Frame rate                           50 FPS (frames per second)
Number of frames                     500
Length of GOP (Group of Pictures)    20
B-frames per GOP                     3
Quantization parameter (QP)          37
Table 2. Simulation parameters.

Parameter                     Value
FFT Length                    2048
Number of Subcarriers         1705
Size of Cyclic Prefix         512
Modulation Scheme             QPSK, 16-QAM, and 64-QAM
Channel                       AWGN and Rayleigh
Reed–Solomon Configuration    188/204
Code Rate                     1/2
BCH Configuration             32,208/32,400
Table 3. Rayleigh channel parameters.

Tap    Delay (s)        Gain (dB)
1      0                0
2      0.15 × 10⁻⁶      −13.8
3      2.22 × 10⁻⁶      −16.2
4      3.05 × 10⁻⁶      −14.9
5      5.86 × 10⁻⁶      −13.6
6      5.93 × 10⁻⁶      −16.4
Table 4. Details of the simulations.

Component                                        Quantity
Concatenated codes (BCH-LDPC and RS-CONV)        2
Modulation schemes (QPSK, 16-QAM, and 64-QAM)    3
Channels (AWGN and Rayleigh)                     2
Videos (Tree, Crowd, Cross, Duck, and Park)      5
Repetitions                                      33
Total simulations                                2 × 3 × 2 × 5 × 33 = 990
Table 5. Portion of the videos without loss.

AWGN
                      Tree      Crowd     Cross     Duck      Park
BCH-LDPC/QPSK         96%       94.6%     93.8%     93.8%     92%
RS-CONV/QPSK          84%       74.48%    77.09%    72.39%    69.16%
BCH-LDPC/16-QAM       89%       85.4%     85.8%     84.2%     80%
RS-CONV/16-QAM        75.93%    64.34%    67.92%    61.99%    56.6%
BCH-LDPC/64-QAM       84%       75.4%     76%       73.8%     70.6%
RS-CONV/64-QAM        67.74%    47.19%    54.36%    52.04%    48.6%

Rayleigh
                      Tree      Crowd     Cross     Duck      Park
BCH-LDPC/QPSK         81.75%    73.15%    75.13%    72.49%    69.53%
RS-CONV/QPSK          56.49%    37.12%    44.62%    40.6%     36.21%
BCH-LDPC/16-QAM       78.38%    67.22%    68.61%    66.68%    63.8%
RS-CONV/16-QAM        38.19%    22.15%    27.53%    21.8%     20.58%
BCH-LDPC/64-QAM       72.93%    59.71%    63.14%    58.36%    56.42%
RS-CONV/64-QAM        20.39%    12.65%    14.47%    13.6%     12.12%
Table 6. Gain (%) in mean SSIM of BCH-LDPC in relation to RS-CONV.

AWGN
           Tree      Crowd     Cross     Duck      Park
QPSK       4.12%     9.78%     6.07%     12.47%    11.75%
16-QAM     6.31%     9.82%     7.58%     13.11%    13.38%
64-QAM     6.03%     11.93%    12.65%    14.57%    19.54%

Rayleigh
           Tree      Crowd     Cross     Duck      Park
QPSK       8.34%     18.79%    10.33%    14.75%    22.19%
16-QAM     14%       32.63%    12.82%    29.97%    31.15%
64-QAM     43.39%    35.42%    33.07%    47.85%    78.59%
Table 7. Gain (dB) in mean PSNR of BCH-LDPC in relation to RS-CONV.

AWGN
           Tree        Crowd      Cross       Duck       Park
QPSK       1.75 dB     3.37 dB    2.67 dB     2.82 dB    3.56 dB
16-QAM     2.81 dB     3.74 dB    3.36 dB     2.81 dB    3.23 dB
64-QAM     3.02 dB     4.65 dB    3.95 dB     3.34 dB    4.06 dB

Rayleigh
           Tree        Crowd      Cross       Duck       Park
QPSK       4.35 dB     6.17 dB    4.91 dB     3.29 dB    5.47 dB
16-QAM     7.15 dB     7.94 dB    6.95 dB     6.07 dB    7.01 dB
64-QAM     11.61 dB    8.71 dB    10.69 dB    7.16 dB    9.44 dB
