Peer-Review Record

BiGRU-Based Adaptive Receiver for Indoor DCO-OFDM Visible Light Communication

Photonics 2023, 10(9), 960; https://doi.org/10.3390/photonics10090960
by Yi Huang, Dahai Han *, Min Zhang, Yanwen Zhu and Liqiang Wang
Reviewer 1:
Reviewer 2: Anonymous
Reviewer 3:
Submission received: 9 August 2023 / Revised: 17 August 2023 / Accepted: 18 August 2023 / Published: 22 August 2023
(This article belongs to the Special Issue Visible Light Communications)

Round 1

Reviewer 1 Report (Previous Reviewer 1)

1. Do we need to consider the mean of additive noise?

The professionalism of English writing needs to be improved.

Author Response

We understand your concern regarding the consideration of the mean of additive noise in our approach. While we acknowledge the importance of understanding noise characteristics, our neural network-based method is designed to automatically extract noise features without the need for additional analysis. Our approach focuses on leveraging the capabilities of the neural network to directly learn and characterize noise properties from the data. We believe this approach aligns with the advancements in deep learning techniques that aim to automate and streamline complex tasks, and we have found promising results in our experiments.

Comments on the Quality of English Language: We greatly value your input on the quality of our English writing. We understand the importance of professionalism in language usage, and we apologize for any shortcomings in this regard. We will make sure to thoroughly review and enhance the English language aspects of our manuscript to ensure clarity and professionalism.

Once again, we appreciate your valuable feedback and will make the necessary improvements based on your suggestions. If you have any further questions or comments, please feel free to let us know.

Reviewer 2 Report (Previous Reviewer 3)

The revised paper has met my standard.

Author Response

We sincerely appreciate your time and effort in reviewing our manuscript. We are pleased to hear that you have found the improvements made to be satisfactory, and we are grateful for your positive assessment. Your feedback and suggestions have played a crucial role in enhancing the quality of our work.

Reviewer 3 Report (Previous Reviewer 2)

The manuscript has been well improved and is now suitable for publishing.

Author Response

We sincerely appreciate your time and effort in reviewing our manuscript. We are pleased to hear that you have found the improvements made to be satisfactory, and we are grateful for your positive assessment. Your feedback and suggestions have played a crucial role in enhancing the quality of our work.

This manuscript is a resubmission of an earlier submission. The following is a list of the peer review reports and author responses from that submission.


Round 1

Reviewer 1 Report

1. Why are two fully connected layers used for non-linear problems?

2. Why does the model in the article use cross-entropy loss as the loss function?

3. How are the number of symbols and the number of datasets determined in the dataset?

4. When comparing the complexity of BiGRU and BiLSTM in handling OFDM signals, were the same parameters and datasets used for the comparison?

5. How does the performance of a pilotless receiver based on deep learning affect the model used?

6. What are the advantages of using oversampled data in the pre-training model?

7. What steps and processes are eliminated by the proposed BiGRU model compared to traditional receivers?

8. Does the proposed solution in this study have the potential for practical applications?

There are many inappropriate sentences in the article, and the English level needs to be improved.

Author Response

Point 1: Why are two fully connected layers used for non-linear problems?

Response 1: In the BiLSTM/BiGRU models, our objective is to effectively capture the long-term dependencies in the input data and utilize this information to address non-linear problems, such as natural language processing tasks (e.g., sentiment analysis, language generation). Bidirectional LSTM allows the model to simultaneously consider both past and future information, enabling a better understanding of the contextual information in sequence data.

Fully connected layers are the simplest and most common layer type in neural networks. They connect each neuron in the layer to all neurons in the previous layer using weights and introduce non-linear transformations through activation functions. The non-linear nature of fully connected layers empowers neural networks with enhanced expressiveness when dealing with complex non-linear problems. However, the capacity of a single fully connected layer may be limited, especially when handling intricate problems or large-scale datasets.

In our study, we explore the problem of restoring the original waveform using the BiLSTM model, which is a sequence-to-sequence (seq2seq) task: our goal is to predict a target sequence similar to the original waveform from the input sequence data. The primary reasons for using two fully connected layers on top of the BiLSTM model are as follows:

Enhanced Model Expressiveness: In sequence prediction tasks, especially complex tasks like restoring the original waveform, a single fully connected layer may not provide sufficient expressiveness. By using two fully connected layers, we increase the model's capacity, enabling it to learn more complex features and patterns, thereby enhancing its performance in the task of waveform restoration.

Feature Extraction and Mapping: After the BiLSTM layer, the model has gained some sequence understanding, but it might still require more advanced features for better waveform restoration. Two fully connected layers can map the output features of the BiLSTM to a higher-dimensional representation space, making it easier for the model to extract crucial features from the data and make more effective decisions in subsequent layers.

Non-linear Transformation: The activation functions within fully connected layers introduce non-linear transformations, which are crucial for handling non-linear problems. Restoring the original waveform often involves complex non-linear relationships. By using two fully connected layers, we provide the model with more opportunities for non-linear transformations, enabling it to better adapt to the complexity of waveform restoration tasks.

Parameter Learning and Optimization: The parameters of the two fully connected layers can be learned and optimized during the training process through backpropagation. This allows the model to automatically adjust weights and biases to meet the requirements of the given task.

It's important to emphasize that selecting the model structure and the number of layers is a complex process that typically involves extensive experimentation and comparisons. In our research, after careful experimentation and evaluation, we determined that the use of two fully connected layers yields improved performance in the task of waveform restoration.
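For illustration, the sketch below shows a minimal PyTorch version of the architecture discussed above: a bidirectional recurrent layer followed by two fully connected layers with a non-linear activation between them. The layer sizes, the ReLU activation, and the 16-class per-step output are placeholders for illustration, not the hyperparameters used in the paper.

```python
import torch
import torch.nn as nn

class BiRNNReceiver(nn.Module):
    """Bidirectional RNN followed by two fully connected layers.

    Layer sizes and the 16-class output are illustrative placeholders,
    not the paper's actual hyperparameters.
    """
    def __init__(self, rnn_type="gru", input_size=1, hidden_size=64, num_classes=16):
        super().__init__()
        rnn_cls = nn.GRU if rnn_type == "gru" else nn.LSTM
        self.rnn = rnn_cls(input_size, hidden_size, batch_first=True, bidirectional=True)
        # First FC layer maps the BiRNN features to a higher-level
        # representation; the second maps them to the output space.
        self.fc1 = nn.Linear(2 * hidden_size, hidden_size)
        self.fc2 = nn.Linear(hidden_size, num_classes)
        self.act = nn.ReLU()  # non-linear transformation between the FC layers

    def forward(self, x):                 # x: (batch, seq_len, input_size)
        h, _ = self.rnn(x)                # h: (batch, seq_len, 2 * hidden_size)
        return self.fc2(self.act(self.fc1(h)))  # per-step outputs

model = BiRNNReceiver("gru")
waveform = torch.randn(8, 128, 1)         # dummy time-domain input
out = model(waveform)                      # shape: (8, 128, 16)
```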

 

Point 2: Why does the model in the article use cross-entropy loss as the loss function?

Response 2: The cross-entropy loss function is a commonly used loss function, particularly suitable for classification problems. In deep learning, especially in the fields of natural language processing and image recognition, classification is one of the most common tasks. The BiLSTM model is often employed for sequence modeling in these tasks, where the output typically represents probabilities for different classes. It's important to note that the choice of loss function often varies based on the nature of the task and dataset. For other types of tasks, such as regression, different loss functions like mean squared error (MSE) may be chosen. In each specific problem, a series of experiments and evaluations are conducted to select the most appropriate loss function to achieve optimal performance.

In our research, we explored the problem of restoring the original waveform using the BiLSTM model, which can be viewed as a sequence-to-sequence (seq2seq) task. Specifically, our goal is to transform the input sequence data into another sequence similar to the original waveform using the BiLSTM model. To accomplish this task, we need a suitable loss function to measure the difference between the model's output and the true original waveform.

The following are the main reasons for choosing the cross-entropy loss function:

Sequence Prediction Task: In sequence-to-sequence problems, we want the model to predict the target sequence based on the input sequence. The cross-entropy loss function is widely used in sequence prediction tasks, especially in fields like natural language processing and speech recognition.

Probability Distribution Matching: Using the cross-entropy loss function helps us measure the difference in probability distributions between the model's output and the true original waveform. This is particularly important for sequence prediction tasks, as we want the model to output sequences similar to the true data distribution.

Robustness to Outliers: The cross-entropy loss function is relatively robust to outliers. In the task of restoring the original waveform, the raw data may contain outliers or noise, and the cross-entropy loss function helps the model handle such cases more effectively.

Gradient Descent Optimization: Similar to the reasons mentioned earlier, the cross-entropy loss function is simple and efficient for gradient computation and optimization processes. This contributes to stable and fast training of the model.

By choosing the cross-entropy loss function, we aim to better address the seq2seq problem of waveform restoration using the BiLSTM model in our study.
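As a minimal sketch of how the cross-entropy loss is applied in such a per-step sequence prediction setting (the shapes and the 16-class symbol alphabet are assumed for illustration, not taken from the paper):

```python
import torch
import torch.nn as nn

# Hypothetical shapes: 8 sequences, 128 time steps, 16 candidate symbol
# classes per step (e.g., one index per 16-QAM constellation point).
logits = torch.randn(8, 128, 16, requires_grad=True)   # model output scores
targets = torch.randint(0, 16, (8, 128))               # true symbol indices

# nn.CrossEntropyLoss expects (N, C) scores and (N,) class indices,
# so flatten the batch and time axes together.
criterion = nn.CrossEntropyLoss()
loss = criterion(logits.reshape(-1, 16), targets.reshape(-1))
loss.backward()   # gradients for optimizer-based training
```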

 

Point 3: How are the number of symbols and the number of datasets determined in the dataset?

Response 3: In a DCO-OFDM system, the receiver needs to perform symbol timing and synchronization to correctly demodulate the received signal. Determining the number of symbols and the quantity of data sets involves several key steps:

Symbol Timing and Synchronization: At the receiver, symbol timing and synchronization are performed first to ensure accurate interpretation of received OFDM symbols. This is achieved by detecting the cyclic prefix (CP), a copy of the tail of each OFDM symbol prepended at the transmitter to mitigate inter-symbol interference caused by multipath propagation. By identifying the end point of the cyclic prefix, the receiver can precisely locate the start point of each OFDM symbol.

Symbol Interval: Once symbol timing and synchronization are conducted, the start and end points of each OFDM symbol can be obtained. The symbol interval refers to the distance between two consecutive OFDM symbols and determines the number of symbols.

Extraction of OFDM Symbols: Initially, each complete OFDM symbol is extracted from the received dataset. This step can be accomplished through time-domain or frequency-domain processing of the received signal. Typically, the determination of the start and end points of each OFDM symbol is based on the system design and receiver processing algorithms.

Subcarrier Allocation: In a DCO-OFDM system, multiple frequency-domain subcarriers are used to carry data. In this step, it's necessary to determine the number of subcarriers utilized for data transmission within each OFDM symbol. The number of subcarriers is often determined by the system design and modulation scheme.

Symbol Count Determination: After determining the number of subcarriers, it becomes possible to calculate the count of symbols carried within each OFDM symbol. The symbol count is typically directly related to the number of subcarriers and is often equal to it.

Data Set Quantity Determination: The quantity of data sets refers to the number of samples within the dataset, which corresponds to the count of received complete OFDM symbols. In experiments, the data set quantity can be determined based on the length of the received dataset or timestamps.

It's important to note that the determination of the number of symbols and data set quantity may be constrained by the design of the OFDM system and transmission requirements. During experimentation and research, detailed explanations of the employed dataset construction methods and parameter settings are usually provided to ensure the reliability and reproducibility of the experimental results presented in the paper.
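A minimal NumPy sketch of the slicing step described above, assuming illustrative values for the FFT size, CP length, and timing offset (the paper's actual parameters may differ):

```python
import numpy as np

def extract_ofdm_symbols(rx, n_fft=64, cp_len=16, start=0):
    """Slice a received sample stream into CP-stripped OFDM symbols.

    n_fft, cp_len, and start (the timing offset found via CP
    correlation) are illustrative values, not the paper's settings.
    """
    sym_len = n_fft + cp_len                       # symbol interval
    n_syms = (len(rx) - start) // sym_len          # dataset size in symbols
    frame = rx[start:start + n_syms * sym_len].reshape(n_syms, sym_len)
    return frame[:, cp_len:]                       # drop the cyclic prefix

rx = np.random.randn(10_000)                       # dummy received waveform
symbols = extract_ofdm_symbols(rx)
print(symbols.shape)                               # (125, 64)
```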

 

Point 4: When comparing the complexity of BiGRU and BiLSTM in handling OFDM signals, were the same parameters and datasets used for the comparison?

Response 4: In our study, to fairly compare the complexity of BiGRU and BiLSTM in handling OFDM signals, we endeavored to maintain their parameters and datasets as similar as possible. The specific approach is outlined as follows:

Parameter Settings: We aimed to use identical parameter settings as much as possible for both the BiGRU and BiLSTM models. This encompassed hyperparameters such as the size of hidden layers, learning rate, optimizer, batch size, and more. Ensuring consistency in parameter settings was undertaken to mitigate potential effects arising from parameter disparities when comparing the two models.

Network Structure: BiGRU and BiLSTM are both variations of recurrent neural networks (RNNs), thus sharing a similar foundational network structure. We maintained the same structural parameters, such as the number of layers and neurons per layer, across both models to ensure a comparable level of network complexity.

Dataset: When evaluating the performance of the two models, we employed an identical dataset. This choice ensures that both models were trained and tested under the same data conditions, thereby avoiding performance variations due to dataset differences.

Experimental Conditions: We made efforts to maintain consistency in other experimental conditions, such as hardware environment, operating system, programming framework, and more. These factors can potentially impact experimental outcomes, and upholding uniform experimental conditions enhances the reliability of the results.

It's important to note that while we strived to align parameters and datasets, inherent differences between the two models and their algorithmic characteristics may still result in some disparities. In the study, we compared the performance of the two models across different metrics and thoroughly elucidated any discrepancies observed, along with their underlying reasons.
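As a quick illustration of why matched settings still leave an inherent complexity gap between the two variants, the sketch below counts the parameters of a BiGRU and a BiLSTM built with identical (hypothetical) hyperparameters:

```python
import torch.nn as nn

def param_count(m):
    return sum(p.numel() for p in m.parameters())

# Identical hyperparameters for both recurrent variants (illustrative
# values, not the settings used in the paper).
kw = dict(input_size=1, hidden_size=64, batch_first=True, bidirectional=True)
bigru, bilstm = nn.GRU(**kw), nn.LSTM(**kw)

# A GRU cell has 3 gates versus the LSTM's 4, so with matched settings
# the BiGRU carries roughly 3/4 of the BiLSTM's recurrent parameters.
print("BiGRU parameters: ", param_count(bigru))
print("BiLSTM parameters:", param_count(bilstm))
```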

Point 5: How does the performance of a pilotless receiver based on deep learning affect the model used?

Response 5: In our research, we explore the use of deep learning-based pilotless receivers for wireless communication systems. These receivers rely on neural network models to directly estimate the transmitted symbols from received signals without the need for explicit pilot symbols. The choice of the neural network model can significantly impact the performance of the pilotless receiver. Here are some key factors that influence the performance:

Model Architecture: The architecture of the deep learning model plays a crucial role in determining its capacity to learn and generalize from the received signals. Different models such as CNNs, RNNs, or Transformer-based models may be used, and each has its strengths and weaknesses. The number of layers, the number of neurons in each layer, and the connectivity patterns are essential considerations when designing the model.

Model Complexity: The complexity of the chosen model can have a substantial effect on its performance. A model that is too simple may not have enough capacity to capture the complex relationships between the received signals and the transmitted symbols, leading to suboptimal performance. On the other hand, an overly complex model may suffer from overfitting and not generalize well to unseen data.

Training Data: The availability and quality of the training data can influence the performance of the pilotless receiver. An extensive and diverse dataset that covers various channel conditions and scenarios is crucial for training a robust and generalizable model. Insufficient or biased training data may result in a model that performs well only under specific conditions but fails to adapt to new situations.

Training Process: The hyperparameters used during the training process, such as learning rate, batch size, and optimization algorithm, can impact the convergence speed and final performance of the model. Careful tuning of these hyperparameters is necessary to ensure effective training.

Signal-to-Noise Ratio (SNR) and Channel Conditions: The performance of the pilotless receiver may vary under different SNR levels and channel conditions. The model needs to be evaluated across a range of SNR values and channel scenarios to assess its robustness and reliability.

Model Regularization: Techniques such as dropout, weight decay, and batch normalization can be employed to prevent overfitting and improve the generalization of the model.

To evaluate the performance of a pilotless receiver based on deep learning, we conduct thorough experiments and compare different model architectures, hyperparameters, and training strategies. We analyze the results in terms of bit error rate, symbol recovery accuracy, and other relevant metrics to understand the strengths and weaknesses of each model.
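For concreteness, a minimal PyTorch sketch of the regularization techniques listed above (dropout, batch normalization, and weight decay); all layer sizes and rates are illustrative placeholders, not the paper's settings:

```python
import torch
import torch.nn as nn

# Illustrative regularization choices; sizes and rates are placeholders.
model = nn.Sequential(
    nn.Linear(128, 64),
    nn.BatchNorm1d(64),   # batch normalization
    nn.ReLU(),
    nn.Dropout(p=0.2),    # dropout
    nn.Linear(64, 16),
)
# Weight decay (L2 regularization) applied through the optimizer.
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3, weight_decay=1e-4)
```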

Point 6: What are the advantages of using oversampled data in the pre-training model?

Response 6: In general, the received signal in VLC based on DCO-OFDM is oversampled. In the DCO-OFDM system, the electrical signal is converted into an optical signal through optical modulation and then transformed back into an electrical received signal by an optical detector. Due to the high modulation frequency of the optical signal, oversampling is typically employed to accurately reconstruct the original signal and recover the data. Oversampling refers to sampling the received signal at a sampling rate higher than the signal's minimum sampling rate (Nyquist sampling rate). In the context of DCO-OFDM systems, oversampling can enhance the temporal resolution of the received signal, enabling the system to better capture rapidly changing signals and complex channel characteristics.

In our study, the primary advantages of directly using the received data after OFDM sampling for model pretraining are as follows:

Enhanced Data Representation: Oversampling implies denser sampling of received data in the time domain, resulting in more sample points. This enhances the temporal resolution of the data, enabling the model to acquire a richer and more detailed temporal data representation. This is beneficial for the pretraining of deep learning models as they can learn more useful features from high-resolution data.

Capture of Rapid Changes: OFDM signals may exhibit rapid changes or transient phenomena in the time domain, such as time delay spread caused by multipath propagation. Oversampling better captures these rapid changes, enabling the model to effectively handle complex channel conditions.

Increased Data Samples: Oversampling increases the number of data samples, thus expanding the training dataset. A larger number of data samples contribute to improved generalization and robustness of the pretrained model.

Noise Mitigation: Oversampling can mitigate the impact of noise on received data to a certain extent. With denser sampling, noise may be distributed more uniformly across multiple sample points, reducing interference on the pretrained model.

It's important to note that the choice of oversampling is influenced by the characteristics of the communication system and the specific application. By leveraging the benefits of oversampling in our research, we aimed to enhance the performance and capabilities of the pretraining model for DCO-OFDM signal processing tasks.
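A minimal sketch of oversampling itself, using an assumed 4x factor (the actual factor depends on the system's converter rates):

```python
import numpy as np
from scipy.signal import resample_poly

# Illustrative 4x oversampling of a baseband frame; the factor is a
# placeholder, not the value used in the paper's setup.
x = np.random.randn(256)                  # dummy time-domain OFDM frame
x_over = resample_poly(x, up=4, down=1)   # denser sampling in time
print(len(x), "->", len(x_over))          # 256 -> 1024 samples
```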

 

Point 7: What steps and processes are eliminated by the proposed BiGRU model compared to traditional receivers?

Response 7: The proposed BiGRU model aims to serve as a direct decoder in the DCO-OFDM-VLC system. In comparison to traditional receivers, the proposed BiGRU model eliminates several steps and processes:

Equalization Process: In conventional DCO-OFDM-VLC receiver systems, equalization is a critical step to compensate for channel impairments and eliminate Inter-Symbol Interference (ISI). Traditional equalizers may involve complex algorithms and filters to achieve this. However, in the proposed BiGRU model, the BiGRU architecture itself acts as an equalizer, learning to map the received signal directly to transmitted data symbols without the need for explicit equalization algorithms.

Symbol Classification: Some traditional receiver designs rely on symbol classification techniques to identify different pre-labeled OFDM symbols. As the modulation order and the number of effective subcarriers increase, the computational complexity of this process grows. In contrast, the BiGRU model does not require pre-labeled symbols or explicit symbol classification. It directly maps the received signal to transmitted data symbols, simplifying the receiver's structure.

Preprocessing and Postprocessing: Traditional receivers may require preprocessing of the received signal, such as converting complex vectors obtained after Discrete Fourier Transform (DFT) into real vectors, followed by equalization and classification stages, or data label conversion for training neural networks. Postprocessing may also be necessary to convert modulation constellation points into fixed labels for training. The BiGRU model directly takes raw data as input and does not require any preprocessing or postprocessing steps, streamlining the entire reception process.

Pilot Removal: In traditional DCO-OFDM-VLC systems, pilots and cyclic prefixes are often used for channel estimation and synchronization purposes. These symbols occupy valuable spectrum resources. The BiGRU model, by directly decoding the received data, can effectively reduce the need for redundant pilots, thereby improving data transmission rates.

Complex Algorithm Implementation: Traditional receivers may require complex algorithms to implement equalization, synchronization, and symbol classification, which can be computationally intensive and challenging to optimize. As a deep neural network, the BiGRU model learns to perform these tasks through training, simplifying the implementation process.

By eliminating these steps and processes, the proposed BiGRU model offers a more streamlined and efficient approach to data recovery in the DCO-OFDM-VLC system, potentially enhancing the overall performance of visible light communication systems.

 

Point 8: Does the proposed solution in this study have the potential for practical applications?

Response 8: In this study, our goal was to explore the feasibility of using deep learning models in the field of DCO-OFDM-VLC, specifically the BiGRU model as a direct decoding deep neural network receiver. Our research findings indicate that the proposed solution achieved success in experimental demonstrations and showcased potential application prospects.

Specifically, our BiGRU model demonstrated adaptability to complex channels in experiments and effectively increased data transmission rates. It eliminates the need for traditional equalizers and symbol classifiers, thereby streamlining some steps and processes in traditional receivers. Furthermore, the solution directly utilizes raw data as input, requiring no preprocessing or postprocessing, making it easier to integrate into existing DCO-OFDM-VLC systems.

Enhanced Data Transmission Rate: The BiGRU model efficiently learns the statistical characteristics of visible light channels, resulting in improved data recovery efficiency. This reduction in redundant symbols (e.g., pilots and cyclic prefixes) occupying spectrum resources leads to higher data transmission rates. In practical applications, higher data transmission rates are crucial for bandwidth-intensive tasks such as supporting high-resolution video streaming, real-time data transfer, and remote control systems.

Compatibility with Existing Systems: The proposed solution can be directly deployed into traditional DCO-OFDM-VLC systems without additional preprocessing or postprocessing. This implies that it can be relatively easily integrated into existing visible light communication infrastructure, serving as a feasible upgrade solution to enhance performance without significantly altering the overall system architecture.

Learning Capabilities: Deep learning models like BiGRU have shown significant achievements in handling complex channels and improving communication system performance. They are gaining traction in various communication domains, including RF communication, underwater optical communication (UWOC), free-space optical communication (FSO), and more. The applicability of deep learning in VLC systems demonstrates its potential in practical deployments.

Experimental Validation: The study mentioned successful experimental demonstrations of the proposed receiver in the DCO-OFDM-VLC system. Experimental validation is a crucial step in verifying the feasibility and effectiveness of any proposed solution. The proposed receiver has been successfully validated through experiments, indicating its readiness for real-world applications.

While the experimental results showcase potential application prospects, we are acutely aware that practical applications may encounter challenges. In future research, we will delve deeper into exploring the stability and robustness of the solution under different environmental conditions and consider the hardware and system requirements necessary for practical deployment.

Overall, we firmly believe that the proposed BiGRU model, serving as a receiver in the DCO-OFDM-VLC system, holds potential for practical application and can contribute to the further development of visible light communication technology.

Reviewer 2 Report

This paper gives a substantial investigation of a direct time-domain waveform equalization scheme based on a bidirectional gated recurrent unit neural network for direct current-biased orthogonal frequency division multiplexing in indoor VLC. The considered topic is interesting and useful. The theoretical derivation results appear to be correct and believable. In general, the manuscript provides an interesting conclusion. Nevertheless, the reviewer holds some concerns about this work, which you can find below. I suggest that the authors revise and improve the manuscript accordingly.

 

A typical realistic application or use case of the proposed system model is important, but it is missing in the paper.

 

The contribution of this paper is not clear. Most of the concepts and designs introduced in the paper have been presented previously, some in the authors' own works. The authors should explain the main differences between this paper and previous works.

 

In my opinion, the channel model or channel state information of the VLC link is significant for the pilot signal. Many studies have investigated link performance over fading channels, but this is not reviewed in the Introduction section. The following references could point readers to previously undertaken research, e.g., doi: 10.1109/TVT.2023.3252822 and 10.1109/JLT.2013.2293137.

 

The motivations for this study should be strengthened.

 

There are many symbols in Eqs. (4)-(11), but they are not well rendered in the PDF file. More importantly, Eqs. (12)-(17) should be rewritten more concisely.

 

The conclusion section is too long, which hurts readability. I suggest moving the detailed discussions into the simulation sections.

 

The mathematical writing needs to be improved. Below is just an incomplete list of suggestions:

(a) It would be good to use Roman numerals for lists.

(b) It would be necessary to number each equation and not to start a sentence with a math symbol.

(c) It would be better to use capital letters for random variables and lowercase letters for realizations.

(d) It would be necessary to use parentheses or brackets of adequate size and to align equations at the equals signs.

(e) It would be necessary to use \mathrm for non-math content in equations.
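For concreteness, a minimal amsmath sketch of conventions (b), (d), and (e); the equations themselves are generic placeholders, not taken from the manuscript:

```latex
% Illustrative only: numbered equations aligned at the equals signs,
% with \mathrm for non-math content.
\begin{align}
  y(t) &= h(t) \ast x(t) + n(t), \label{eq:channel} \\
  \mathrm{SNR}_{\mathrm{dB}} &= 10 \log_{10}
    \left( \frac{P_{\mathrm{signal}}}{P_{\mathrm{noise}}} \right). \label{eq:snr}
\end{align}
```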

Minor editing of English language required.

Author Response

We sincerely appreciate the time and effort the reviewer has invested in evaluating our paper. We acknowledge the valuable insights provided and are committed to addressing each of the concerns raised. Below, we outline our planned revisions in response to your comments:

  1. Realistic Application/Use Case: We acknowledge the importance of presenting a realistic application or use case of our proposed system model. In the revised manuscript, we will incorporate a detailed example of a practical scenario where our bidirectional gated recurrent unit neural network-based waveform equalization scheme can be applied effectively, thereby enhancing the relevance and practicality of our work.

  2. Clarification of Contribution: We understand the need to clarify the unique contributions of our paper compared to prior works. In the revised manuscript, we will explicitly emphasize the novel aspects and improvements of our approach, distinguishing it from previously presented concepts and designs. This will provide readers with a clearer understanding of the advancements our paper offers.

  3. Channel Model and State Information: We appreciate your suggestion to include a discussion of the channel model and channel state information of the VLC link, particularly regarding its impact on signal fidelity. We will enhance the Introduction section by incorporating a concise review of relevant studies that have investigated link performance based on fading channels. The references you provided will be cited and discussed to ensure comprehensive coverage of previous research in this area.

  4. Strengthening Motivations: We acknowledge the need to reinforce the motivations behind our study. In the revised manuscript, we will expand on the practical significance and potential applications of direct time-domain waveform equalization in indoor VLC systems. By providing a more detailed rationale for our research, we aim to enhance the overall motivation of our study.

  5. Equation Representation and Mathematical Writing: We thank the reviewer for their detailed suggestions to improve the presentation of equations and mathematical writing. We will adhere to the provided recommendations, ensuring that equations are numbered consistently and appropriately, using proper symbols for random variables and realizations, employing parentheses or brackets adequately, and applying the mathrm command as necessary. These improvements will enhance the clarity and readability of our mathematical content.

  6. Conclusion Section and Readability: We acknowledge the reviewer's concern regarding the length of the conclusion section. In response, we will revise the conclusion to provide a concise summary of our findings and their implications. Detailed discussions will be appropriately relocated to the simulation sections, enhancing the overall flow and readability of the manuscript.

We would like to express our gratitude to the reviewer for their constructive feedback, which will undoubtedly contribute to the refinement and enhancement of our paper. We are committed to addressing all concerns and providing a more comprehensive and valuable contribution to the field. Your guidance is greatly appreciated.

Reviewer 3 Report

This paper proposes a time-domain equalization scheme based on a BiGRU neural network for DCO-OFDM in a VLC system, summarizes the digital signal processing algorithm for the high-speed communication system, and introduces the application results in detail. The paper is well organized and can be accepted for publication after minor revision.

 

1) It would be better if you could add some higher-order modulation format results, such as 64-QAM and 128-QAM.

2) In Fig. 1, in the transmitter-side DSP, the OFDM signals after the IFFT should have the CP added and then undergo parallel-to-serial (P/S) conversion; at the receiver side, the conversion after equalization should be P/S, not S/P. The same applies to Figure 4; please correct them.

3) In the last paragraph, "When the signal-to-noise ratio is in the range of 0 dBm to 20 dBm", it should be dB, not dBm. Please correct this.

No comments.

Author Response

Point 1: It would be better if you could add some higher-order modulation format results, such as 64-QAM and 128-QAM.

Response 1: Thank you for your valuable feedback. We appreciate your suggestion to include results for higher-order modulation formats, such as 64-QAM and 128-QAM, in our study. We acknowledge that the implementation of high-order OFDM modulation on commercially available LEDs for VLC presents significant challenges, and currently, there are no high-speed VLC systems using commercial LEDs for such modulation.

Our primary focus has been on investigating the reception capabilities of the proposed receiver for VLC signals. We recognize that conducting experiments with higher-order modulation formats would indeed provide a more comprehensive understanding of the receiver's performance. While we agree that our existing experiments may not be optimal, they do offer insights into the signal-processing capabilities of the proposed receiver to a certain extent.

However, we also understand that high-order and high-speed VLC experiments impose more stringent hardware requirements. Unfortunately, our current laboratory resources do not meet these requirements. Despite this limitation, we are committed to enhancing our research and experimental setup in the future.

In future work, we plan to extend our experiments to incorporate higher-order modulation formats and conduct a thorough analysis of the results. This will allow us to provide a more complete evaluation of the receiver's performance and its potential to handle advanced modulation schemes in VLC systems.

Thank you for your insightful feedback, and we will take your suggestion into consideration to improve the comprehensiveness of our study.

 

Point 2: In Fig. 1, in the transmitter-side DSP, the OFDM signals after the IFFT should have the CP added and then undergo parallel-to-serial (P/S) conversion; at the receiver side, the conversion after equalization should be P/S, not S/P. The same applies to Figure 4; please correct them.

Response 2: We appreciate the reviewer's observation. We have revised the figures according to the provided feedback. In Fig. 1, we have updated the transmitter-side DSP block to correctly depict the addition of the CP after the IFFT and the subsequent parallel-to-serial (P/S) conversion. In Fig. 4, we have also corrected the receiver-side OFDM signal path to reflect parallel-to-serial (P/S) conversion after equalization. These changes ensure the accurate representation of the signal processing steps in our system.

Point 3: In the last paragraph, "When the signal-to-noise ratio is in the range of 0 dBm to 20 dBm", it should be dB, not dBm. Please correct this.

Response 3: Thank you for bringing this to our attention. We have rectified the error, replacing "dBm" with the correct unit "dB" in the sentence referring to the signal-to-noise ratio range.
