Article
Peer-Review Record

High-Accuracy Power Quality Disturbance Classification Using the Adaptive ABC-PSO as Optimal Feature Selection Algorithm

Energies 2021, 14(5), 1238; https://doi.org/10.3390/en14051238
by Supanat Chamchuen 1,2, Apirat Siritaratiwat 1, Pradit Fuangfoo 2, Puripong Suthisopapan 1 and Pirat Khunkitti 1,*
Reviewer 1: Anonymous
Reviewer 2: Anonymous
Reviewer 3: Anonymous
Submission received: 26 January 2021 / Revised: 16 February 2021 / Accepted: 18 February 2021 / Published: 24 February 2021
(This article belongs to the Section F: Electrical Engineering)

Round 1

Reviewer 1 Report

In general, the manuscript has good content. But, there are some necessary questions to be answered and improvements to be made:

1) I see 25% similarity report for this manuscript. Try to write your own sentences to avoid this. 

2) Selective criticism of signal processing techniques in the literature. Why is there no comparison of WT with ST? The wavelet and Stockwell transforms seem to be equally popular in PQ research. Why did you go ahead with WT? What are the pros and cons of WT?

3) What are the benefits of ABC-PSO over ABC and PSO? What was your motivation behind using this algorithm? I see that the number of iterations needed by PSO is much smaller compared to ABC and ABC-PSO.

4) Authors should provide a flowchart or algorithm to describe the ABC-PSO optimization.

5) It is absolutely important that you discuss and describe the PNN classifier in the paper (as per Table 9, you are using PNN). The paper is about classification and there is no discussion of the classifier. Why did you choose PNN among all classifiers?

6) Is accuracy everything in classification problems, what about suitability for real-time applications?

7) With machine learning techniques being not so transparent in their computations, is it possible to believe that the classifier will always give high accuracy? - Give a justification.

 

Author Response

Cover letter for revision

 

Manuscript Information

 

Journal: Energies

Manuscript ID: energies-1106647

Title: High-accuracy power quality disturbance classification using the adaptive ABC-PSO as optimal feature selection algorithm

Authors: Supanat Chamchuen, Apirat Siritaratiwat, Pradit Fuangfoo, Puripong Suthisopapan and Pirat Khunkitti

 

 

Dear Editor/Reviewers,

Thank you for allowing us to revise the manuscript and for the opportunity to address the reviewers' comments. We would also like to thank the reviewers for their valuable comments, which have improved our work. In this letter, our point-by-point responses to the reviewers' comments are given below. A revised manuscript with the "Track Changes" function indicating the changes is submitted as the main document.

           

 

Reviewer 1's comment

In general, the manuscript has good content. But, there are some necessary questions to be answered and improvements to be made:

1) I see 25% similarity report for this manuscript. Try to write your own sentences to avoid this.

Authors' response:

            We would like to thank the reviewer for this suggestion. We have revised some sentences to minimize the similarity, as shown in lines 35, 59, 61, 67, 70, 72, 132, 160, 203, and 216.

            We would also like to confirm that we have paid attention to this point and have tried our best to write in our own words. From our similarity check, we found that most of the similarities arise accidentally from the use of common terms, for example, "feature extraction" and "feature selection", the names of algorithms, mathematical equations and standard parameter constraints, the definitions of parameters in well-known equations, feature vector components, and the use of statistical parameters and their abbreviations.

 

 

2) Selective criticism of signal processing techniques in the literature. Why is there no comparison of WT with ST? The wavelet and Stockwell transforms seem to be equally popular in PQ research. Why did you go ahead with WT? What are the pros and cons of WT?

Authors' response:

            We would like to thank the reviewer for this comment. We have added the details (pros and cons) of WT and ST to section 3.1 (1st paragraph). A comparison of WT with ST has then been added to clarify why WT was selected in this work, as shown in line 135.

            We would also like to clarify here in more detail the reason for using WT. The reviewer is right that ST and WT are equally popular in PQ research. However, looking more closely, the main disadvantage of ST is the larger quantity of information it produces during processing (and hence the larger storage required). In this work, since a large number of signal types is considered, up to 13 (the literature average is only 9-10), the storage required by ST would grow rapidly. Therefore, WT is more suitable than ST in this particular case. In addition, the smaller amount of information produced by WT further yields a shorter processing time, which matches part of our hypothesis for improving the algorithm performance.
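To make the use of WT concrete for the reader, a minimal, illustrative sketch of wavelet-based feature extraction in Python is given below. The PyWavelets library, the db4 mother wavelet, the placeholder 50 Hz test signal, and the per-band energy/entropy features are assumptions made for this sketch and are not taken from the paper.

```python
# Minimal sketch of WT-based feature extraction (illustrative only).
# Assumptions: PyWavelets, 'db4' mother wavelet, 8 decomposition levels,
# and simple per-band energy/entropy features.
import numpy as np
import pywt

fs = 10_000                                    # sampling frequency (Hz)
t = np.arange(0, 0.2, 1 / fs)
signal = np.sin(2 * np.pi * 50 * t)            # placeholder PQ waveform

coeffs = pywt.wavedec(signal, 'db4', level=8)  # [A8, D8, D7, ..., D1]

features = []
for band in coeffs:
    energy = np.sum(band ** 2)                 # band energy
    p = band ** 2 / (energy + 1e-12)           # normalized coefficient power
    entropy = -np.sum(p * np.log2(p + 1e-12))  # Shannon entropy of the band
    features.extend([energy, entropy])

print(len(features), "features extracted")     # 2 features per band
```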

 

 

3) What are the benefits of ABC-PSO over ABC and PSO? What was your motivation behind using this algorithm? I see that the number of iterations needed by PSO is much smaller compared to ABC and ABC-PSO.

Authors' response:

            Firstly, we would like to clarify that the motivation for using ABC-PSO was already mentioned in sections 5 (line 248) and 5.3 (line 305). The idea is to use the distinctive property of the PSO algorithm to compensate for the weakness of the ABC algorithm, so the expected benefit of ABC-PSO is that it provides a high-quality solution with a high convergence rate. The results are well consistent with this hypothesis.

            Regarding the reviewer's question about the required iterations, the reviewer is right that PSO requires fewer iterations than the others to reach its best solution. However, as shown in Fig. 5, PSO provides the worst accuracy. In general, accuracy has the highest priority in the field of PQD classification, which is why we focus on accuracy rather than other factors. Therefore, Fig. 2 clearly demonstrates that the proposed ABC-PSO gives the best performance in terms of accuracy.

            However, it was our mistake not to state the convergence and iteration criteria, which might have misled the reviewer. We have therefore added these criteria, together with a discussion of them, in section 6.4.
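To illustrate the idea of compensating ABC's slow convergence with a PSO-style pull toward the global best, a generic toy sketch is given below. It minimizes a simple sphere function and is not the paper's adaptive ABC-PSO feature selection procedure; the population size, inertia weight, and acceleration weight are invented for the example.

```python
# Generic, illustrative ABC-PSO hybrid on a toy objective (sphere function).
# This is NOT the paper's exact adaptive ABC-PSO; it only sketches how a
# PSO-style pull toward the global best can speed up ABC's local search.
import numpy as np

rng = np.random.default_rng(0)

def objective(x):
    return np.sum(x ** 2)            # toy fitness to minimize

n_food, dim, iters = 20, 10, 200     # assumed population size / dimensions
w, c = 0.5, 1.5                      # assumed inertia / acceleration weights
foods = rng.uniform(-5, 5, (n_food, dim))
vel = np.zeros_like(foods)
fit = np.apply_along_axis(objective, 1, foods)
gbest = foods[fit.argmin()].copy()

for _ in range(iters):
    for i in range(n_food):
        # ABC employed-bee step: perturb toward a random neighbour
        k = rng.integers(n_food)
        phi = rng.uniform(-1, 1, dim)
        candidate = foods[i] + phi * (foods[i] - foods[k])
        # PSO-style step: pull the candidate toward the global best
        vel[i] = w * vel[i] + c * rng.random(dim) * (gbest - candidate)
        candidate = candidate + vel[i]
        if objective(candidate) < fit[i]:        # greedy selection as in ABC
            foods[i], fit[i] = candidate, objective(candidate)
    gbest = foods[fit.argmin()].copy()

print("best fitness found:", fit.min())
```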

 

 

4) Authors should provide a flowchart or algorithm to describe the ABC-PSO optimization.

Authors' response:

            We would like to thank the reviewer for this suggestion. We have added pseudocode describing ABC-PSO as Fig. 4, and a flowchart describing the overall procedure as Fig. 1.

 

 

5) It is absolutely important that you discuss and describe the PNN classifier in the paper (as per Table 9, you are using PNN). The paper is about classification and there is no discussion of the classifier. Why did you choose PNN among all classifiers?

Authors' response:

             We would like to thank the reviewer for this remark. We have added an explanation of the PNN, as well as the reason why we chose it as the classifier, at line 209. A discussion of the classifier has also been added in section 6.2 (line 386).

We would also like to answer the reviewer's question thoroughly here. The PNN was selected in this work because of its outstanding advantages, which are:

1) It is a linear learning algorithm capable of reaching the results of nonlinear learning algorithms while maintaining high accuracy.

2) Its simple implementation is widely known as a distinctive merit, since the number of hidden-layer neurons and the network weights are defined automatically by the network itself through the spread constant (a minimal sketch of this mechanism is given below). It is therefore suitable for our classification environment, in which a large number of signal types is considered.

3) PNN is a highly capable tool for solving several types of classification problems, as shown in the literature listed below.

Therefore, we think that the PNN is appropriate in our system environment.
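As a companion to point 2 above, the following is a minimal sketch of the PNN decision rule with a Gaussian (Parzen-window) pattern layer; the spread constant is the only tunable quantity. The two-class synthetic data and the sigma value are placeholders, not the paper's settings.

```python
# Minimal sketch of a probabilistic neural network (PNN) decision rule,
# assuming a Gaussian (Parzen-window) kernel. The spread constant sigma is
# the only free parameter: no hidden-layer sizes or weights are trained.
# The two-class synthetic data below are placeholders.
import numpy as np

def pnn_predict(X_train, y_train, X_test, sigma=0.1):
    """Return, for each test vector, the class with the largest mean kernel response."""
    classes = np.unique(y_train)
    preds = []
    for x in X_test:
        d2 = np.sum((X_train - x) ** 2, axis=1)              # squared distances
        k = np.exp(-d2 / (2.0 * sigma ** 2))                 # pattern-layer outputs
        scores = [k[y_train == c].mean() for c in classes]   # summation layer
        preds.append(classes[int(np.argmax(scores))])        # decision layer
    return np.array(preds)

rng = np.random.default_rng(1)
X_tr = np.vstack([rng.normal(0, 0.3, (50, 2)), rng.normal(2, 0.3, (50, 2))])
y_tr = np.array([0] * 50 + [1] * 50)
X_te = np.array([[0.1, 0.0], [2.1, 1.9]])
print(pnn_predict(X_tr, y_tr, X_te, sigma=0.2))              # expected: [0 1]
```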

 

 

6) Is accuracy everything in classification problems, what about suitability for real-time applications?

Authors' response:

            In general, accuracy has the highest priority in PQD classification problems. As the reviewer may have seen, the works in the PQD literature focus on and conclude with a summary of accuracy; references are shown in Table 9. This is because, in practice, the whole process presented in a research paper is not deployed in a real-time situation; instead, only the optimally selected features passed through the classifier are generally carried over. The other parameters, i.e. convergence rate, noise resistance, computational time, required storage, etc., therefore have a smaller impact on real-time applications and are given lower priority (although they remain important). Accordingly, in classification research where many researchers attempt to improve algorithm performance, the benefit lies mostly in the design and analysis phase, so we can say that accuracy has the highest priority for real-time applications as well. Our proposed algorithm therefore clearly benefits both real-time applications and the design/analysis phase.

 

 

7) With machine learning techniques being not so transparent in their computations, is it possible to believe that the classifier will always give high accuracy? - Give a justification.

Authors' response:

            To answer this comment, we would like to clarify the correctness of our machine learning classification system by considering the feature extraction, the classifier, and the feature selection separately.

            For the feature extraction (WT) and the classifier (PNN), it may have been our mistake not to describe these machine learning techniques transparently enough. The reason is that our paper combines very well-known techniques (WT and PNN) whose algorithms can be found in several reliable sources, so we did not explain them in depth. However, as the reviewer may have seen, references are always given wherever these techniques are mentioned. In addition, the ample literature listed below shows that these techniques (WT, PNN) are very popular and have been widely used in classification research [references below]. Their high performance in classification problems has therefore already been demonstrated, and this is part of the reason why we selected them.

Therefore, in this work, the most important thing we have to prove rigorously is the correctness of the mechanism of the proposed adaptive ABC-PSO feature selection algorithm. In our calculations, we always verify that the solutions found are correct and are truly steady-state solutions, so we make sure that the number of tests and iterations covers all possible solutions, as is the general procedure in this field.

Following this concern, we have improved the paper by adding the justification explained above at line 411 to improve the presentation of the feature selection. Additional information and discussion on the feature extraction (WT) and the classifier (PNN), as requested by the reviewer's comments 2 and 5, respectively, have also been added.

 

Examples of references that use WT and PNN in classification research:

For WT

  1. Zhengyou, H.; Shibin, G.; Xiaoqin, C.; Jun, Z.; Zhiqian, B.; Qingquan, Q. Study of a new method for power system transients classification based on wavelet entropy and neural network. Int. J. Electr. Power Energy Syst. 2011, 33, 402–410.
  2. Dehghani, H.; Vahidi, B.; Naghizadeh, R.A.; Hosseinian, S.H. Power quality disturbance classification using a statistical and wavelet-based Hidden Markov Model with Dempster-Shafer algorithm. Int. J. Electr. Power Energy Syst. 2013, 47, 368–377.
  3. De Yong, D.; Bhowmik, S.; Magnago, F. An effective power quality classifier using wavelet transform and support vector machines. Expert Syst. Appl. 2015, 42, 6075–6081.
  4. Lu, S.-D.; Sian, H.-W.; Wang, M.-H.; Liao, R.-M. Application of Extension Neural Network with Discrete Wavelet Transform and Parseval’s Theorem for Power Quality Analysis. Appl. Sci. 2019.
  5. Aker, E.; Othman, M.L.; Veerasamy, V.; Aris, I. bin; Wahab, N.I.A.; Hizam, H. Fault Detection and Classification of Shunt Compensated Transmission Line Using Discrete Wavelet Transform and Naive Bayes Classifier. Energies 2020, 13, 243.
  6. Parvez, I.; Aghili, M.; Sarwat, A.I.; Rahman, S.; Alam, F. Online power quality disturbance detection by support vector machine in smart meter. J. Mod. Power Syst. Clean Energy 2019, 7, 1328–1339.
  7. Wang, J.; Xu, Z.; Che, Y. Power Quality Disturbance Classification Based on DWT and Multilayer Perceptron Extreme Learning Machine. Appl. Sci. 2019, 9, 2315.
  8. Rupal Singh, H.; Mohanty, S.R.; Kishor, N.; Ankit Thakur, K. Real-time implementation of signal processing techniques for disturbances detection. IEEE Trans. Ind. Electron. 2019, 66, 3550–3560.

 

For PNN

  1. Mohanty, S.R.; Ray, P.K.; Kishor, N.; Panigrahi, B.K. Classification of disturbances in hybrid DG system using modular PNN and SVM. Int. J. Electr. Power Energy Syst. 2013, 44, 764–777.
  2. Huang, N.; Xu, D.; Liu, X.; Lin, L. Power quality disturbances classification based on S-transform and probabilistic neural network. Neurocomputing 2012, 98, 12–23.
  3. Mishra, S.; Bhende, C.N.; Panigrahi, B.K. Detection and classification of power quality disturbances using S-transform and probabilistic neural network. IEEE Trans. Power Deliv. 2008, 23, 280–287.
  4. Abidin, A.F.; Mohamed, A.; Shareef, H. Intelligent detection of unstable power swing for correct distance relay operation using S-transform and neural networks. Expert Syst. Appl. 2011, 38, 14969–14975.
  5. Sharma, R.; Pachori, R.B.; Rajendra Acharya, U. An integrated index for the identification of focal electroencephalogram signals using discrete wavelet transform and entropy measures. Entropy 2015, 17, 5218–5240.
  6. Beritelli, F.; Capizzi, G.; Lo Sciuto, G.; Napoli, C.; Scaglione, F. Rainfall Estimation Based on the Intensity of the Received Signal in a LTE/4G Mobile Terminal by Using a Probabilistic Neural Network. IEEE Access 2018, 6, 30865–30873.
  7. Woźniak, M.; Połap, D.; Capizzi, G.; Sciuto, G. Lo; Kośmider, L.; Frankiewicz, K. Small lung nodules detection based on local variance analysis and probabilistic neural network. Comput. Methods Programs Biomed. 2018, 161, 173–180.
  8. Gutiérrez-Gnecchi, J.A.; Morfin-Magaña, R.; Lorias-Espinoza, D.; Tellez-Anguiano, A. del C.; Reyes-Archundia, E.; Méndez-Patiño, A.; Castañeda-Miranda, R. DSP-based arrhythmia classification using wavelet transform and probabilistic neural network. Biomed. Signal Process. Control 2017, 32, 44–56.
  9. Raman, M.R.G.; Somu, N.; Kirthivasan, K.; Sriram, V.S.S. A Hypergraph and Arithmetic Residue-based Probabilistic Neural Network for classification in Intrusion Detection Systems. Neural Networks 2017, 92, 89–97.
  10. Varuna Shree, N.; Kumar, T.N.R. Identification and classification of brain tumor MRI images with feature extraction using DWT and probabilistic neural network. Brain Informatics 2018, 5, 23–30.

 

Sincerely,

Authors

 

 

Author Response File: Author Response.pdf

Reviewer 2 Report

This paper investigates a very important topic of classifying power quality disturbance under different conditions using ABC-PSO. The topic is of high importance to the scientific community. Although the current paper presents some advancement in the area, the following changes should be made before publishing it in Energies.

  1. How is the gap in current knowledge being filled/advanced by this work? Any limitations of the current work?
  2. The order of feature extraction, selection and classification is confusing. It needs to be better explained and presented.
  3. In Table 9, some of the previously published works, such as Refs 13 and 52, have better accuracy than the proposed algorithm. Any justification for that?
  4. Pages 4-5: the text does not replicate Figure 1. For example, A1 and A2, discussed in the text, are missing from the figure.
  5. What is the justification for 8-level decomposition? Why not 10 or any other?
  6. The iteration and convergence criteria are not mentioned in the paper.
  7. The noisy signals should be compared with other feature selection techniques.
  8. The convergence graphs should be compared with other algorithms instead of ABC and PSO, because the proposed algorithm is a combination of these two.
  9. The flowchart and pseudo code for the proposed algorithm should be included.

 

Author Response

Cover letter for revision

 

Manuscript Information

 

Journal: Energies

Manuscript ID: energies-1106647

Title: High-accuracy power quality disturbance classification using the adaptive ABC-PSO as optimal feature selection algorithm

Authors: Supanat Chamchuen, Apirat Siritaratiwat, Pradit Fuangfoo, Puripong Suthisopapan and Pirat Khunkitti

 

 

Dear Editor/Reviewers,

Thank you for allowing us to revise the manuscript and for the opportunity to address the reviewers' comments. We would also like to thank the reviewers for their valuable comments, which have improved our work. In this letter, our point-by-point responses to the reviewers' comments are given below. A revised manuscript with the "Track Changes" function indicating the changes is submitted as the main document.

           

 

 

Reviewer 2's comment

This paper investigates a very important topic of classifying power quality disturbance under different conditions using ABC-PSO. The topic is of high importance to the scientific community. Although the current paper presents some advancement in the area, the following changes should be made before publishing it in Energies.

 

  1. How is the gap in current knowledge being filled/advanced by this work? Any limitations of the current work?

Authors' response:

We would like to thank the reviewer for this helpful comment. Firstly, we would like to point out that the main goal of PQD classification research, in general, is to improve the performance of the classification system. The system performance can be improved through many indicators, i.e. accuracy, convergence rate, computational time, noise resistance, required storage, and more, depending on the proposed methodology and the situation of interest. Accordingly, each indicator can be considered a research gap, and the limitations of a work can then be judged against that gap. However, in this research field we have to keep in mind that accuracy has the highest priority, which is why we focus on accuracy improvement in this work.

Therefore, regarding the limitations of the current work: in this case, the accuracy is not yet perfect and can still be improved. The research gap is that the accuracy of the existing PQD classification research (as shown in Table 9) can still be increased. Based on the results, the knowledge contributed by this work is a further high-performance system for this research field that can reliably detect disturbances in power systems.

These points are already mentioned in the last paragraph of the introduction (limitations of the current work) and in the results/conclusion (knowledge contributed by this work).

 

 

  2. The order of feature extraction, selection and classification is confusing. It needs to be better explained and presented.

Authors' response:

We would like to thank the reviewer for this suggestion. We have added a flowchart as Fig. 1 to improve the presentation of our work. Additional explanations of the order of feature extraction, selection, and classification have also been added at the beginning of section 3 (line 122), section 4 (line 204), and section 5 (line 242).

 

 

  3. In Table 9, some of the previously published works, such as Refs 13 and 52, have better accuracy than the proposed algorithm. Any justification for that?

Authors' response:

This is a very good remark. Compared with Ref 13 (updated to 14), our proposed system can classify more types of PQD signals. Although Ref 13 demonstrates higher accuracy, that result is specific to nine signal types, so its accuracy for 13 PQD signal types cannot be inferred and might be worse. Ref 52 (updated to 57) uses GA for feature selection, so the computational time of that work is much longer than ours because of the well-known weakness of GA (very low convergence rate). We can state this because the computation time of PQD classification is generally dominated by the feature selection process (more than 90% of the whole process); accordingly, a classification system using GA as the feature selection algorithm can be very slow. Therefore, although the accuracy of our proposed algorithm is slightly lower than that of Ref 52, our convergence rate is much better.

We have added the above justification to the paper at line 452.

 

 

  4. Pages 4-5: the text does not replicate Figure 1. For example, A1 and A2, discussed in the text, are missing from the figure.

Authors' response:

We would like to thank the reviewer for this suggestion. We have revised that figure.

 

 

  5. What is the justification for 8-level decomposition? Why not 10 or any other?

Authors' response:

The suitable decomposition level is determined by the criterion that the frequency ranges of the decomposed levels must be consistent with all frequency components present in the signals, so that the signal characteristics can be evaluated. Also, when a signal contains many frequency components (for example, the fundamental frequency together with sag, swell, and harmonics at different frequencies), we have to make sure that the frequency ranges of the decomposed levels cover all of them.

To explain this to the reviewer more clearly, please see the figure below, in which the characteristics of power quality signals (10 kHz sampling frequency) are evaluated using eight decomposition levels. Since disturbances in power systems mostly occur as harmonics, transients, and high-frequency transients, the reviewer can see that the frequency range of each decomposed level matches the frequency components of the disturbances, so that the characteristics of the disturbances can be evaluated at each level.

In this work, since we focus on power quality signals whose disturbances occupy the same frequency components as in the figure below, we confirm that eight decomposition levels are suitable for our classification system.

However, the statement in the paper that the "decomposition level was set to be eight levels since this value is completely sufficient for characterizing the sampling frequency of power quality signals" might be insufficient on its own, so we have improved the explanation and added the reference for the figure below to the paper at line 173.

 

Source: Erişti, Hüseyin, and Yakup Demir. "A new algorithm for automatic classification of power quality events based on wavelet transform and SVM." Expert systems with applications 37.6 (2010): 4094-4102.
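As a small numeric illustration of the dyadic frequency bands discussed above (a back-of-the-envelope check only, not a reproduction of the cited figure), the detail-band edges of an eight-level DWT for a 10 kHz-sampled signal can be computed as follows.

```python
# Illustrative calculation of the detail-band edges for an 8-level DWT of a
# signal sampled at 10 kHz (the sampling rate mentioned above). This is only a
# quick check of the dyadic bands, not the authors' figure.
fs = 10_000.0                       # sampling frequency in Hz
for level in range(1, 9):
    hi = fs / 2 ** level            # upper edge of detail band D_level
    lo = fs / 2 ** (level + 1)      # lower edge of detail band D_level
    print(f"D{level}: {lo:7.1f} - {hi:7.1f} Hz")
print(f"A8 (approximation): 0 - {fs / 2 ** 9:.1f} Hz")
```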

 

 

  6. The iteration and convergence criteria are not mentioned in the paper.

Authors' response:

            We have added the criteria for judging the iteration and convergence at line 412.

 

 

  7. The noisy signals should be compared with other feature selection techniques.

Authors' response:

We have added the results for noisy signals compared with ABC and PSO, as shown in Table 7. An additional discussion of these results has also been added.

 

 

 

  8. The convergence graphs should be compared with other algorithms instead of ABC and PSO, because the proposed algorithm is a combination of these two.

Authors' response:

For this comment, we would like to respectfully disagree with the reviewer. Since we aimed to improve the performance of PQD classification using our proposed "ABC-PSO" algorithm, our hypothesis was to use the outstanding merit of PSO (fast convergence) to compensate for the weakness of ABC (slow convergence). In the results, we therefore have to verify whether this hypothesis holds, so a graph showing how the convergence rate of ABC-PSO improves on ABC, and how it compares with PSO, is necessary. For this reason, we confirm that the comparison of convergence graphs shown in the paper is necessary and correct.

In addition, it is very difficult to follow the reviewer's request because the simulations in different papers are generally run in different environments (computer speed, CPU, RAM). We would need to rebuild each complete classification system (feature extraction + selection + classifier) to make such a comparison, because the convergence information cannot simply be adopted from the literature. Therefore, we include only the necessary algorithms in the comparison.

 

 

  9. The flowchart and pseudo code for the proposed algorithm should be included.

Authors' response:

We agree with the reviewer. We have added the flowchart (Fig. 1) and pseudocode (Fig. 4) to demonstrate the overall procedure.

 

 

 

Sincerely,

Authors

 

 

Author Response File: Author Response.pdf

Reviewer 3 Report

The paper discusses the classification of Power Quality events comparing some existing methods with a new algorithm, based on ABC-PSO.

1) Line 34. IEC 61000 is not a standard, but a huge group of more than 100 different standards. Only a few are related to Power Quality. Please, specify which ones you have in mind.

2) Sec. 3. This section contains the description of three methods without a clue to the Reader of their use, how they are organized, etc. So sec. 3 should start with a presentation of the methods, their use and organization of the section itself.

3) Sec. 4. Similarly, the neural network described here in sec. 4 and connected to sec. 3 by the first lines (182-183) should be introduced.

4) Sec. 5. The relationship between secs. 3 and 4 and this sec. 5 is not clear.
The ABC algorithm is fed with features, of which the best ones will be selected. It is not clear, however, what these features are, which algorithm identifies them in the raw signals, etc.

5) Line 270. Please clarify what the weakness of the ABC algorithm is, making reference to the previous section where the ABC is described.

6) Sec. 6 - Results. Since you compare many different algorithms using "accuracy" as the key parameter (commented on elsewhere), you should also consider an assessment of algorithm complexity (i.e. calculation effort and memory storage) and possible sensitivity to some parameters, e.g. the starting point.

7) Line 308 and 351. The algorithm was tested only with synthetic waveforms and then by adding some amount of noise, also synthetic.
You should also use measured waveforms, that are available e.g. in data sets (you can make a search of "dataset" "voltage waveform" "power quality event").

8) Table 5. "Spread constant" appears here for the first time without being introduced and explained. In general, details of the implementation, explanation of parameters, and operation of the ABC-PSO algorithm are not explained. Please provide details and explanations in the manuscript.

9) Sec. 6.1, Line 332, Table 6, Table 7, Table 9. You refer to accuracy and classification accuracy.
You should define it first. Since it is a classification task, the answer of an algorithm is either right or wrong.
i) You instead speak of an accuracy as a function of iteration numbers, and it is not clear if you derive an answer also when the algorithm has not finished and converged yet.
ii) In addition, since an answer is a discrete number and you show accuracy with two decimal places, it is possible to achieve such a numeric representation only with a minimum of 100 tests. Please clarify how many tests you have done for each specific case to derive the accuracy value.
iii) In general, clarify how this classification accuracy is defined.
iv) Doing different runs (or tests) with the same specific case (so, same type of PQD, same algorithm, same algorithm settings), you need other parameters to change between runs: the PQD parameters cannot be changed because it would change the type of waveform; noise is only introduced later and it was commented on for its representativeness. So, it is really necessary that you clarify how you performed the tests and how you rated the algorithms' answers, and then calculate the classification accuracy.

10) Line 352. Regarding the "magnitude" of noise, you should clarify what this magnitude is. It should be a signal-to-noise ratio. And then you should clarify if the ratio is of amplitude or, better, of power.

11) Figure 2. You show on the Y axis an "accuracy" label.
It is not clear if it is an error or an accuracy, if it is a single value or the result of some statistical operation, etc. In other words, it was not defined and needs to be defined, so that all these figures and tables can be understood.

12) Table 9. Please, clarify if the results shown in Table 9 for the other algorithms have been recalculated by you or are derived from the literature.

13) Line 381. Again on "accuracy". You say that the accuracy is 99.31% for identifying all 13 types of events, that is 13 out of 13, so 100%. This is related to the number of significant digits already commented and the lack of definition of this "accuracy" index of performance.

14) References. Standards are not really online resources, because they are not freely accessible, and they are sold by many sources (not only globalspec). Standards should be treated as publications, with a code, a title and a date of publication. The date on which you accessed them is irrelevant.

Author Response

Cover letter for revision

 

Manuscript Information

 

Journal: Energies

Manuscript ID: energies-1106647

Title: High-accuracy power quality disturbance classification using the adaptive ABC-PSO as optimal feature selection algorithm

Authors: Supanat Chamchuen, Apirat Siritaratiwat, Pradit Fuangfoo, Puripong Suthisopapan and Pirat Khunkitti

 

 

Dear Editor/Reviewers,

Thank you for allowing us to revise the manuscript and for the opportunity to address the reviewers' comments. We would also like to thank the reviewers for their valuable comments, which have improved our work. In this letter, our point-by-point responses to the reviewers' comments are given below. A revised manuscript with the "Track Changes" function indicating the changes is submitted as the main document.

           

 

Reviewer 3's comment

The paper discusses the classification of Power Quality events comparing some existing methods with a new algorithm, based on ABC-PSO.

 

1) Line 34. IEC 61000 is not a standard, but a huge group of more than 100 different standards. Only a few are related to Power Quality. Please, specify which ones you have in mind.

Authors' response:

We would like to thank the reviewer for this helpful remark. We have deleted IEC 61000 from our paper. We would also like to clarify that the power quality signals used are based on the IEEE 1159-2019 standard, as already mentioned at the beginning of section 2 (line 114).

 

 

2) Sec. 3. This section contains the description of three methods without a clue to the Reader of their use, how they are organized, etc. So sec. 3 should start with a presentation of the methods, their use and organization of the section itself.

Authors' response:

We would like to thank the reviewer for this suggestion. We have added an explanation describing the overall process of section 3 at the beginning of section 3 (line 122). A flowchart has also been added to improve the presentation, as shown in Fig. 1. The organization of the section is now mentioned.

 

 

3) Sec. 4. Similarly, the neural network described here in sec. 4 and connected to sec. 3 by the first lines (182-183) should be introduced.

Authors' response:

We have added an explanation describing how the neural network is connected to section 3, at the beginning of section 4 (line 204). A flowchart has also been added to improve the presentation, as shown in Fig. 1.

 

 

4) Sec. 5. The relationship between secs. 3 and 4 and this sec. 5 is not clear.

The ABC algorithm is fed with features, of which the best ones will be selected. It is not clear, however, what these features are, which algorithm identifies them in the raw signals, etc.

Authors' response:

We have added an explanation describing the relationship between sections 3, 4, and 5 at the beginning of section 5 (line 242). A flowchart has also been added to improve the presentation, as shown in Fig. 1.

 

 

5) Line 270. Please clarify what the weakness of the ABC algorithm is, making reference to the previous section where the ABC is described.

Authors' response:

We have clarified the weakness of the ABC algorithm at line 270 (updated to line 306) and linked it to the previous section where ABC is described.

 

 

6) Sec. 6 - Results. Since you compare many different algorithms using "accuracy" as the key parameter (commented on elsewhere), you should also consider an assessment of algorithm complexity (i.e. calculation effort and memory storage) and possible sensitivity to some parameters, e.g. the starting point.

Authors' response:

For this comment, we would like to explain that, in general, accuracy has the highest priority in the field of PQD classification. As the reviewer may have seen, the works in the PQD literature focus on and conclude with a summary of accuracy; references are shown in Table 10. This is because, in practice, the whole process presented in a research paper is not deployed; instead, only the optimally selected features are generally carried over to practical use of the classifier. The other parameters, i.e. convergence rate, noise resistance, computational time, required storage, etc., therefore have a smaller impact on real-time applications and are given lower priority. Since we consider the highest-priority parameter (accuracy) as well as the convergence property, we are confident that these parameters are sufficient for PQD classification design, in line with the literature in this research field.

In addition, it is very difficult to follow the reviewer's request because the simulations in different papers are generally run in different environments (computer speed, CPU, RAM). We would need to rebuild each complete classification system (feature extraction + selection + classifier) to make such a comparison, because the information on these other parameters cannot simply be adopted from the literature. Therefore, we include only the necessary parameters in the comparison.

 

 

7) Line 308 and 351. The algorithm was tested only with synthetic waveforms and then by adding some amount of noise, also synthetic. You should also use measured waveforms, that are available e.g. in data sets (you can make a search of "dataset" "voltage waveform" "power quality event").

Authors' response:

We would like to thank the reviewer for this suggestion. We have added a new section, section 6.5, to show the classification performance on real waveforms. The real PQD signals are taken from the PQube equipment (http://map.pqube.com), an instrument for power quality monitoring and real-time recording of electrical signal phenomena.

 

 

 

8) Table 5. "Spread constant" appears here for the first time without being introduced and explained. In general, details of the implementation, explanation of parameters, and operation of the ABC-PSO algorithm are not explained. Please provide details and explanations in the manuscript.

Authors' response:

Thank you for this remark. We would like to clarify that the spread constant was already mentioned in Equation (7), but under the name "spread parameter", which might have misled the reviewer. We have therefore revised it to "spread constant", as shown in line 222. In addition, the spread constant indicated in Table 5 is simply the value associated with the reported accuracy; we have added this information at line 369.

 

 

9) Sec. 6.1, Line 332, Table 6, Table 7, Table 9. You refer to accuracy and classification accuracy. You should define it first. Since it is a classification task, the answer of an algorithm is either right or wrong.

Authors' response:

            We would like to thank the reviewer for this helpful suggestion. We have defined the accuracy and the overall (classification) accuracy at line 316. We have also revised every occurrence of the word "accuracy" in the paper so that it corresponds to the appropriate definition.

 

 

i) You instead speak of an accuracy as a function of iteration numbers, and it is not clear if you derive an answer also when the algorithm has not finished and converged yet.

Authors' response:

This is a good remark. We would like to clarify that the accuracy shown in the paper has been validated as a completely steady-state solution. In Fig. 5, although the iteration axis (X-axis) shows only 2000 iterations, we simulated more than 10,000 iterations for each accuracy value to make sure the solution had completely converged; the figure was cropped only for a better view.

Following the reviewer's comment, we have added an explanation of the completely converged solution at line 413.

 

 

ii) In addition, since an answer is a discrete number and you show accuracy with two decimal places, it is possible to achieve such a numeric representation only with a minimum of 100 tests. Please clarify how many tests you have done for each specific case to derive the accuracy value.

Authors' response:

We would like to thank the reviewer for this remark. The reviewer is correct: we performed 100 tests for each PQ signal type. We have added this information to section 6.2 (line 379).

 

 

iii) In general, clarify how this classification accuracy is defined.

Authors' response:

The definitions of accuracy and overall (classification) accuracy have been added at line 316.

 

 

iv) Doing different runs (or tests) with the same specific case (so, same type of PQD, same algorithm, same algorithm settings), you need other parameters to change between runs: the PQD parameters cannot be changed because it would change the type of waveform; noise is only introduced later and it was commented on for its representativeness. So, it is really necessary that you clarify how you performed the tests and how you rated the algorithms' answers, and then calculate the classification accuracy.

Authors' response:

For this comment, we would like to point out that the reviewer may have misunderstood the meaning of the different runs (or tests) in our simulations. As already mentioned in line 116, "the signals were randomly generated for each test within their mathematical constraints"; what changes between runs is therefore the set of signal parameters randomly drawn, within those constraints, to generate the PQ signals. Accordingly, the accuracy was calculated over these randomly generated signals, as defined in response to the previous comments. We have nevertheless added information to improve the explanation of the tests, as shown in lines 117 (different testing parameters) and 316 (accuracy).

 

 

10) Line 352. Regarding the "magnitude" of noise, you should clarify what this magnitude is. It should be a signal-to-noise ratio. And then you should clarify if the ratio is of amplitude or, better, of power.

Authors' response:

We would like to thank the reviewer for this remark. The reviewer is correct: the magnitude of the noise refers to the signal-to-noise ratio, and the SNR is by definition a ratio of powers. We have added this revision at line 400.
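For clarity, a minimal sketch of adding white Gaussian noise at a prescribed SNR defined as a power ratio in dB is given below; the 50 Hz sine test signal and the 30 dB value are illustrative placeholders, not the paper's test conditions.

```python
# Sketch of adding white Gaussian noise at a prescribed SNR, with the SNR
# defined as a power ratio in dB (as stated above). Test signal is a placeholder.
import numpy as np

def add_noise(signal, snr_db, rng=None):
    """Add noise so that signal power / noise power = 10**(snr_db / 10)."""
    if rng is None:
        rng = np.random.default_rng(0)
    p_signal = np.mean(signal ** 2)                  # signal power
    p_noise = p_signal / (10 ** (snr_db / 10.0))     # required noise power
    noise = rng.normal(0.0, np.sqrt(p_noise), signal.shape)
    return signal + noise

t = np.arange(0, 0.2, 1 / 10_000)                    # 10 kHz sampling
clean = np.sin(2 * np.pi * 50 * t)                   # 50 Hz fundamental
noisy = add_noise(clean, snr_db=30)                  # 30 dB SNR (power ratio)
```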

 

 

11) Figure 2. You show on the Y axis an "accuracy" label.

It is not clear if it is an error or an accuracy, if it is a single value or the result of some statistical operation, etc. In other words, it was not defined and needs to be defined, so that all these figures and tables can be understood.

Authors' response:

We have revised the Y-axis label of Fig. 2 to "Overall accuracy (%)"; this term is defined (at line 316) following the previous comments.

 

 

12) Table 9. Please, clarify if the results shown in Table 9 for the other algorithms have been recalculated by you or are derived from the literature.

Authors' response:

The results for the other algorithms shown in Table 9 (updated to Table 10) are taken from the literature; we have added this statement at line 448.

 

 

13) Line 381. Again on "accuracy". You say that the accuracy is 99.31% for identifying all 13 types of events, that is 13 out of 13, so 100%. This is related to the number of significant digits already commented and the lack of definition of this "accuracy" index of performance.

Authors' response:

Thank you for this remark. Following the previous comment, we added the definition of accuracy at line 316: the 99.31% figure is the overall accuracy, i.e., the average of the per-class accuracies of the individual power quality signal types. This way of computing the overall accuracy is standard in PQD classification.
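To make the averaging explicit, here is a tiny worked example of the overall-accuracy computation described above: per-class accuracy over 100 tests per signal type, averaged over 13 classes. The per-class counts are invented placeholders, chosen only so that the average lands near the reported value; they are not the paper's actual results.

```python
# Worked illustration of the overall-accuracy definition: per-class accuracy
# over 100 tests per signal type, averaged across all 13 classes.
# The correct-classification counts below are invented placeholders.
correct_per_class = [100, 99, 100, 98, 100, 99, 100, 100, 97, 100, 99, 100, 99]
tests_per_class = 100

per_class_acc = [c / tests_per_class * 100 for c in correct_per_class]
overall_acc = sum(per_class_acc) / len(per_class_acc)
print(f"overall accuracy: {overall_acc:.2f}%")        # prints 99.31% here
```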

 

 

14) References. Standards are not really online resources, because they are not freely accessible, and they are sold by many sources (not only globalspec). Standards should be treated as publications, with a code, a title and a date of publication. The date on which you accessed them is irrelevant.

Authors' response:

We would like to thank the reviewer for this helpful remark. We have revised the reference format of all standards.

 

 

Sincerely,

Authors

 

 

Author Response File: Author Response.pdf

Round 2

Reviewer 2 Report

Thank you for your revision. The paper needs significant changes in terms of the English language and the flow of information. Some of my comments are below, but they are not exhaustive.

  1. There are issues with the use of lowercase and capital letters. For example, in the abstract, "artificial bee colony" is written with lowercase initials but "Discrete Wavelet Transform" with capitals. This occurs throughout the whole paper and needs a thorough revision.
  2. Section 3, feature extraction: the introductory paragraph is disorganized and difficult for readers to understand.
  3. The paper lacks consistency and flow in the presentation of information. For example, in line 185, the sentence ends with "(focused on)". Similar mistakes can be identified in different places.
  4. Use of 'is' and 'was' in the same sentence. Lines 204 and 205.

Author Response

Cover letter for revision

 

Manuscript Information

 

Journal: Energies

Manuscript ID: energies-1106647

Title: High-accuracy power quality disturbance classification using the adaptive ABC-PSO as optimal feature selection algorithm

Authors: Supanat Chamchuen, Apirat Siritaratiwat, Pradit Fuangfoo, Puripong Suthisopapan and Pirat Khunkitti

 

 

Dear Editor/Reviewers,

Thank you for allowing us to revise the manuscript and for the opportunity to address the reviewers' comments. We would also like to thank the reviewers for their valuable comments, which have improved our work. In this letter, our point-by-point responses to the reviewers' comments are given below. A revised manuscript with the "Track Changes" function indicating the changes is submitted as the main document.

           

 

Reviewer 2's comment

Thank you for your revision. The paper needs significant changes in terms of the English language and the flow of information. Some of my comments are below, but they are not exhaustive.

 

  1. There are issues with the use of lowercase and capital letters. For example, in the abstract, "artificial bee colony" is written with lowercase initials but "Discrete Wavelet Transform" with capitals. This occurs throughout the whole paper and needs a thorough revision.

Authors' response:

We would like to thank the reviewer for this helpful comment. We have revised these issues throughout the whole paper; the revisions are shown in lines 11, 50, 53, 89, 98, 100, and 115.

 

  2. Section 3, feature extraction: the introductory paragraph is disorganized and difficult for readers to understand.

Authors' response:

We have revised the introductory paragraph of section 3.

 

  3. The paper lacks consistency and flow in the presentation of information. For example, in line 185, the sentence ends with "(focused on)". Similar mistakes can be identified in different places.

Authors' response:

We would like to thank the reviewer for this comment. We have corrected the mentioned wording in line 185 (updated to line 196). In addition, the revised version of our paper has been grammatically checked by the MDPI English editing service; the certification is attached.

 

  4. Use of 'is' and 'was' in the same sentence. Lines 204 and 205.

Authors' response:

We have corrected this error.

 

 

 

 

Sincerely,

Authors

Author Response File: Author Response.pdf

Reviewer 3 Report

Dear Authors,

thank you for your replies and kind explanations and amendments to the manuscript. It is much clearer: the organization and readability have improved. I have no other comments, besides some minor English issues, sometimes in the introduced changes.

Author Response

Cover letter for revision

 

Manuscript Information

 

Journal: Energies

Manuscript ID: energies-1106647

Title: High-accuracy power quality disturbance classification using the adaptive ABC-PSO as optimal feature selection algorithm

Authors: Supanat Chamchuen, Apirat Siritaratiwat, Pradit Fuangfoo, Puripong Suthisopapan and Pirat Khunkitti

 

 

Dear Editor/Reviewers,

Thank you for allowing us to revise the manuscript and for the opportunity to address the reviewers' comments. We would also like to thank the reviewers for their valuable comments, which have improved our work. In this letter, our point-by-point responses to the reviewers' comments are given below. A revised manuscript with the "Track Changes" function indicating the changes is submitted as the main document.

           

 

Reviewer 3's comment

Dear Authors, thank you for your replies and kind explanations and amendments to the manuscript. It is much clearer: the organization and readability have improved. I have no other comments, besides some minor English issues, sometimes in the introduced changes.

Authors' response:

We would like to thank the reviewer for the positive assessment of our manuscript. We have used the MDPI English editing service to correct all grammatical errors in the paper; the certification is attached.

 

 

 

 

Sincerely,

Authors

Author Response File: Author Response.pdf
