Article
Peer-Review Record

Infrared Infusion Monitor Based on Data Dimensionality Reduction and Logistics Classifier

Processes 2020, 8(4), 437; https://doi.org/10.3390/pr8040437
by Xiaoli Wang, Haonan Zhou and Yong Song *
Reviewer 1: Anonymous
Reviewer 2: Anonymous
Reviewer 3: Anonymous
Submission received: 13 February 2020 / Revised: 26 March 2020 / Accepted: 31 March 2020 / Published: 7 April 2020

Round 1

Reviewer 1 Report

  1. First of all, the quality of the figures must be improved, especially Figures 3, 8, and 9. It is really difficult to make out the characters, numbers, and lines in the figures.
  2. The authors propose a cost-effective dimensionality reduction method. They simplify the input signal into a seven-dimensional vector before the reduction. Therefore, the meaning of this vector and the process used to construct it must be presented in Section 2. I also recommend separating Section 2 into a couple of sections or subsections according to subject. That will be much easier to read and understand.
  3. In Section 3 (Results), I cannot understand the claim that "The mean squared loss and the variation of the operational parameters we need with the number of trainings are shown in Figure 7". What is the variation of the parameters in the figure? The authors also say they will tune some parameters. Let the reader know the details of these parameters and their features. In addition, could you explain why the authors do not need some features?
  4. Do you have any comparison data for size, cost, and power consumption of the proposed method? Without such data, I can hardly agree with the claim of an optimal design of the infusion monitor in the first sentence of the conclusion section.

Author Response

Firstly, we would like to thank the reviewer for the positive and constructive comments. In accordance with your comments, we have checked our manuscript carefully. Grammatical and language errors and other inexact expressions in the manuscript have been corrected. The important changes in our revised manuscript have been marked in red. Thanks again.

Reviewer’s Comments:

Point 1: First of all, the quality of the figures must be improved, especially Figures 3, 8, and 9. It is really difficult to make out the characters, numbers, and lines in the figures.

Response 1: Thank you for your comments. We have replaced the figure with a better one. At the same time, since Figure 8 has no effect on the article, we have removed it.

 

Point 2: The authors propose a cost-effective dimensionality reduction method. They simplify the input signal into a seven-dimensional vector before the reduction. Therefore, the meaning of this vector and the process used to construct it must be presented in Section 2. I also recommend separating Section 2 into a couple of sections or subsections according to subject. That will be much easier to read and understand.

Response 2: Thank you for your professional opinion. In accordance with your suggestion, we have divided the second section into three parts: the equipment, the data dimensionality reduction method, and the logistic classifier.

 

Point 3: In Section 3 (Results), I cannot understand the claim that "The mean squared loss and the variation of the operational parameters we need with the number of trainings are shown in Figure 7". What is the variation of the parameters in the figure? The authors also say they will tune some parameters. Let the reader know the details of these parameters and their features. In addition, could you explain why the authors do not need some features?

Response 3: The loss function is a concept from machine learning used to represent the gap between the model output and the expected value. Here, we use the sum of squared differences between each output value and its corresponding expected value. When discussing such data, we are more concerned with the trend of the model performance, so the model loss is presented as a plain numerical value without a unit. Regarding the changing quantities in the figure, the horizontal axis is the number of training iterations and the vertical axis is the value of the loss function.
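For illustration, a minimal sketch of the squared-error loss described above (a NumPy implementation with illustrative values; the variable names are not taken from the manuscript):

```python
import numpy as np

def squared_error_loss(output, expected):
    """Sum of squared differences between the model output and the expected values."""
    output = np.asarray(output, dtype=float)
    expected = np.asarray(expected, dtype=float)
    return np.sum((output - expected) ** 2)

# Example: a reconstructed 7-dimensional voltage vector compared with the original one.
loss = squared_error_loss([0.9, 1.1, 2.0, 1.8, 0.7, 0.5, 0.4],
                          [1.0, 1.0, 2.1, 1.7, 0.8, 0.5, 0.4])
print(loss)  # a plain number without a unit, as discussed above
```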

 

Point 4: Do you have any comparison data for size, cost, and power consumption of the proposed method? Without such data, I can hardly agree with the claim of an optimal design of the infusion monitor in the first sentence of the conclusion section.

Response 4: Thank you for your professional opinion. The method proposed in this paper mainly works at the algorithm level and has no obvious influence on the size, cost, or power consumption of the product. Finally, special thanks again for your comments!

 

Reviewer 2 Report

Manuscript Summary

[1] The manuscript proposes a method to build a drop infusion monitor using an embedded device with an infrared detector and a neural network to classify the signal (here defined as logistic regression)

[2] Given the limited resources of the device, the method proposes the application of dimensionality reduction on the voltage signal generated by the photodiode

[3] The dimensionality reduction is based on the embeddings of an autoencoder with a single bottleneck layer

General evaluation

[4] The idea of the paper is interesting, although the novelty of the methodological part is limited. Unfortunately, there are also a couple of serious methodological flaws that undermine the described work, at least from the machine learning point of view.

Specific comments

[5] The writing is in dire need of proper English editing. Many phrases are incredibly hard to read and the meaning is not clear. I strongly suggest a full rewrite with the help of a native English speaker.

[6] The first serious flaw is described in lines 201 to 206. If I understand correctly, the autoencoder is trained until acceptable results are obtained on the test dataset. Unfortunately, a proper test dataset is supposed to be used only once. What you are describing here is the validation set, but an actual test dataset is missing. This approach is prone to overfitting.

[7] Neither the data used to train the autoencoder nor the results are described at all. Not a single number or example of the test reconstruction error is provided. Figure 7 shows the loss during training and is completely uninformative about the actual reconstruction results on the test set.

[8] The second serious flaw is that the classifier accuracy is "tested" only against the theoretical coefficient of the infusion tube, not against the actual number of drops, so there is no actual ground-truth data available. At least a comparison with other infusion devices is expected.

[9] Too many details of the signal processing, model training, and data collection are hidden behind vague wording like "we will fine-tune some parameters" (Line 267 - 268), "training data under different drop speeds is added" (Line 274 - 275) without specifying the actual quantities.

Minor comments

[7] Figures 7 and 9 are directly taken from the Tensorboard output and are totally uninformative about the actual model performance, since they refer to the training set.

[8] Figure 8 is directly taken from the Tensorboard output and is completely useless.

 

 

Author Response

Firstly, we would like to thank the reviewer for the positive and constructive comments. In accordance with your comments, we have checked our manuscript carefully. Grammatical and language errors and other inexact expressions in the manuscript have been corrected. The important changes in our revised manuscript have been marked in red. Thanks again.

Reviewer’s Comments:

Point 1: The writing is in dire need of proper English editing. Many phrases are incredibly hard to read and the meaning is not clear. I strongly suggest a full rewrite with the help of a native English speaker.

Response 1: We are sorry for the poor English in the previous manuscript. We have revised the whole manuscript carefully and tried to avoid any grammar or syntax errors. In addition, we have asked several colleagues who are experienced authors of English-language papers to check the English. Thanks for your suggestion.

Point 2: The first serious flaw is described in lines 201 to 206. If I understand correctly, the autoencoder is trained until acceptable results are obtained on the test dataset. Unfortunately, a proper test dataset is supposed to be used only once. What you are describing here is the validation set, but an actual test dataset is missing. This approach is prone to overfitting.

Response 2: Thank you for your professional advice. As you said, the autoencoder is trained until an acceptable result is obtained on the test dataset. In this work, part of a group of data is selected as the training set and the remaining part as the test set. There is no overlap between the two parts, so each data point is used only once.

Point 3: Neither the data used to train the autoencoder nor the results are described at all. Not a single number or example of the test reconstruction error is provided. Figure 7 shows the loss during training and is completely uninformative about the actual reconstruction results on the test set.

Response 3: Thank you for pointing this out. We redesigned the experiment shown in Figure 7 so that it better reflects the loss on both the training set and the test set.

Point 4: The second serious flaw is that the classifier accuracy is "tested" only against the theoretical coefficient of the infusion tube, not against the actual number of drops, so there is no actual ground-truth data available. At least a comparison with other infusion devices is expected.

Response 4: In the final experiment, we applied the algorithm model to an actual device and tested it in a real environment. However, related products do not report accuracy figures, so a direct comparison with other products is difficult; what can be learned from the industry is that existing equipment does not meet the accuracy requirements. Our method, tested in a variety of real environments, meets the standard required for practical application: the error is within 2%, which for a 500 mL infusion bottle corresponds to an error of about 5 to 10 mL.

Point 5: Too many details of the signal processing, model training, and data collection are hidden behind vague wording like "we will fine-tune some parameters" (Line 267 - 268), "training data under different drop speeds is added" (Line 274 - 275) without specifying the actual quantities. 

Response 5: The phrase 'training data under different drop speeds is added' means that the data sets we collected cover the situations that may be encountered in practice, including different drop speeds, in order to minimize the gap between the training error and the generalization error. 'We will fine-tune some parameters' means that during classifier training, the parameters of the data dimensionality reduction algorithm are adjusted accordingly so that they better suit the classification task. However, the learning rate at this stage is only one thousandth of that used in the previous training, which is why it is called fine-tuning.
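As an illustration of what fine-tuning with a reduced learning rate can look like, here is a hypothetical Keras sketch (the layer sizes, the base learning rate of 0.02, and the dummy data are assumptions, not the authors' actual training script):

```python
import numpy as np
import tensorflow as tf

# Hypothetical stand-ins for the pre-trained encoder and the logistic classifier head.
encoder = tf.keras.Sequential([tf.keras.layers.Dense(3, activation="sigmoid", input_shape=(7,))])
classifier = tf.keras.layers.Dense(1, activation="sigmoid")
model = tf.keras.Sequential([encoder, classifier])

base_lr = 0.02                  # assumed learning rate of the earlier training stage
fine_tune_lr = base_lr / 1000   # fine-tuning: one thousandth of the original learning rate

model.compile(optimizer=tf.keras.optimizers.RMSprop(learning_rate=fine_tune_lr),
              loss="binary_crossentropy", metrics=["accuracy"])

# Dummy data standing in for the seven-dimensional drop signals and their labels.
x_train = np.random.rand(100, 7).astype("float32")
y_train = np.random.randint(0, 2, size=(100, 1)).astype("float32")
model.fit(x_train, y_train, epochs=2, verbose=0)
```

During fine-tuning, the encoder weights are still updated together with the classifier, but the small learning rate keeps them close to the values learned during pre-training.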

Point 6: Figures 7, 9 are directly taken from the Tensorboard output and are totally uninformative of the actual model performance, since they refer to the training set. 

Response 6: Thank you for your professional advice. We have changed the contents of Figures 7 and 9 and replaced them with training and test diagrams from two experiments. The vertical axis is the loss value, and the horizontal axis is the number of training iterations. This better reflects the performance of the model.

Point 7: Figure 8 is directly taken from the Tensorboard output and is completely useless.  

Response 7: Thanks for your careful comments. As you said, Figure 8 does not add much to the article as a whole, so we have removed it. Finally, special thanks again for your comments!

Reviewer 3 Report

The paper presents an improvement to the data processing of an infrared infusion monitoring method, based on a data dimensionality reduction method and a logistic classifier, for medical application. The following comments should be addressed to improve the content and quality of the paper prior to publication:

  1. Line 40: a citation usually mentions the author's surname, not an abbreviated name. You need to mention the full name of F.Z.
  2. It is not clear what data dimensionality reduction method is used in the paper. According to the equations and Figures 4 and 5, it looks like a neural network. I suggest the Authors provide a more detailed section about the proposed data reduction method. For example, pseudo-code can be presented in the paper.
  3. Chapter 2 looks mixed up. It would be better to separate Section 2 into three sub-sections: (1) the equipment, (2) the data dimensionality reduction method, (3) the logistic classifier.
  4. Please also provide a more detailed section about the logistic classifier. Why did the Authors select this classifier instead of other classifiers?
  5. Figure 8 has poor resolution. Please replace the figure with a better one.
  6. Figure 9(a) and (b): what are the units of the x- and y-axes?
  7. Figure 9(b): there are two lines in the figure, light solid red and dark solid red. Please provide a legend to describe these lines.
  8. Line 290: an academic paper is usually written in the passive voice, so the subject 'we' is not the proper form. Please revise the paper thoroughly.
  9. Figure 10: what is the unit of time on the x-axis, seconds or minutes?

Author Response

Firstly, we would like to thank the reviewer for the positive and constructive comments. In accordance with your comments, we have checked our manuscript carefully. Grammatical and language errors and other inexact expressions in the manuscript have been corrected. The important changes in our revised manuscript have been marked in red. Thanks again.

Reviewer’s Comments:

Point 1: Line 40: a citation usually mentions the author's surname, not an abbreviated name. You need to mention the full name of F.Z.

Response 1: Thanks for your professional advice. According to your suggestion, we have modified the corresponding parts of the manuscript and marked them in red.

Point 2: It is not clear what data dimensionality reduction method is used in the paper. According to the equations and Figures 4 and 5, it looks like a neural network. I suggest the Authors provide a more detailed section about the proposed data reduction method. For example, pseudo-code can be presented in the paper.

Response 2: Thanks for your comment. The data dimensionality reduction method used in this paper is autoencoder-based dimensionality reduction. Pseudo-code for the neural network has been added to the paper.
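For readers who want a concrete picture, a minimal sketch of an autoencoder of the kind described (seven-dimensional input, a single bottleneck layer, seven-dimensional reconstruction); the bottleneck size, activations, and optimizer are assumptions rather than values from the paper:

```python
import numpy as np
import tensorflow as tf

INPUT_DIM = 7   # the seven-dimensional voltage vector
CODE_DIM = 3    # bottleneck size (assumed, not specified here)

inputs = tf.keras.Input(shape=(INPUT_DIM,))
code = tf.keras.layers.Dense(CODE_DIM, activation="sigmoid", name="bottleneck")(inputs)
outputs = tf.keras.layers.Dense(INPUT_DIM, activation="linear")(code)

autoencoder = tf.keras.Model(inputs, outputs)
autoencoder.compile(optimizer="rmsprop", loss="mse")   # mean squared reconstruction loss

# Dummy training data standing in for the collected voltage samples.
x = np.random.rand(1000, INPUT_DIM).astype("float32")
autoencoder.fit(x, x, epochs=5, verbose=0)

# The bottleneck output is the reduced representation fed to the logistic classifier.
encoder = tf.keras.Model(inputs, code)
reduced = encoder.predict(x[:5])
print(reduced.shape)  # (5, 3)
```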

Point 3: Chapter 2 looks mixed up. It would be better to separate Section 2 into three sub-sections: (1) the equipment, (2) the data dimensionality reduction method, (3) the logistic classifier.

Response 3: Thank you for your professional opinion. According to your suggestion, we have restructured the second section and added the corresponding neural network pseudo-code.

Point 4: Figure 8 has poor resolution. Please replace the figure with a better one.

Response 4: Thanks for your careful comment. Since Figure 8 was taken directly from the TensorBoard output, its resolution is low. However, Figure 8 does not add much to the article as a whole, so we have removed it.

Point 5: Figure 9(a) and (b): what are the units of the x- and y-axes?

Response 5: In Figure 9(a) and (b), the x-axis is the number of training iterations. The y-axis in Figure 9(a) is the loss, an indicator of the "difference" between the model output and the label value, so it has no unit. The y-axis in Figure 9(b) is the accuracy on the validation data, i.e., the ratio between the number of correctly classified samples and the total number of samples, which is also unitless.

Point 6: Figure 9(b): there are two lines in the figure, light solid red and dark solid red. Please provide a legend to describe these lines.

Response 6: There are two curves in Figure 9 because the figure was generated automatically by TensorBoard, the plotting tool released with TensorFlow. When TensorBoard generates the image from the data, it smooths the curve so that individual noisy points do not obscure the overall trend: the lighter line is the original data, and the darker line is the smoothed curve.
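For reference, TensorBoard's smoothing is an exponential moving average of the plotted values; a rough sketch of the idea (the smoothing weight here is illustrative, not the value used by the tool in our figures):

```python
def smooth(values, weight=0.6):
    """Exponential moving average, similar in spirit to TensorBoard's smoothing slider."""
    smoothed, last = [], values[0]
    for v in values:
        last = last * weight + (1 - weight) * v
        smoothed.append(last)
    return smoothed

raw = [0.9, 0.7, 0.75, 0.5, 0.55, 0.3, 0.35, 0.2]   # noisy loss values (the lighter curve)
print(smooth(raw))                                   # smoothed trend (the darker curve)
```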

Point 7: Line 290: an academic paper is usually written in the passive voice, so the subject 'we' is not the proper form. Please revise the paper thoroughly.

Response 7: Thank you for your professional opinion. We have changed some of the content of the article accordingly and highlighted the major changes in red.

Point 8: Figure 10: what is the unit of time on the x-axis, seconds or minutes?

Response 8: The accuracy test shown in Figure 10 runs from 12:00 to 18:00, i.e., a test time of 6 hours. We have changed the x-axis label to "time (h)".

Finally, special thanks again for your comments!

Round 2

Reviewer 1 Report

The paper is improved and good for readers.

The only comment is that it would be better to provide not only the accuracy but also the precision and recall.

Author Response

Reply to Reviewer’s Comments

 

Reply to Reviewer #1

 

Firstly, we would like to thank the reviewer for the positive and constructive comments. In accordance with your comments, we have checked our manuscript carefully. Grammatical and language errors and other inexact expressions in the manuscript have been corrected. The important changes in our revised manuscript have been marked in red. Thanks again.

 

Reviewer’s Comments:

 

Point 1: The only comment is that it would be better to provide not only the accuracy but also the precision and recall.

 

Response 1: Thank you for your comments. In the actual data collection for this paper, there are 2682 samples from the shaded place, 3010 samples from indoors, and 2611 samples from by the window. With stratified sampling, 70% of the sampled data is used for training and 30% for testing. After training, the recall measured on the training set is 100%, and the accuracy measured on the test set is also 100%. When the model is applied to the actual Bluetooth device, these values decrease slightly. The results are shown in Figure 1 (please see <Reply to Reviewer1's Comments.PDF> for details). A supplement on this point has also been added to lines 306-310 of the manuscript.
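A sketch of how such a stratified 70/30 split and the accuracy, precision, and recall figures can be computed with scikit-learn (the feature vectors, labels, and classifier below are placeholders, not the trained model from the paper):

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score, precision_score, recall_score

# Dummy 7-dimensional feature vectors and drop/no-drop labels.
n = 2682 + 3010 + 2611
X = np.random.rand(n, 7)
y = np.random.randint(0, 2, size=n)
# Collection environment of each sample: shaded place, indoors, by the window.
env = np.array([0] * 2682 + [1] * 3010 + [2] * 2611)

# 70% training, 30% testing, stratified so each environment keeps its proportion.
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, stratify=env, random_state=0)

clf = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
pred = clf.predict(X_te)
print(accuracy_score(y_te, pred), precision_score(y_te, pred), recall_score(y_te, pred))
```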

Figure 1. The change in accuracy in this test.

Finally, special thanks again for your comments!

 

Author Response File: Author Response.pdf

Reviewer 2 Report

Response 1: We are sorry for the poor English in the previous manuscript. We have revised the whole manuscript carefully and tried to avoid any grammar or syntax errors. In addition, we have asked several colleagues who are experienced authors of English-language papers to check the English. Thanks for your suggestion.

Comment 1: Although some sentences have been rewritten, the overall language level is still below the acceptable threshold for journal publication. I renew the suggestion to get a review from a native English speaker if you can.

 

Response 2: Thank you for your professional advice. As you said, the autoencoder is trained until an acceptable result is obtained on the test dataset. In this work, part of a group of data is selected as the training set and the remaining part as the test set. There is no overlap between the two parts, so each data point is used only once.

Comment 2: What you are describing is a validation dataset. The purpose of the test dataset is to be used after the training, and only once. An actual test dataset is missing. For a very brief introduction to train, validation, and test datasets, please see this blog post: https://towardsdatascience.com/train-validation-and-test-sets-72cb40cba9e7

 

Response 3: Thank you for pointing this out. We redesigned the experiment shown in Figure 7 so that it better reflects the loss on both the training set and the test set.

Comment 3: A real test set is still missing. The TensorBoard graph is uninformative of the actual test results. You should show at least some examples of input (the 7 voltage values) from the test set along with the reconstructed values (in a table or in a plot), and provide the value of the reconstruction accuracy (in percentage).

 

Response 4: In the final experiment, we applied the algorithm model to an actual device and tested it in a real environment. However, related products do not report accuracy figures, so a direct comparison with other products is difficult; what can be learned from the industry is that existing equipment does not meet the accuracy requirements. Our method, tested in a variety of real environments, meets the standard required for practical application: the error is within 2%, which for a 500 mL infusion bottle corresponds to an error of about 5 to 10 mL.

Comment 4: It should be explicitly mentioned in the text that there is no actual ground truth or comparison with other devices for the number of drops and that only the theoretical infusion numbers are used as a comparison.

 

Response 5: The phrase 'training data under different drop speeds is added' means that the data sets we collected cover the situations that may be encountered in practice, including different drop speeds, in order to minimize the gap between the training error and the generalization error. 'We will fine-tune some parameters' means that during classifier training, the parameters of the data dimensionality reduction algorithm are adjusted accordingly so that they better suit the classification task. However, the learning rate at this stage is only one thousandth of that used in the previous training, which is why it is called fine-tuning.

Comment 5: Again, this can be partially acceptable, but it is only qualitative and not quantitative. You should provide the actual number of samples for each test environment: how many samples were collected by the window, indoors, and in the shaded place to test the model? Moreover, the values of all parameters used for the training and fine-tuning of the model (η, γ, ε, T) that you introduce in Algorithm 1 have to be specified.

 

Response 6: Thank you for your professional advice. We have changed the contents of Figures 7 and 9 and replaced them with training and test diagrams from two experiments. The vertical axis is the loss value, and the horizontal axis is the number of training iterations. This better reflects the performance of the model.

Comment 6: The figures are only partially informative because they show only the behaviour of the loss during the training and validation. Additional graphics or tables showing examples of the behaviour at inference time (after the model has been trained) are needed. Please report at least some examples of input and output values with positive and negative cases.

Author Response

Please see <Reply to Reviewer2's Comments.PDF> for details, as there are many pictures in the response.

 

Reply to Reviewer’s Comments

 

Reply to Reviewer #2

 

Firstly, we would like to thank the reviewer for the positive and constructive comments. In accordance with your comments, we have checked our manuscript carefully. Grammatical and language errors and other inexact expressions in the manuscript have been corrected. The important changes in our revised manuscript have been marked in red. Thanks again.

 

Reviewer’s Comments:

 

Point 1: Although some sentences have been rewritten, the overall language level is still below the acceptable threshold for journal publication. I renew the suggestion to get a review from a native English speaker if you can.

 

Response 1: Thank you for your careful review. We have revised the whole manuscript carefully and tried to avoid any grammar or syntax errors. The important changes in our revised manuscript have been marked in red. Thanks again.

 

Point 2: What you are describing is a validation dataset. The purpose of the test dataset is to be used after the training, and only once. An actual test dataset is missing. For a very brief introduction to train, validation, and test datasets, please see this blog post: https://towardsdatascience.com/train-validation-and-test-sets-72cb40cba9e7

Response 2: Thank you for your professional advice. We are sorry that we neglected this detail before: in our initial plan, the algorithm was placed on the device for actual testing, which we regarded as the best test set, so we did not set aside a test dataset in the first phase of the experiment. The test results are shown in Figure 1(a) and (b), where red is the input data (original data) and green is the reconstructed data.

   

Figure 1. Test set results: (a) random set 1; (b) random set 2.

These two graphs illustrate the generalization ability of the algorithm. The relevant content has been added in lines 306-309 of the manuscript.
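A sketch of how such an input-versus-reconstruction plot and a percentage reconstruction accuracy can be produced for one test sample (matplotlib; the two vectors below are placeholders, and the accuracy definition is only one possible choice):

```python
import numpy as np
import matplotlib.pyplot as plt

original = np.array([1.0, 1.2, 2.3, 1.9, 0.8, 0.6, 0.5])               # a 7-D test voltage vector (dummy)
reconstructed = np.array([0.97, 1.18, 2.25, 1.93, 0.82, 0.61, 0.52])   # autoencoder output (dummy)

plt.plot(original, color="red", marker="o", label="input (original data)")
plt.plot(reconstructed, color="green", marker="s", label="reconstructed data")
plt.xlabel("vector component")
plt.ylabel("voltage")
plt.legend()
plt.show()

# One possible reconstruction-accuracy figure, based on mean relative error.
rel_err = np.abs(original - reconstructed) / np.abs(original)
print(f"reconstruction accuracy: {100 * (1 - rel_err.mean()):.1f}%")
```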

 

Point 3: A real test set is still missing. The TensorBoard graph is uninformative of the actual test results. You should show at least some examples of input (the 7 voltage values) from the test set along with the reconstructed values (in a table or in a plot), and provide the value of the reconstruction accuracy (in percentage).

Response 3: Thank you for your professional opinion. For the test set experiment, as described in Response 2, we have added the relevant content in lines 306-309 of the manuscript.

 

Point 4: It should be explicitly mentioned in the text that there is no actual ground truth or comparison with other devices for the number of drops and that only the theoretical infusion numbers are used as a comparison. 

Response 4: Thank you for your suggestion. We have added the relevant explanation in lines 261-267 of the manuscript. Thanks again!

 

Point 5: Again, this can be partially acceptable, but it is only qualitative and not quantitative. You should provide the actual number of samples for each test environment: how many samples were collected by the window, indoors, and in the shaded place to test the model? Moreover, the values of all parameters used for the training and fine-tuning of the model (η, γ, ε, T) that you introduce in Algorithm 1 have to be specified.

Response 5: Thank you for your professional comments. In this paper, there are 2682 samples from the shaded place, 3010 samples from indoors, and 2611 samples from by the window. With stratified sampling, 70% of the sampled data is used for training and 30% for testing. In Algorithm 1, η is the step size, γ is the weight of the sliding (moving-average) gradient, ε is a small constant for numerical stability, and T is the mini-batch size. The values of the parameters in Algorithm 1 are as follows:

T = 5; η = 0.02; γ = 0.8; ε = 10^-6.    (1)

The relevant content has been added to the article and highlighted in red. Thanks again for your comments!
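For clarity, a sketch of how parameters with these names are typically used in an RMSProp-style mini-batch update; this is an assumption about the structure of Algorithm 1 based on the parameter names and values above, not code copied from the manuscript:

```python
import numpy as np

eta, gamma, eps, T = 0.02, 0.8, 1e-6, 5   # step size, gradient moving-average weight, stability constant, mini-batch size

def rmsprop_step(w, grad, s):
    """One RMSProp-style parameter update using the values from Equation (1)."""
    s = gamma * s + (1 - gamma) * grad ** 2     # running average of squared gradients
    w = w - eta * grad / (np.sqrt(s) + eps)     # scaled gradient step
    return w, s

# Toy usage: minimize the mean squared distance between w and mini-batches of T dummy samples.
w, s = np.ones(3), np.zeros(3)
for _ in range(100):
    batch = np.random.randn(T, 3)                # a mini-batch of T dummy samples
    grad = np.mean(2 * (w - batch), axis=0)      # gradient of the toy objective
    w, s = rmsprop_step(w, grad, s)
print(w)
```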

 

Point 6: The figures are only partially informative because they show only the behaviour of the loss during the training and validation. Additional graphics or tables showing examples of the behaviour at inference time (after the model has been trained) are needed. Please report at least some examples of input and output values with positive and negative cases. 

Response 6: As shown in Figure 2, the three curves are randomly taken from three different samples. The red curve is a positive case, for which both our label and the model's judgment are true. The green and blue curves describe negative cases. Although the green curve has a peak, the judgment of that droplet was already completed in the previous time step; the curve is therefore a negative case, so there is no repeated counting. How the model determines whether two adjacent spikes come from the same droplet is described in the model section of the manuscript.

Figure 2. Examples of input and output values.
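As an illustration of the idea behind "no repeated counting", here is a simplified refractory-window rule; in the paper the decision is made by the trained classifier, so this heuristic is only a hypothetical stand-in:

```python
def count_drops(is_drop_flags, refractory_steps=2):
    """Count positive classifications, ignoring positives that fall within
    `refractory_steps` time steps of the previously counted drop."""
    count, last_counted = 0, -10**9
    for t, flag in enumerate(is_drop_flags):
        if flag and t - last_counted > refractory_steps:
            count += 1
            last_counted = t
    return count

# Two adjacent spikes closer than the refractory window are counted as one droplet.
print(count_drops([0, 1, 1, 0, 0, 1, 0, 1]))  # -> 2
```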

Finally, special thanks again for your comments!

 

Author Response File: Author Response.pdf

Reviewer 3 Report

Dear Authors,

I have checked the revised manuscript and still found some minor errors:

  1. The figure numbers need to be rearranged. Line 274 refers to Figure 7, and then the text jumps to Figure 10 in line 295. I understand this is due to the revision, but please rearrange the figure numbers.
  2. Although Figures 7 and 10 are captured from TensorBoard, I think a legend is also necessary because these figures have two lines, i.e., light red and dark red. Readers may be confused about the difference between these two lines.
  3. The English still needs to be amended in some parts; for example, in the second paragraph of the Conclusion, the Authors still use the active voice instead of the passive voice.

Author Response

Reply to Reviewer’s Comments

 

Reply to Reviewer #3

Firstly, we would like to thank the reviewer for the positive and constructive comments. In accordance with your comments, we have checked our manuscript carefully. Grammatical and language errors and other inexact expressions in the manuscript have been corrected. The important changes in our revised manuscript have been marked in red. Thanks again.

Reviewer’s Comments:

Point 1: The figure numbers need to be rearranged. Line 274 refers to Figure 7, and then the text jumps to Figure 10 in line 295. I understand this is due to the revision, but please rearrange the figure numbers.

Response 1: Thank you for your careful review. We have carefully checked and corrected the figure numbers in the manuscript.

 

Point 2: Although Figures 7 and 10 are captured from TensorBoard, I think a legend is also necessary because these figures have two lines, i.e., light red and dark red. Readers may be confused about the difference between these two lines.

Response 2: Thanks for your comment. In Figures 7 and 8, the dark red line represents the original data, while the bright red line is the curve automatically smoothed by TensorBoard, which better shows the trend of the loss. We have added a note in lines 274 to 276 of the manuscript.

 

Point 3: The English still needs to be amended in some parts; for example, in the second paragraph of the Conclusion, the Authors still use the active voice instead of the passive voice.

Response 3: Thank you for your careful review. We have revised the whole manuscript carefully and tried to avoid any grammar or syntax errors. The important changes in our revised manuscript have been marked in red.

Finally, special thanks again for your comments!

 

Author Response File: Author Response.pdf
