Article
Peer-Review Record

Event-Driven Dashboarding and Feedback for Improved Event Detection in Predictive Maintenance Applications

Appl. Sci. 2021, 11(21), 10371; https://doi.org/10.3390/app112110371
by Pieter Moens 1,*, Sander Vanden Hautte 1, Dieter De Paepe 1, Bram Steenwinckel 1, Stijn Verstichel 1, Steven Vandekerckhove 2, Femke Ongenae 1 and Sofie Van Hoecke 1
Reviewer 1: Anonymous
Reviewer 2: Anonymous
Reviewer 3: Anonymous
Submission received: 23 August 2021 / Revised: 21 October 2021 / Accepted: 27 October 2021 / Published: 4 November 2021
(This article belongs to the Special Issue Overcoming the Obstacles to Predictive Maintenance)

Round 1

Reviewer 1 Report

This is an interesting paper describing an integrated system for event detection. As a whole, however, the paper focuses too much on implementation rather than on theory or methodology, which is not preferable for a journal paper. It is advisable to focus more on theory or methodology.

  1. This paper is written from the viewpoint of system integration of several functional components. While it is interesting from that viewpoint, it would be preferable to describe the key technologies, including how the knowledge-driven and data-driven approaches are combined and how feedback is captured from the user, so that readers can gain an overview of the technologies.
  2. Please enlarge fonts in the figures, e.g., Figure 1.
  3. Sections 4 and 5: These sections are written by referring to Listings 1–5. This is not preferable for a journal paper. Please describe the methodology in a more formal manner without relying on the Listings; of course, the Listings can still be used as examples.
  4. Please add an annex describing the ontology defined in this paper, e.g., dashb: AnomalyVisualization.
  5. This paper deals with only three kinds of events. What will happen if the number of events and sensors increases ten- or a hundred-fold? Combinatorial explosion, loss of integrity, or critical calculation delays may occur. This point, and the resulting limitation on the scalability of the method, should be discussed.


Author Response

Our point-by-point response to each question and comment is enclosed in the attached rebuttal PDF document.

Author Response File: Author Response.pdf

Reviewer 2 Report

See the attached document.

Comments for author File: Comments.pdf

Author Response

Our point-by-point response to each question and comment is enclosed in the attached rebuttal PDF document.

Author Response File: Author Response.pdf

Reviewer 3 Report

The authors present a holistic framework for detecting outliers, anomalies, and, ultimately, new events in data streams recorded by sensors. The description of the components provides a nice overview of the functionality of the whole framework (which, judging by the evaluations in Section 8, is realized within a concrete software tool); however, it lacks a more detailed description of the concrete methodological components it embeds.

For instance, it is nowhere explained how user feedback (in the form of event labellings) is dynamically integrated into the event/anomaly detectors in order to improve their performance. The authors merely describe temporal and semantic rule mining in a general, descriptive style; no formulas or concrete algorithms are given for how the integration of the user feedback is actually achieved. Are the rules incrementally updated, or even evolved on the fly, or re-trained on past batch data, and if so, with which algorithmic steps?

The same consideration applies to the data-driven and knowledge-based event detectors mentioned in Sections 3.1 and 3.2; I write 'mentioned' because there are no formulas, algorithmic descriptions, etc., explaining how they actually work.

In this sense, the paper needs a significant extension in this direction.

Furthermore, a bottleneck I see in the current framework is its univariate view of the sensor data. The authors mention the CO2 sensor several times as an example, and describe how events are linked there to sensor properties. However, I suspect that events (and especially anomalies) are often multivariate in nature, i.e., they cannot be recognized from a single sensor alone but require a multivariate view of the interrelations in the sensor data (especially in the case of more complex intermittent faults). How are such multivariate events handled in the framework? The authors should clarify this.

Author Response

Our point-by-point response to each question and comment is enclosed in the attached rebuttal PDF document.

Author Response File: Author Response.pdf

Round 2

Reviewer 1 Report

The revised version corresponds well to the reviewer's comments.

Reviewer 2 Report

The authors have sufficiently addressed most of my comments on the previous version of this paper. Not all concerns were covered (a deeper description of how learning can proceed), but I understand that this is a first-results paper and that going deeper into these issues would overload it. Therefore, the paper can now be accepted.

Reviewer 3 Report

The authors have sufficiently addressed all my comments on the preliminary version of the paper. Therefore, it is now ready for publication.
