Article

Engineering Application of a Product Quality Testing Method within the SCADA System Operator Education Quality Assessment Process

1 Division of Electronic Systems Exploitations, Faculty of Electronics, Institute of Electronic Systems, Military University of Technology, 2 Gen. S. Kaliski St., 00-908 Warsaw, Poland
2 Department of Computer and Control Engineering, Faculty of Electrical and Computer Engineering, Rzeszow University of Technology, 12 Powstańców Warszawy Ave, 35-959 Rzeszów, Poland
3 Division of Air Transport Engineering and Teleinformatics, Faculty of Transport, Warsaw University of Technology, 75 Koszykowa St., 00-662 Warsaw, Poland
* Author to whom correspondence should be addressed.
Appl. Sci. 2023, 13(7), 4139; https://doi.org/10.3390/app13074139
Submission received: 2 February 2023 / Revised: 10 March 2023 / Accepted: 22 March 2023 / Published: 24 March 2023

Abstract

An education system can be considered an anthropotechnical system consisting of an education subject (e.g., a trainee or operator), an examiner (teacher), a system decision maker (e.g., a teaching module coordinator), and the environment (e.g., administrative, social, IT). The outcome of this system is the knowledge and skills acquired by the trained student. The educational effect is usually diagnosed in the form of an exam or test. This study addresses the credibility of the applied assessment methods, knowledge qualification levels, and assessment errors. The analysis is conducted in accordance with the principles applied in statistical quality control when studying the defectiveness of a product population. Using known methods for testing product defectiveness, the authors characterized the trainee educational effect (i.e., his/her knowledge and skill level) in technical terms. The probability that an examined person demonstrating a specific knowledge level achieves an adequate grade was adopted as the reliability measure. The conducted calculations provided graphs of the reliability functions for the grades received within the "traditional" examination and testing process. The authors propose an application that tests a SCADA system user with tools already known to the trainee. The application was developed using the SCADA suite employed as a visualization element in distributed control systems, and it enables the automation of the operator diagnosis process.

1. Introduction

Distributed control systems (DCS) consist of, among others, process stations, operator stations, and engineering stations [1,2]. Process stations are industrial controllers that control processes; operator stations are used for visualization and operator interaction; and engineering stations are designed for the configuration, programming, and activation of control and visualization software. A SCADA (Supervisory Control and Data Acquisition) suite runs on the operator stations. DCS elements are interconnected via an industrial network (Figure 1). Such a system can be treated as a social engineering system [3,4]. The human factor impacts each significant stage of system operation. The reliability of this system depends not only on the reliability of the technical segment (hardware and software), but also on the reliability of the actions of its human users: the operator, the engineer, and the designer. All users should exhibit adequate psychomotor features and knowledge [5,6]: the system designer, responsible for the configuration and selection of appropriate hardware elements; the engineer, who launches and configures the entire system; and primarily the operator, who conducts the required actions from the operator station level throughout the entire operation process. The task of the user-operator is to monitor synoptic screens and, if required, properly influence the ongoing control process through the active elements of dynamic visualization images. Therefore, analyzing the reliability of the SCADA system operator education effect is important.
The rest of this article is organized as follows. The remainder of Section 1 contains a critical review of the source literature on the current state of the issue in question. Section 2 presents the operator knowledge reliability assessment method. Section 3 discusses an original application for testing a SCADA system operator's knowledge. Finally, Section 4 summarizes the conducted study.

1.1. Education Quality Assessment—Testing

Increasing the reliability of a SCADA system user, who constitutes an anthropogenic factor within a distributed control system, may be achieved through the skillful use of supporting IT tools, such as interactive demonstrators, simulators, and, above all, regular diagnosis based on applications for testing knowledge and skills [6]. The applications intended for testing SCADA system operators are tools that employ e-learning methods [7]. A user runs dedicated visualization systems [8] or (usually) universal SCADA suites [9,10] that enable, through numerous software drivers, compatibility with most industrial controllers. The application demonstrated in this paper has one more undeniable advantage. The use of the SCADA InTouch suite [11], which the user works with, provides additional opportunities to practice its operational principles. There is no need to additionally familiarize users with the operating principles of third-party, external applications. Simultaneously, the SCADA system user who takes a test trains the practical skills of working with the visualization package. Furthermore, this approach enables conducting tests remotely and separating the trainee functions (i.e., of the tested system user) from the teacher (examiner) functions, owing to the use of two different computer stations.
Most of the previously developed and applied diagnosis methods focus on diagnosing the properties of a technical facility, assuming that the properties of other anthropotechnical system (ATS) elements and the interrelations are correct [12,13,14]. A systemic approach to this issue requires diagnosis processes to include all elements and relations within an anthropotechnical system. Among the many possible approaches to this problem, particular attention should be paid to the process of diagnosing an anthropotechnical pair, which is a structure consisting of an operator and a technical facility (Figure 2).
Searching for a relatively universal performance diagnosis method (i.e., one applicable both to a technical facility and to a human operator) leads to the conclusion that testing may be the solution.
There are numerous practical examples of testing as a specific diagnosis method [15,16,17]. This specificity is mainly due to the fact that a diagnostic test comes down to stimulating a facility with specific tester-generated signals [18,19,20] and to recording facility-generated responses [21,22,23].
In the case of an operator, these stimulations include questions or orders to conduct certain actions [24,25]. In the case of a technical facility, test stimulations are generally determined as control, load, or interference signal values.
The advantages of diagnosis by testing are its procedural transparency, the generally simple technical implementation of the test, and a relatively simple algorithm for diagnostic inference based on the received responses.
Please note that testing perfectly fits the definition of state supervision (monitoring) [26,27,28,29,30].
The technical implementation of test stimulations and the recording of responses is relatively easy when they take a binary, i.e., zero-one, form. This form of symptomatic signals (i.e., responses of the tested facility) is easily achieved by employing appropriate threshold systems.
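To picture this threshold principle, here is a minimal Python sketch (purely illustrative; the responses and the threshold value are assumed for the example, not taken from the paper):

```python
# Minimal sketch: converting analogue responses of a tested facility into
# binary (zero-one) symptoms with a threshold system. The threshold value
# below is a hypothetical example.

def binary_symptom(response: float, threshold: float) -> int:
    """Return 1 (positive symptom) if the response reaches the threshold,
    otherwise 0 (negative symptom)."""
    return 1 if response >= threshold else 0

# Responses of a tested facility to three test stimulations (illustrative).
responses = [4.8, 1.2, 3.9]
syndrome = [binary_symptom(r, threshold=3.0) for r in responses]
print(syndrome)  # [1, 0, 1] -- a simple state syndrome of the facility
```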
An example of an anthropotechnical pair is a control system user (SCADA system user) and a technical device in the form of an industrial controller that supervises the process variable values.
The demand for IT methods and procedures providing remote services has grown in recent years [31,32]. It should be noted that the importance of remote work has been increasing since the beginning of 2020, also due to the pandemic [33]. The proposed technical solutions, involving the threshold diagnosis of process variable states and of the knowledge exhibited by an operator of systems associated with I&C (distributed control systems, SCADA operator control and visualization systems [34,35]), are not difficult to transfer to the field of remote work. According to recent research, the vast majority of respondents (80%) consider remote test environments satisfactory. The difference between the pass rates of operator knowledge tests (the operator being the anthropogenic part of the anthropotechnical pair) taken remotely and on-site does not exceed a few percentage points [36]. The threshold testing method proposed by the authors, and its computer implementation, is useful not only for traditional operator tests; it can also (or even especially) be easily deployed in the form of computer tests (CBT, Computer-Based Testing) [37].
Among CBT techniques, in addition to the classic approach to testing, where questions are arranged in standard stacks, considerable interest is focused on computer adaptive testing (CAT) [38,39]. This, however, does not hinder the application of the proposed method to assess anthropotechnical pair test results. The employment of CAT mechanisms additionally increases diagnosis credibility through the use of individually selected questions, depending on the partial, stage results (diagnoses) of the operator knowledge tests. A separate issue improving testing credibility is the correct selection of question stacks. The source literature points to selecting an appropriate test question database, one that covers the tested knowledge domain or scope of skills as comprehensively as possible [40].
For completeness, one should also mention the alternative, complex methods that stretch far beyond the application of statistical quality control to testing an anthropotechnical pair (and thus beyond the procedures discussed in this article) and concern IRT (Item Response Theory) [41], such as the employment of generative techniques to develop wide sets of exercise tests and question-based tests. It should also be noted that the aforementioned methods of question selection are very expensive and resource-consuming. They are worth applying only when the cost of the training that follows the testing stage of a positively tested operator is significant [42].
In the event of applying threshold testing in distributed systems (both DCS and remote operator testing) [43], there is one important aspect from the perspective of diagnosis credibility. It concerns ensuring an appropriate safety and reliability of the communication process within a distributed testing system. The aforementioned safety aspect has been particularly stressed in [44,45]. The application of remote threshold testing, which utilizes a number of software and hardware solutions (e.g., testing applications, secure network traffic tunnelling, remote desktop, virtual machines) resembles—in terms of protections—issues associated with the Internet of Things (IoT), where the aspects related to securing the data transmitted between system stations are also important [46].

1.2. Education System in an Anthropotechnical Approach

The considerations below focus on the education system. The education process results in user (operator, SCADA system engineer; referred to as a trainee in the application) knowledge and skills. The education system can be interpreted as an anthropotechnical system with the structure demonstrated in Figure 3 [3,4].
Each anthropotechnical system (ATS) is developed in order to generate the required outcomes of material, power, or information nature. The requirements for the effect are usually formulated by the recipient ordering and accepting the effect.
The more often the system coordinator or examiner encounters a situation in which the ATS is unable to provide the required effect, the less confidence he/she has that proceeding with a successive task will be successful. This confidence level is expressed in values that characterize the reliability of a process, here the education process.
Generating the effect involves shaping the properties of a delivered, "raw" material (e.g., the training level of a SCADA system user-operator candidate). The achieved effect depends on the ATS state and on the initial values of the "provided" material. Let us assume that these properties exhibit large fluctuations while being hard to pre-diagnose. The effect-generating procedure is usually determined for a set of expected material properties and may prove defective at certain stages (fragments) of the generated effect. In this case, it is necessary to diagnose effect elements and reject (select out) elements failing to meet the requirements.
The education system is a very complex system. This case involves a set of examiners (teachers), education objects (trainees, here: SCADA system users), and educational tools (i.e., methods, curricula, teaching objects) that facilitate forwarding and consolidating knowledge and skills of educated people.
The achieved effect in the form of required trainee knowledge and skills also has a complex form. The overall effect of the education system is a set of knowledge and skills of all trainees.
Education reliability depends on teachers and the trained “material”, i.e., trainee motivation and preparation in the field of knowledge covered by the curricula of previous education stages in related areas.
It should be noted that examining results (i.e., knowledge and skill diagnosing) reveal:
  • reliability of obtaining an effect (i.e., knowledge and skills) with the required properties;
  • education system reliability.
To illustrate the diagnosis process properties in relation to the examination procedure in greater detail, let us recall the following.
The diagnosis process consists of a diagnostic test and diagnostic inference [3].
The diagnostic test, in this case, indicates generating questions (i.e., diagnostic stimulations) and recording formulated answers. This activity is of metrological nature.
Diagnostic inference is, as is well known, multi-level. The successive levels are measurement, syndromic, structural, and operational inference. Measurement inference indicates (in the case of an exam) classifying the obtained answers into correct (positive symptoms) and incorrect (negative symptoms).
A set of symptoms (a subset of correct and a subset of incorrect answers) constitutes the examined person's knowledge state syndrome. Syndromic inference involves, in this case, comparing the populations of these sets and deciding whether their ratio exceeds the adopted exam pass threshold (criterion). This inference level may, in principle, complete the diagnosis process. An inquisitive and diligent examiner often implements the subsequent inference stages, which are structural inference (in this case: identifying non-mastered fields) and operational inference (in this case: trainee prepared or unprepared for specific operational tasks).
Operational diagnosis is particularly important for an education system decision maker. It should constitute grounds for making decisions of systemic nature. Furthermore, the existence of opinions on the high reliability of diagnoses (specifically, the belief that received grades are consistent with the actual state of trainee’s knowledge) precludes hope of receiving an unjustified positive grade.
The effect (trainee knowledge and skills) recipient is the employing professional environment (e.g., I&C and maintenance departments). This recipient requires the trainees to exhibit performance and reliability related traits required to act as operators or administrators (decision makers, coordinators) of specific operating systems.
Of course, a complete knowledge of the topics conveyed during the teaching process is not required in practice. Questionnaire-based studies indicated that the required knowledge minimum amounts to approx. 60% of the taught knowledge. Therefore, it can be assumed that in most cases this knowledge level (expressed by the basic ability to solve professional problems) is deemed sufficient. This indicates that a diagnosis (in the form of an exam) expressed as "passed" (adequate knowledge) or "failed" (inadequate knowledge) is enough. This issue was already addressed in the 1970s by the author of the foundations of technical diagnostics in Poland and the precursor of examination process credibility assessment, Professor Lesław Będkowski [47]. Employing the methods proposed at that time and supplementing them with a contemporary computational application engine and a practical implementation of the testing application, we achieve a complete solution.
However, in practice, a numerical assessment often plays the leading role (e.g., when assigning a job task), as it indicates knowledge completeness in greater detail. This is why numerical assessments are usually employed within the teaching process: they define a student's knowledge level more precisely.
There is—in this context—a question “What is this reliability (or credibility, in other words) of detailed (i.e., numerical) assessments?”.
As we know, a full diagnosis of a trainee's/student's knowledge (i.e., the diagnosis of all knowledge elements associated with even a single topic) would require diagnosing a great number of these elements. Due to time-related and other restrictions, studying this number of elements is impossible.
However, let us note that this diagnosis-related issue indicates great similarity to the issue of diagnosing the state of a set of technical elements [26,27,28]. If the entire knowledge required from a trainee is divided into relatively poorly correlated elements, the method of “statistical quality control” known in technical practice can be employed. Of course, it would be subject to respective modifications [48,49]. It enables determining reliability indices in the form of the credibility of formulated grades. This issue, under the name of “statistical quality control”, was studied by K. Wiśniewski and E. Fidelis many years ago. We can read the following: “Based on diagnosing all elements of a randomly selected sample (assuming complete confidence of all elementary diagnoses), the number of unfit elements is determined. This constitutes a so-called sample defectiveness, which is considered to be the estimation of the defectiveness of the entire batch of diagnosed elements. This leads to estimating the number of unfit elements (i.e., failing to satisfy the requirements) within the entire batch. This statistical diagnosis results in uncertainty”. Therefore, it is a method based on well-proven assumptions, developed without the use of IT techniques, and still applicable and gaining new meaning in the context of employing ICT solutions. The work [48] refers to the foundations of this issue, reaching the second half of the twentieth century.

2. Operator Knowledge Reliability Assessment Method

2.1. Exam as a Knowledge Level Diagnosis Form

An approximate model of a system implementing an examination process is illustrated by Figure 4. It is a case where a user is tested in a traditional way. The examiner is responsible for most testing process elements. It should be noted that both selecting question sets and an answer template are the responsibility of the examiner. The user’s final grade is impacted by the subjectivity of question selection, grade template, and unintentional errors (mistakes) in grading.
The position of certain decision blocks has been modified in order to integrate the model from Figure 4 with the SCADA system user testing system. Please note that elements previously burdening the examiner have been moved to the “testing application” block in the presented model (Figure 5) that takes into account the use of a testing application within the examination process. This enables:
  • obtaining a universal generator of questions (diagnostic stimulations) that allows generating a certain number of questions from a pool, selected or drawn depending on the settings (a minimal sketch of the drawing option follows this list);
  • automating and objectifying labor-intensive stages of the examiner’s work (comparing answers and threshold grade generation).
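A minimal sketch of the question-drawing option mentioned in the first item above; the question-pool structure and field names are hypothetical, since the actual application stores its questions in a configurable source file (see Section 3):

```python
import random

# Hypothetical question pool; the real application keeps questions in a
# configurable source file selected by the examiner (see Section 3).
question_pool = [
    {"id": 1, "text": "What does SCADA stand for?"},
    {"id": 2, "text": "Which station runs the visualization suite?"},
    {"id": 3, "text": "What is the role of a process station?"},
    {"id": 4, "text": "How are DCS elements interconnected?"},
]

def draw_questions(pool, n, randomize=True, seed=None):
    """Return n questions: drawn at random, or the first n when a fixed
    selection is configured (mirroring the 'selected or drawn' setting)."""
    if randomize:
        return random.Random(seed).sample(pool, n)
    return pool[:n]

test_set = draw_questions(question_pool, n=2, seed=42)
print([q["id"] for q in test_set])
```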
The issue related to the reliability of an examination process as a process for diagnosing the knowledge and skill level of an ATS operator has been discussed in [50].
Let us consider trainee (here: SCADA operator) knowledge and examination process elements in technical terms.
It can be assumed (in simplification) that the knowledge exhibited by the trainee is a system consisting of a set of information (S1) and its processing system (S2) (Figure 6). This system provides responses (R) to questions asked (P). Stimulations (P) are—usually—in the form of problems to be solved, tests to complete, etc. Responses (R)—which are the answers of the examined person—are usually theoretical and mathematical descriptions, calculation results, explanations, etc.
Please note that the knowledge system functions in the presence of interference (Z), which includes, e.g., the mental stress of the examined person and the unfavorable impact of other examination system and surrounding elements. Various knowledge structure models can be considered, e.g., a serial or a totalizing structure. Let us adopt the issue of diagnosing knowledge of a totalizing, threshold nature as the basis for further discussion (Figure 7).
The totalizing structure consists of a set of thematic stacks (autonomous knowledge fragments), made up of elements whose responses add up. An answer is correct only when the total of the responses from the elements of a stimulated stack exceeds an adopted threshold. It is possible to determine numerous thresholds, which enables a multi-value classification of responses (answers). Real knowledge structures are considerably more complex and similar to the combined structure model [15]. Stacks in this structure can be of a fuzzy nature, and each stimulation may induce several different responses. To obtain complete information on the knowledge level exhibited by a diagnosed object, it is necessary to study all stacks, i.e., apply as many stimulations and receive as many responses as there are stacks. If the area of diagnosed knowledge (and thus the number of stacks) is large, it may turn out that the number of required stimulations and graded responses is so large that studying all stacks is out of the question. In most cases, knowledge diagnosis is based on testing a certain number of stacks, which constitutes, in fact, examining a knowledge sample.
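The totalizing-threshold rule described above can be sketched as follows; the element responses and threshold values are illustrative assumptions only:

```python
# Totalizing-threshold stack model: an answer is correct only when the summed
# responses of a stimulated stack's elements exceed an adopted threshold.
# Several thresholds enable a multi-value classification of answers.

def classify_stack(element_responses, thresholds):
    """Sum the element responses and return how many of the (sorted)
    thresholds the total exceeds (0 = below all thresholds)."""
    total = sum(element_responses)
    level = 0
    for t in sorted(thresholds):
        if total > t:
            level += 1
    return level

# Illustrative stack of 4 elements and two thresholds (multi-value case).
responses = [0.9, 0.4, 0.7, 0.2]             # partial responses add up to 2.2
print(classify_stack(responses, [1.0, 2.0]))  # -> 2, both thresholds exceeded
```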
Let us adopt the following, simplified assumptions:
  • stack states are independent of each other;
  • all stacks are equally important;
  • stacks were developed using the same “technology” and under the same conditions.
With these assumptions, it is easy to observe a certain analogy between producing a batch of products and developing a trainee's knowledge system. It is justified to refer to the methods employed in statistical quality control. This requires adopting a certain grading rule. It is customary to arbitrarily determine certain quantitative knowledge mastery levels (i.e., threshold values) Ω0, Ω1, …, ΩJ, such that (1):

$$\Omega_0 < \Omega_1 < \cdots < \Omega_J,$$ (1)

where: J—knowledge class set population (usually also indicates the grade set population).
These levels divide the knowledge area into quantitative grade classes. The knowledge mastery level implies belonging to a class determined by the exam grade (2):

$$\Omega_{j-1} \le \Omega < \Omega_j \;\Rightarrow\; j\text{-th class} \;\Rightarrow\; j\text{-th grade}; \quad j = 1, 2, \ldots, J,$$ (2)

where: j—assessment number, Ω—examined person's knowledge level.
Different systems (levels and populations) of knowledge threshold values and grade names are encountered in teaching. Figure 8 shows selected examples of threshold values for 2-, 4-, and 7-value knowledge level classifications.
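A minimal sketch of this interval classification rule; the 2-value (pass/fail) threshold of 60% echoes the approximate knowledge minimum quoted in Section 1.2 and is used here as an assumed example (Figure 8 presents the actual threshold systems only graphically):

```python
import bisect

def grade_class(omega, thresholds):
    """Map a knowledge level omega in [0, 1] to a grade class j = 1..J,
    given the interior thresholds Omega_1..Omega_{J-1} in ascending order
    (with Omega_0 = 0 and Omega_J = 1 implied)."""
    return bisect.bisect_right(thresholds, omega) + 1

# Illustrative 2-value (pass/fail) classification with a 60% threshold.
print(grade_class(0.55, [0.6]))  # -> 1 (fail)
print(grade_class(0.72, [0.6]))  # -> 2 (pass)
```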
Let us assume that each response will be considered a fundamental sample containing k elements. The state of each element within this sample is assessed bivalently (0 or 1); therefore, each response is graded with 0 to k points. The total Q of the points for all graded responses is the combined point score. Of course, the number Q determines the knowledge level in terms of the tested stacks. If we award the examined person an Sj grade on this basis, we can ask what the probability is that the entire knowledge mastery level falls within the range [Ωj−1, Ωj), i.e., what the probability is that a trainee deserves an Sj grade upon receiving Q points out of a maximum (i.e., possible) number of points Qmax. Specifically, we ask about the probability of the assessment accuracy, which is the diagnosis reliability (3):
$$P(\Omega_{j-1} \le \Omega < \Omega_j \mid Q, Q_{\max}) = P(S_j \mid Q, Q_{\max}),$$ (3)
The probability of an examined person deserving an Sj grade can be determined using the expressions (4) and (5):
  • for a continuous knowledge distribution:
$$P(S_j \mid Q, Q_{\max}) = \frac{\int_{\Omega_{j-1}}^{\Omega_j} f(\Omega)\,(1-\Omega)^{Q_{\max}-Q}\,\Omega^{Q}\,\mathrm{d}\Omega}{\int_{0}^{1} f(\Omega)\,(1-\Omega)^{Q_{\max}-Q}\,\Omega^{Q}\,\mathrm{d}\Omega},$$ (4)
  • for a discrete knowledge distribution:
$$P(S_j \mid Q, Q_{\max}) = \frac{\sum_{\Omega_{j-1} \le \Omega < \Omega_j} P(\Omega)\,(1-\Omega)^{Q_{\max}-Q}\,\Omega^{Q}}{\sum_{0 \le \Omega \le 1} P(\Omega)\,(1-\Omega)^{Q_{\max}-Q}\,\Omega^{Q}},$$ (5)
It should be emphasized that the expressions (4) and (5) are analogues of the expressions employed within statistical quality control for testing the defectiveness of the total population, based on determined sample defectiveness.
Based on expression (4) or (5), the examined person is awarded the grade Sj that is the most probable, i.e., (6):

$$S_j\!: \quad P(S_j \mid Q, Q_{\max}) = \max_{i = 1, \ldots, J} P(S_i \mid Q, Q_{\max}).$$ (6)
As can be easily observed, grade reliability or the examination result credibility depends on:
  • maximum number of grade points;
  • grading principle, i.e., above all, on the system of trainee knowledge grading interval boundaries.
The graphs in Figure 9 illustrate sample results of applying the aforementioned relationships. These graphs show grade credibility functions for the case of a 7-value knowledge level classification and 5 examination questions that enable obtaining a maximum number of grade points of Qmax = 25, upon an adopted uniform knowledge distribution f(Ω).
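Readers wishing to reproduce curves of this kind can start from the numerical sketch below, which evaluates expression (4) under a uniform knowledge distribution f(Ω) and applies rule (6); the interior thresholds of the 7-value classification are assumptions of this illustration, since the paper provides them only graphically (Figure 8):

```python
import numpy as np

def grade_credibility(q, q_max, thresholds, n_grid=2001):
    """Evaluate expression (4) numerically: P(S_j | Q, Q_max) for every grade
    interval [Omega_{j-1}, Omega_j), assuming a uniform prior f(Omega) = 1."""
    omega = np.linspace(0.0, 1.0, n_grid)
    # Integrand of (4): posterior weight of each knowledge level after
    # scoring Q out of Q_max points; normalizing realizes the denominator.
    weights = omega**q * (1.0 - omega) ** (q_max - q)
    weights /= weights.sum()
    # Summing the weights inside each grade interval realizes the numerator.
    bins = np.digitize(omega, thresholds)
    return np.bincount(bins, weights=weights, minlength=len(thresholds) + 1)

# Illustrative 7-value classification (grades 2.0 ... 6.0) with Q_max = 25,
# as in Figure 9; the interior thresholds are assumed, not the paper's.
thresholds = [0.50, 0.60, 0.70, 0.80, 0.90, 0.95]
grades = [2.0, 3.0, 3.5, 4.0, 4.5, 5.0, 6.0]
p = grade_credibility(q=19, q_max=25, thresholds=thresholds)
j = int(np.argmax(p))  # rule (6): award the most probable grade
print(f"grade {grades[j]} with credibility {p[j]:.2f}")
```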
As is evident from the grade credibility function graphs in Figure 9, the grade credibility (and thus the diagnosis reliability) in the case of a "classic" examination is relatively low and strongly related to the examiner's grading reliability (i.e., his/her subjective grade template). To counteract this, it is worth applying test forms of knowledge diagnosis and an objective test answer grading template.

2.2. Test as a Knowledge Level Diagnosis Form

This form involves substituting subjective grades (i.e., diagnoses) of obtained answers to descriptive questions with objective grades of obtained answers to test questions. It provides an objective diagnosis-assessment of each answer and facilitates increasing the number of questions asked, namely, increasing the population of the tested knowledge sample.
Therefore, the reliability of diagnoses based on the test form increases due to:
  • eliminating subjective answer grading;
  • increasing the population of the tested knowledge element sample.
However, please note that the test form hinders assessing a student's ability to associate knowledge elements. This disadvantageous feature of the test form can be mitigated through specific question selection. The answer-scoring principle also requires appropriate preparation, so that it discourages the random selection of answers.
Sample results of applying the test form of knowledge diagnosis are illustrated in Figure 10 and Figure 11. These graphs show grade credibility functions for the cases of 2-value and 7-value knowledge level classifications and a maximum number of grade points of Qmax = 100, upon an adopted uniform knowledge distribution f(Ω).
As can be easily observed, diagnosis (i.e., examination result (6)) credibility depends on the:
  • maximum number of grade points;
  • grading principle, i.e., above all, on the grading interval boundary system applied to assess a diagnosed operator—e.g., of the SCADA system.

2.3. Diagnosis (Grade) Credibility

To characterize the diagnostic inference method more clearly, let us also introduce the concept of the credibility index (CI).
The diagnostic method credibility index constitutes the credibility averaged over the number of points Q, and is written as (7):
$$CI = \sum_{Q=0}^{Q_{\max}} P(Q)\, P(S_j \mid Q, Q_{\max}),$$ (7)
where: P(Q)—probability of receiving Q points.
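Continuing the sketch above, expression (7) can be evaluated as follows; taking P(Q) = 1/(Qmax + 1), the score distribution induced by the same uniform knowledge distribution, is an assumption of this illustration rather than a formula from the paper:

```python
def credibility_index(q_max, thresholds):
    """Expression (7): grade credibility averaged over the point score Q.
    Under a uniform knowledge distribution every score is equally likely,
    hence P(Q) = 1/(Q_max + 1) (an assumption of this illustration)."""
    ci = 0.0
    for q in range(q_max + 1):
        # Credibility P(S_j | Q, Q_max) of the grade awarded by rule (6),
        # reusing grade_credibility() from the previous sketch.
        ci += grade_credibility(q, q_max, thresholds).max() / (q_max + 1)
    return ci

# The index grows with the maximum number of grade points (cf. Figure 12).
for q_max in (25, 100):
    print(q_max, round(credibility_index(q_max, thresholds), 3))
```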
Figure 12 shows the dependence of this index on the maximum number of points adopted within the grading principle for different grade levels (i.e., various grade accuracies), while Figure 13 illustrates the dependence of this index on the grade set population for a different maximum number of grade points.

3. Application for Testing a SCADA System Operator Knowledge

The engineering product quality testing method presented in this article, applied within the process of assessing the education quality of a SCADA system user, is employed in the knowledge testing application. The application testing a SCADA system user (operator) is a practical tool for testing skills and knowledge. It should again be emphasized that it has been developed in the graphical environment of the SCADA InTouch suite [10,11], which the operator uses in everyday work. This is a crucial aspect, since a tested user (operator) is not required to become familiar with an additional testing suite. He/she takes the test using the SCADA system visualization suite employed during professional work as part of the control supervision process at the operator station. Therefore, in the case of an experienced user (designer, engineer), the trainee may focus on the process of diagnosing his/her own knowledge. On the other hand, in the case of diagnosing a different user (e.g., a system operator candidate), the trainee's fluency in handling the SCADA suite is an additional hint for the examiner that proves the mastery of practical skills.
The application has a modular structure (Figure 14). This means that, depending on the user type, it can operate in the examiner and/or trainee mode. To distinguish between the users, they must be authenticated using an ID and password. The trainee ID and password are set individually for each station. Windows to be displayed are customized from the SCADA system level and additionally (for improved security) via an introduced flag system. The test taker is identified by first name, family name, work number, and position.
The user interface structure enables bringing up successive visualization windows, depending on the selection made in the previous step.
Logging in to the module enables the examiner to perform two types of administrative actions (Figure 15):
  • configuring test parameters directly prior to starting the test;
  • managing the question database at the preliminary test configuration stage.
When operating the testing application, users encounter two types of activities: operational and performance-related. Both start with logging in (registration and user authentication) to the application.
Operational activities are conducted from the administrator-examiner level. The set of operational activities implemented from the examiner module is marked in blue in Figure 16. These include the operational activities listed in the Figure 15 diagram, along with the final approval, which is the direct preparation for launching the proper test that verifies the knowledge of a SCADA system operator. Performance-related activities, in turn, are conducted by the trainee. These are the steps required to take the test: confirming readiness to take the test, providing answers to subsequent questions, and learning the result shown after completing the test (shown against a yellow background in Figure 16).
A window for the trainee and examiner modules is displayed in order to enable setting test parameters (Figure 17). Virtual buttons selected out of the ones visible in the “settings” window are available for each of these two user types. Figure 17 shows an examiner module window with the option to edit all parameters available in the synoptic screen.
It enables single- or multi-station application functioning. Certain preparatory actions are required in the case of both single-station and multi-station work. These include, among others:
  • Initial configuration of the question database. The configuration stage enables adding an explanatory drawing to the questions (Figure 18);
  • Initial application configuration involving setting the operation mode (single- or multi-station);
  • Setting, among others, the number of stations participating in the test, user data, and IDs, which is possible from the master station (examiner) level in the case of multi-station operation;
  • Setting the number of test questions, activity duration, source location, and question file;
  • Configuration of the client applications connecting with the examiner application through the indication of, among others, the appropriate communication topic. The topic is a system setting that enables defining the operator station communication method;
  • Entering trainee (i.e., tested operator) data;
  • Starting the test (permission) from the master station level or in single-station mode.
The process of diagnosing a SCADA system operator-trainee is as follows:
  • The test is launched at the trainee station;
  • The trainee takes the test through marking appropriate answers to subsequent questions, until they are exhausted or the set time elapses;
  • The test ends automatically after the set time elapses, or it can be ended manually at an earlier time. Due to the problem nature of certain questions, it is possible, as decided beforehand, not to set a maximum time for solving the test. This enables a tested trainee to focus on the subject-related aspects of the questions, without the stress related to the time factor. Therefore, the test results seem more reliable;
  • This is followed by the automatic saving of the test results, depending on the selected option, to an archiving spreadsheet file (example in Figure 19) or a text file. Depending on the initial settings, this can be a separate file for each trainee or a collective, common file for a certain group (a sketch of this step follows this list).
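Since the application itself is built in the SCADA InTouch environment, its archiving code cannot be quoted here; the following Python sketch merely illustrates the two saving options (a per-trainee file vs. a collective group file) with hypothetical field names:

```python
import csv
from datetime import datetime
from pathlib import Path

def archive_result(trainee, score, q_max, out_dir="results", collective=None):
    """Append a test result to a per-trainee CSV file, or to a collective
    group file when 'collective' names one (mirroring the two options)."""
    out = Path(out_dir)
    out.mkdir(exist_ok=True)
    name = collective or f"{trainee['work_number']}_{trainee['family_name']}"
    path = out / f"{name}.csv"
    is_new = not path.exists()
    with path.open("a", newline="", encoding="utf-8") as f:
        writer = csv.writer(f)
        if is_new:  # write the header once, when the file is created
            writer.writerow(["timestamp", "first_name", "family_name",
                             "work_number", "position", "score", "q_max"])
        writer.writerow([datetime.now().isoformat(timespec="seconds"),
                         trainee["first_name"], trainee["family_name"],
                         trainee["work_number"], trainee["position"],
                         score, q_max])

# Hypothetical trainee record and result.
archive_result({"first_name": "Jan", "family_name": "Kowalski",
                "work_number": "1234", "position": "operator"},
               score=19, q_max=25)
```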

4. Conclusions

Based on the considerations herein, related to analyzing the reliability of the SCADA system operator education effect, it is possible to formulate the following conclusions and observations:
  • Increasing the maximum number of points Qmax that can be awarded to an examined person for the provided answers increases the knowledge state diagnosis credibility (in accordance with statistical regularities);
  • Increasing the number of grades (i.e., the number of classification intervals) increases the knowledge state classification accuracy but reduces the grading credibility;
  • Classic examination is always burdened with a significant error, primarily resulting from the examiner's subjectivity;
  • Reducing assessment subjectivity requires employing a test form with a pool of "closed" questions and answers;
  • A testing application that employs an objective answer template eliminates potential grading errors and the rather tedious process of counting grade points;
  • In addition, a SCADA system operator candidate knowledge testing application developed in an actual visualization environment enables practicing SCADA suite operational skills;
  • The proposed technical solution involving the testing application can also be utilized in remote testing based on ICT, which is currently an undoubted advantage (however, additional methods for protecting against aided answer selection should not be forgotten);
  • The depth and detail of the SCADA system operator-trainee knowledge level diagnosis depend on the inquisitiveness of the question set used;
  • The impact of answer randomness can be mitigated by adopting appropriate grading principles that discourage such practices.
The engineering method of diagnosing the level of knowledge and skills of the participants of the education process proposed in the article has been successfully verified on the example of SCADA operators.
The positive reception of the results of the testing process by the operator staff proves that it is expedient and possible to develop similar applications for other devices or systems, for example, computer systems, diagnostic systems, security systems.
The research contribution of the author’s team to the development of the article includes the research idea, the concept of practical research implementation, and testing the method on a set of several hundred students—SCADA operators.

Author Contributions

Conceptualization, T.D. and M.B.; methodology, T.D. and M.B.; software, T.D. and M.B.; validation, M.B., T.D. and A.R.; formal analysis, M.B., T.D. and A.R.; investigation, M.B., T.D., W.O. and A.R.; resources, M.B., T.D. and A.R.; data curation, M.B., T.D., A.R. and W.O.; writing original draft preparation, M.B., T.D. and A.R.; writing review and editing, M.B., T.D. and A.R.; visualization, M.B. and T.D.; supervision, M.B., T.D. and A.R.; project administration, M.B., T.D. and A.R. All authors have read and agreed to the published version of the manuscript.

Funding

This work was co-financed by Military University of Technology under research project UGB 865. This paper was co-financed under the research grant of the Warsaw University of Technology supporting the scientific activity in the discipline of Civil Engineering and Transport.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The data presented in this study are available on request from the corresponding author.

Conflicts of Interest

The authors declare no conflict of interest.

Abbreviations

SCADA: Supervisory Control and Data Acquisition
DCS: Distributed Control System
ATS: Anthropotechnical System
CBT: Computer-Based Testing
ICT: Information and Communication Technologies
CAT: Computer Adaptive Testing
IRT: Item Response Theory

References

  1. Mokhatab, S.; Mak, Y.J.; Poe, W.A. Process Modeling and Simulation of Gas Processing Plants. In Handbook of Natural Gas Transmission and Processing, 4th ed.; Elsevier, Gulf Professional Publishing: Burlington, VT, USA, 2019; pp. 579–614.
  2. Bolton, W. Instrumentation and Control Systems, 2nd ed.; Elsevier Newnes: Waltham, MA, USA, 2015.
  3. Dąbrowski, T. Diagnozowanie Systemów Antropotechnicznych w Ujęciu Potencjałowo-Efektowym [Diagnosing Anthropotechnical Systems in a Potential-Effect Approach]. Licentiate Thesis, Wyd. WAT, Warsaw, Poland, 2001. (In Polish)
  4. Koirala, B.; Hakvoort, R. Integrated Community-Based Energy Systems: Aligning Technology, Incentives, and Regulations. In Innovation and Disruption at the Grid's Edge; Sioshansi, F.P., Ed.; Elsevier: Amsterdam, The Netherlands; Academic Press: London, UK, 2017; pp. 363–387.
  5. Będkowski, L.; Dąbrowski, T. Basics of Maintenance, Vol. II: Basics of Operational Reliability; Wyd. WAT: Warsaw, Poland, 2006.
  6. Bednarek, M.; Dąbrowski, T. Selected tools increasing human reliability in the antropotechnical system. J. KONBiN 2020, 50, 243–264.
  7. Tîrziu, A.-M.; Vrabie, C. Education 2.0: E-Learning Methods. Procedia Soc. Behav. Sci. 2015, 186, 376–380.
  8. Electronic Technical Documentation of the Freelance Package. 2019. Available online: https://new.abb.com/control-systems/essential-automation/freelance (accessed on 7 January 2022).
  9. Electronic Technical Documentation of the iFix Package. 2022. Available online: https://www.ge.com/digital/documentation/ifix/version2022 (accessed on 7 January 2022).
  10. Electronic Technical Documentation of the Wonderware InTouch Package. 2017. Available online: https://www.aveva.com/en/products/intouch-hmi (accessed on 7 January 2022).
  11. Reddi, B. Free InTouch SCADA Tutorial Course for Beginners. Available online: https://instrumentationtools.com/intouch-SCADA-training-course/ (accessed on 10 May 2022).
  12. Siergiejczyk, M.; Rosiński, A. The concept of monitoring a teletransmission track of the highway emergency. Diagnostyka 2015, 16, 49–54.
  13. Perlicki, K.; Siergiejczyk, M. Measurement procedures for railway telecommunications network based on synchronous digital hierarchy and wavelength division multiplexing transmission techniques. Przegląd Elektrotechniczny 2013, 9, 148–151.
  14. Sumiła, M. Selected aspects of message transmission management in ITS systems. In Telematics in the Transport Environment; Mikulski, J., Ed.; Springer-Verlag: Berlin/Heidelberg, Germany, 2012; Volume 329.
  15. Bednarek, M.; Dąbrowski, T. Communication security in the distributed control system. Przegląd Elektrotechniczny 2013, 89, 72–74.
  16. Bednarek, M.; Dąbrowski, T.; Wiśnios, M. Supervision of the state in the industrial control system. Biuletyn WAT 2013, 4, 145–154.
  17. Bednarek, M.; Dąbrowski, T. Selected aspects of diagnosing communication in industrial networks. Przegląd Elektrotechniczny 2019, 95, 166–169.
  18. Duer, S.; Duer, R. Diagnostic system with an artificial neural network which determines a diagnostic information for the servicing of a reparable technical object. Neural Comput. Appl. 2010, 19, 755–766.
  19. Duer, S. Diagnostic system with an artificial neural network in diagnostics of an analogue technical object. Neural Comput. Appl. 2010, 19, 55–60.
  20. Duer, S.; Scaticailov, S.; Pas, J.; Duer, R.; Bernatowicz, D. Taking decisions in the diagnostic intelligent systems on the basis information from an artificial neural network. MATEC Web Conf. 2018, 178, 07003.
  21. Jacyna, M.; Szczepański, E.; Izdebski, M.; Jasiński, S.; Maciejewski, M. Characteristics of event recorders in Automatic Train Control systems. Arch. Transp. 2018, 46, 61–70.
  22. Duer, S. Examination of the reliability of a technical object after its regeneration in a maintenance system with an artificial neural network. Neural Comput. Appl. 2012, 21, 523–534.
  23. Stawowy, M.; Duer, S.; Paś, J.; Wawrzyński, W. Determining Information Quality in ICT Systems. Energies 2021, 14, 5549.
  24. Choromański, W.; Grabarek, I.; Kozłowski, M. Integrated Design of a Custom Steering System in Cars and Verification of Its Correct Functioning. Energies 2021, 14, 6740.
  25. Bęczkowska, S.A.; Grabarek, I. The Importance of the Human Factor in Safety for the Transport of Dangerous Goods. Int. J. Environ. Res. Public Health 2021, 18, 7525.
  26. Klimczak, T.; Paś, J. Basics of Exploitation of Fire Alarm Systems in Transport Facilities; Military University of Technology: Warsaw, Poland, 2020.
  27. Stawowy, M.; Rosiński, A.; Paś, J.; Klimczak, T. Method of Estimating Uncertainty as a Way to Evaluate Continuity Quality of Power Supply in Hospital Devices. Energies 2021, 14, 486.
  28. Paś, J.; Klimczak, T.; Rosiński, A.; Stawowy, M. The Analysis of the Operational Process of a Complex Fire Alarm System Used in Transport Facilities. Build. Simul. 2022, 15, 4.
  29. Becerra, M.A.; Tobón, C.; Castro-Ospina, A.E.; Peluffo-Ordóñez, D.H. Information Quality Assessment for Data Fusion Systems. Data 2021, 6, 60.
  30. Kaniewski, P.; Gil, R.; Konatowski, S. Estimation of UAV position with use of smoothing algorithms. Metrol. Meas. Syst. 2017, 24, 127–142.
  31. Pyza, D.; Jachimowski, R.; Jacyna-Gołda, I.; Lewczuk, K. Performance of Equipment and Means of Internal Transport and Efficiency of Implementation of Warehouse Processes. Procedia Eng. 2017, 187, 706–711.
  32. Zieja, M.; Szelmanowski, A.; Pazur, A.; Kowalczyk, G. Computer Life-Cycle Management System for Avionics Software as a Tool for Supporting the Sustainable Development of Air Transport. Sustainability 2021, 13, 1547.
  33. Oladele, J.I.; Ndlovu, M. A Review of Standardised Assessment Development Procedure and Algorithms for Computer Adaptive Testing: Applications and Relevance for Fourth Industrial Revolution. Int. J. Learn. Teach. Educ. Res. 2021, 20, 1–17.
  34. Ackerman, P. Industrial Cybersecurity, 2nd ed.: Efficiently Monitor the Cybersecurity Posture of Your ICS Environment; Packt Publishing: Birmingham, UK; Mumbai, India, 2021.
  35. Flaus, J.-M. Cybersecurity of Industrial Systems; Wiley & Sons: London, UK; Hoboken, NJ, USA, 2019.
  36. Jaap, A.; Dewar, A.; Duncan, C.; Fairhurst, K.; Hope, D.; Kluth, D. Effect of remote online exam delivery on student experience and performance in applied knowledge tests. BMC Med. Educ. 2021, 21, 86.
  37. Coyne, I.; Bartram, D. Design and Development of the ITC Guidelines on Computer-Based and Internet-Delivered Testing. Int. J. Test. 2006, 6, 133–142.
  38. Özalp-Yaman, Ş.; Çağıltay, N. Paper-based versus computer-based testing in engineering education. In Proceedings of the IEEE EDUCON 2010 Conference, Madrid, Spain, 14–16 April 2010; pp. 1631–1637.
  39. Danieliene, R.; Telesius, E. Analysis of Computer-Based Testing Systems. In Proceedings of the 2008 Conference on Human System Interactions (IEEE), Krakow, Poland, 25–27 May 2008; pp. 960–964.
  40. Goeters, K.-M.; Lorenz, B. On the implementation of item-generation principles for the design of aptitude testing in aviation. In Item Generation for Test Development; Irvine, S.H., Kyllonen, P., Eds.; Routledge: London, UK, 2015.
  41. Bartram, D.; Hambleton, R. Computer-Based Testing and the Internet: Issues and Advances; John Wiley & Sons: Chichester, UK, 2006.
  42. Bartram, D. The Micropat pilot selection battery: Applications of generative techniques for item-based and task-based tests. In Item Generation for Test Development; Irvine, S.H., Kyllonen, P., Eds.; Routledge: London, UK, 2015.
  43. De Bleecker, I.; Okoroji, R. Remote Usability Testing, 1st ed.; Packt Publishing Ltd.: Birmingham, UK, 2018.
  44. Gamage, K.A.A.; Silva, E.K.d.; Gunawardhana, N. Online Delivery and Assessment during COVID-19: Safeguarding Academic Integrity. Educ. Sci. 2020, 10, 301.
  45. Jacques, S.; Ouahabi, A.; Lequeu, T. Remote Knowledge Acquisition and Assessment During the COVID-19 Pandemic. Int. J. Eng. Pedagog. 2020, 10, 120–138.
  46. Macaulay, T. RIoT Control: Understanding and Managing Risks and the Internet of Things; Morgan Kaufmann: Cambridge, UK, 2017.
  47. Dąbrowski, T. Wspomnienie. Płk. prof. dr hab. inż. Lesław Będkowski (25.10.1928–1.1.2011) [In Memoriam: Col. Prof. Lesław Będkowski]. Głos Akademicki 2011, 1, 10–11. Available online: https://promocja.wat.edu.pl/Glos_Akademicki/Glos_PDF/2011/ga175.pdf (accessed on 10 May 2022). (In Polish)
  48. Gomes, M.I. Statistical Quality Control. In International Encyclopaedia of Statistical Science; Lovric, M., Ed.; Springer: Berlin/Heidelberg, Germany, 2011.
  49. Montgomery, D.C. Introduction to Statistical Quality Control, 6th ed.; John Wiley & Sons: Jefferson City, MO, USA, 2009.
  50. Dąbrowski, T.; Będkowski, L.; Bednarek, M. Statistical Diagnosing of an Effect Produced in Human Engineering System. Biuletyn WAT 2009, 223–238.
Figure 1. SCADA system operational structure.
Figure 2. General structure of an anthropotechnical system.
Figure 3. Anthropotechnical structure of the education system.
Figure 4. Illustration of the examination-testing system and process.
Figure 5. Illustration of an examination and testing system and process modified using the application.
Figure 6. Trainee knowledge system model.
Figure 7. Knowledge structure models.
Figure 8. Knowledge level classification examples, where: 2—two-value grade set, 4—four-value grade set, 7—seven-value grade set.
Figure 9. Grade credibility functions for a 7-value classification, where waveforms marked as: 2.0, 3.0, 3.5, 4.0, 4.5, 5.0, and 6.0 are credibility functions of respective grades.
Figure 10. Grade credibility functions in the case of a 2-interval classification (i.e., pass/fail).
Figure 11. Grade credibility functions in the case of a 7-interval classification (i.e., 2, 3, 3.5, 4, 4.5, 5, 6).
Figure 12. The dependence of the credibility index on the maximum number of grade points, where: 2—two-value grade set, 4—four-value grade set, 7—seven-value grade set.
Figure 13. Dependence of the credibility index on the grade accuracy (i.e., number of grading levels).
Figure 14. Modular structure of a system operator testing application.
Figure 15. Configuration—options available from the examiner (system administrator) level.
Figure 16. Operational and performance-related activities of the SCADA system operator testing application operation.
Figure 17. Examiner module—settings window.
Figure 18. Examiner module—settings window.
Figure 19. Sample result file.
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
