A risk is the effect of uncertainty on certain objectives. These can be business objectives or project objectives. A more complete definition of risk would therefore be “an uncertainty that, if it occurs, could affect one or more objectives”. Objectives are what matter!
This recognizes the fact that there are other uncertainties that are irrelevant in terms of objectives, and that these should be excluded from the risk process. With no objectives, we have no risks.
Linking risk with objectives makes it clear that every facet of life is risky. Everything we do aims to achieve objectives of some sort including personal objectives, project objectives, and business objectives. Wherever objectives are defined, there will be risks to their successful achievement.
The above online references gave me the impression that risk is something objective, yet it is perceived subjectively. Without measures of the risks, it does not make any sense to discuss them further. I hope that my examples below will explain my view and will raise more things to think about.
4.2. Other Visions on the Risk and Numeric Measures
Risk is a complex concept that was encountered in old times mainly in games of chance. Then (and even nowadays), it was measured in terms of “odds”.
A true definition of the odds is the ratio of the probability that a random event A will happen under the conditions of an experiment to the probability that another event B will occur. So,

odds(A : B) = P(A) / P(B).

This ratio should be read as a ratio of two integers; therefore, these probabilities are usually multiplied by 10, or 100, or even by a thousand (and then the fractional parts are removed) for easy understanding by users not familiar with probability. The best example is the case of classic probability: if A has m favorable outcomes among n equally likely ones and B is the complement of A, the odds are simply m : (n − m).
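The scaling of odds into a ratio of small integers can be sketched as follows; this is a minimal illustration, assuming the classic example of a fair six-sided die (the function name `odds` is my own):

```python
from fractions import Fraction

def odds(p_a: Fraction, p_b: Fraction) -> tuple[int, int]:
    """Express the odds P(A)/P(B) as a ratio of two integers in lowest terms."""
    r = p_a / p_b
    return r.numerator, r.denominator

# Classic probability: rolling a six with a fair die (event A)
# versus not rolling a six (event B, the complement of A).
print(odds(Fraction(1, 6), Fraction(5, 6)))  # (1, 5), i.e., odds of 1 : 5
```

Working with exact fractions avoids rounding and directly yields the integer ratio that odds are usually quoted in.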
According to the web information at
https://www.bing.com/search?form=MOZLBR&pc=MOZD&q=odds or
https://www.thefreedictionary.com/odds (accessed on 17 May 2021), odds provide preliminary information to bidders in games about their chances to win, once the experiment is performed, if the player bets on event A. There, B usually stands for the complement of A. In my opinion, the odds are a kind of hint given by experts to the players about what their risk is in games when they bid on the occurrence of certain random future events.
In medicine, biology, and in many common actuarial, social, and engineering practices, there is a need to apply tests to establish whether an individual (or an item) in the population possesses some property (let us use the terminology “belongs to category B”) or does not belong to this category.
Therefore, let B and A be two events, where A has the sense of a test factor (for example, the result of an experiment) used to find out whether an individual belongs to category B. The relative risk (we denote it briefly RR) of event B with respect to event A is defined by the rule

RR = P(B | A) / P(B | not A).
The point is that the larger the RR, the more the test (risk factor A) increases the probability of occurrence of B. For example, if we want to evaluate the influence of some risk factors (obesity, smoking, etc.) on the incidence of a disease (diabetes, cancer, etc.), we need to look at the value of the relative risk when test A is applied and indicates such a categorization to be a fact. It is a kind of “odds measure” that is useful to know. We illustrate this with an example from biostatistics below.
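As a small numeric sketch of this relative risk (the cohort counts below are purely hypothetical, invented for illustration):

```python
def relative_risk(exposed_cases, exposed_total, unexposed_cases, unexposed_total):
    """RR = P(B | A) / P(B | not A): the risk of outcome B among those
    exposed to risk factor A relative to those not exposed."""
    risk_exposed = exposed_cases / exposed_total
    risk_unexposed = unexposed_cases / unexposed_total
    return risk_exposed / risk_unexposed

# Hypothetical cohort: 30 of 100 smokers develop the disease,
# versus 10 of 100 non-smokers.
rr = relative_risk(30, 100, 10, 100)
print(round(rr, 2))  # 3.0: the disease is three times as likely among smokers
```

An RR near 1 would mean the factor A has no visible influence on B; the further above 1, the stronger the influence.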
It is well known that in medical research and in other experimental sciences, well-established tests are used to discover the presence of certain diseases. When the result of the test is positive, it is considered that the object possesses the quality for which it is tested. However, tests are not perfect. They are very likely to give a positive result when the tested object really has that quality; yet a test may give a negative result even though the object possesses the property, and, although it is unlikely, a test may give a positive result even when the subject does not possess the property in question. These issues are closely related to conditional probability, and biostatistics has established specific terminology in this regard, which should be known and used. The concepts of conditional probability play a special role here and are important in assessing the results of the tests.
The predictive value positive, PV+, of a screening test is the probability that a tested individual possesses the tested quality (such as being sick) when tested positive (T+); in other words,

PV+ = P(sick | T+).
The predictive value negative, PV−, of a screening test is the probability that the tested individual does not have the tested quality (is not sick), on the condition that the test was negative (T−); in other words,

PV− = P(not sick | T−).
The sensitivity of the test is the probability that the test is positive, provided that the individual has the tested quality; in other words,

sensitivity = P(T+ | sick).
The specificity of the test is the probability that the test gives a negative result, provided that the tested individual does not possess the quality for which it is being checked; in other words,

specificity = P(T− | not sick) = P(no symptom detected | no disease).
A false negative is the outcome of the test where an individual who was tested negative is in fact sick (possesses the tested quality).
To be effective for prediction of the disease, a test should have high sensitivity and high specificity. The relative risk of being sick when tested positive is then the ratio

RR = P(sick | T+) / P(sick | T−) = PV+ / (1 − PV−).
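These four measures, and the resulting relative risk, can be computed from the four cells of a test results table; a minimal sketch with hypothetical counts (1000 screened people, 100 of them sick):

```python
def screening_measures(tp, fp, fn, tn):
    """Sensitivity, specificity, PV+ and PV- of a screening test from its
    counts: tp = sick & positive, fp = healthy & positive,
    fn = sick & negative, tn = healthy & negative."""
    sensitivity = tp / (tp + fn)   # P(T+ | sick)
    specificity = tn / (tn + fp)   # P(T- | not sick)
    pv_pos = tp / (tp + fp)        # P(sick | T+)
    pv_neg = tn / (tn + fn)        # P(not sick | T-)
    return sensitivity, specificity, pv_pos, pv_neg

# Hypothetical data: of 100 sick people, 90 test positive;
# of 900 healthy people, 45 test positive.
sens, spec, pvp, pvn = screening_measures(tp=90, fp=45, fn=10, tn=855)
rr = pvp / (1 - pvn)   # relative risk of being sick when tested positive
print(sens, spec)      # 0.9 0.95
print(round(rr, 1))
```

Even with this quite good test, only about two thirds of the positives are actually sick (PV+ ≈ 0.67), because the disease is rare in the screened group; the odds-type RR, however, is large, which is exactly the signal the text describes.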
The use of RR in terms of odds is a very good and useful idea.
4.3. Reliability and Risk
Possibly for many natural reasons, the risk concept is most related to another complex concept—reliability. This is an exciting area of discussion, in my opinion not finished yet, and it will probably never be finished. I studied opinions on the web and have had detailed discussions with my Gnedenko Forum colleagues, and still have not come to any definite conclusions. Here are some brief results of my research.
In my modest opinion, risk is an objective-subjective feeling that something undesirable, a dangerous event, may happen under certain conditions. While one can explain in words what it is, this is not sufficient without some general framework and a numeric measure of that risk.
Let us see what the wise sources on the risk concept are saying.
Creating a reliable product that meets customer expectations is risky.
What is risk, and how does one go about managing it? The recent set of ISO (International Organization for Standardization) updates elevates risk management. Here are some details:
ISO 9000:2015 includes the definition of risk as the “effect of uncertainty on an expected result.”
ISO 31000:2009 includes the definition of risk as the “effect of uncertainty on objectives.”
The origin of the English word ‘risk’ traces back to the 17th century French and Italian words related to danger.
A dictionary definition includes “the possibility that something unpleasant or unwelcome will happen.”
Risk in a business sense may need slightly more refinement. The notes in the ISO standards expand the provided definition away from purely unwanted outcomes to include the concept of a deviation from the expected.
Surprise seems to be an appropriate element of risk. Surprise may include good and bad deviations from the expected.
For the purposes of the ISO standards, risk includes financial, operational, environmental, health, and safety considerations, and may impact business or societal objectives at strategic, project, product, or process levels.
The discussion about a specific risk should include the events and consequences. While we may discuss the risk of an event occurring, we should also include the ‘so what?’ element.
If an event occurs, then this consequence is the result. Of course, this can quickly become complex as events and associated consequences rarely have a one-to-one relationship.
Finally, the ISO notes on risk include a qualitative element to characterizing risk.
The likelihood (similar to probability, I think) and the value (in terms of money would be common) of the up- or downside of the consequences.
In my view, risk is a concept with many faces. It is possible that reliability may highlight some of them.
Following this study, it is possible to give the following proposition:
The risk (analogously, the reliability) is a complex notion that is measured by numerous indexes depending on different applications and situations. However, the basic measure should be unique.
For reliability professionals, these ISO definitions may seem familiar and comfortable. We have long dealt with the uncertainty of product or process failures. We regularly deal with the probability of unwanted outcomes. We understand and communicate the cost of failures.
What is new is the framework described by the ISO standards for the organization to identify and understand risk as it applies to the organization and to the customer.
Reliability risk now has a place in larger discussions concerning business, market, and societal risk management. I agree that reliability risk is a major component of the risks facing an organization. Witness the news-making recalls in recent years (nuclear plant accidents, plane crashes, life-threatening failures that sometimes happen, companies ruined). As reliability professionals, we use the tools to identify risks, to mitigate or eliminate risks, and to estimate future likelihoods and consequences of risks. How do we view the connection between risk and reliability? Why do car companies recall some vehicles from users to replace some parts in order to prevent unexpected failures?
We usually connect risk to some specific reliability characteristic. Here, I present some points of view of my Gnedenko Forum associates, albeit not an exhaustive review.
Here is the opinion of A. Bochkov, the corresponding secretary and driving force of the Gnedenko Forum,
https://gnedenko.net/ (accessed on 19 June 2021):
Risk and safety relate to humanity. Namely, a human assesses the degree of safety and risk considering his or her own actions during life, or the reliability of systems as a source of potential danger, or the risk that such a thing happens. Tools, machines, systems, and technical items do not feel risk. Risk is felt by people. How people estimate the risk is a different question. Safety, in most tools that people use, depends on their ergonomic design and instructions for use. There are ergonomic decisions that help users follow these instructions, and they are always in the process of improvement.
Safety is also a complicated concept. It has neither a unique definition nor a unique measure. Everything is specific and is a mixture of the objective and the personal. There are no numeric measures for safety. However, I agree with the statement that the higher the safety, the lower the risk. Still, a measurement of risk as a number that should make people, companies, and governments aware of some existing future risks is not clear. According to Bochkov, risk should be measured as the degree of difference of the current estimated state from the ideal. Here, no probability is used. Humans usually assess the probability, but humans do not always understand what they risk. The maximal price of the risk for humans is their own life. For the one who takes such a risk, the price has no numeric expression. Such a loss cannot be compensated. However, when a risk is not related to loss of lives, the maximum losses are estimated by the means available to those who make decisions on actions against risks. Usually, human lives are priced by the insurance agencies that pay for those lost lives.
Hence, this opinion is not far from the ISO specifications listed above, and it does not contribute clarity to the question of how to measure particular risks.
A slightly different opinion is expressed by V. Kashtanov, another member of the Gnedenko Forum advisory board. Again, I present his vision. Everything is based on the possibility of representing the situation by an appropriate mathematical process in its development.
A qualitative definition: The risk (danger) is a property of a real or modeled process.
It is common to talk about political, social, economic, technological, and other processes that possess risks. Then, in building models for such processes, it is necessary to determine the set of states of each process and to represent its evolution as a random walk within the set of all states. With this, the above definition of risk makes sense and can be understood. The danger is when the process passes into the set of risky states; safety is when the process is out of the risky states. Such an approach allows us to understand the concept of risk and a way to assess it (author’s note): as the probability of passing from the safe set into the risky one, calculated with the use of the respective model.
Risk (maybe catastrophe) is being in that set of risky states. Assessment of the risk means measuring the chance of getting there from the state where one currently is.
Therefore, without mathematical models, we may not be able to give good definitions for concepts widely used in our scientific and social life. According to Kashtanov, the following definition should be valid:
Risk as a quantitative indicator assessing the danger is some functional operator calculated on the set of trajectories of the process that describes the evolution (the functional behavior) of the studied system.
By the way, a controller may interfere with the system working, and have some control on the process.
Uncertainty factors (randomness, limited information, impossibility to observe process states, or making measurements with errors) create additional challenges in risk evaluation and control.
I like this approach since it gives an explanation of the models of risky events and allows one to measure the chance of getting into them from the safe states.
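A minimal sketch of this random-walk view, using a hypothetical three-state Markov chain (the states, the transition probabilities, and the choice of state 2 as the risky set are all my illustrative assumptions, not Kashtanov's model):

```python
# States 0 and 1 form the safe set; state 2 is the (absorbing) risky set.
P = [
    [0.90, 0.08, 0.02],
    [0.20, 0.70, 0.10],
    [0.00, 0.00, 1.00],
]

def risk_within(P, start, steps):
    """Probability that the walk, started in `start`, has entered the
    risky state 2 within `steps` transitions. Since state 2 is absorbing,
    "is in state 2" equals "has ever visited state 2"."""
    dist = [0.0] * len(P)
    dist[start] = 1.0
    for _ in range(steps):
        dist = [sum(dist[i] * P[i][j] for i in range(len(P)))
                for j in range(len(P))]
    return dist[2]

# The assessed risk grows with the time horizon considered.
print(round(risk_within(P, start=0, steps=10), 3))
print(round(risk_within(P, start=0, steps=50), 3))
```

This is exactly the quantity Kashtanov's qualitative definition points to: the chance of passing from the safe set into the risky one, computed from the model of the process.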
Risk as a two-dimensional process
One further opinion is that of our RTA chief editor V. Rykov, who is an established expert in reliability. In his book [7], he relates almost every system reliability characteristic to some respective risk, with an emphasis on the economic consequences when a risky event takes place; probability models are used in each situation. My brief review follows.
A variety of risks accompany individual people throughout their lives. The same happens with various industrial product lines, agricultural and financial companies, biological and environmental systems, and many other units. Risk appears due to the uncertainty of some events that may occur, with respect to which corresponding decisions and actions should be taken. Mathematical risk theory originated and developed in actuarial science, where the risk is that an insurance company will be ruined, and it is measured by the probability of ruin.
However, nowadays, the understanding of risk is related to the occurrence of some “risky” event and with its related consequences in terms of material or monetary losses to restore. Numerous examples support this position, as one can read in Rykov’s book [
7]. However, we focus on situations related to reliability. Before that, let us present his mathematical definition of the risk given there.
Risk is related to the occurrence of some random (risky) event A, whose probability Pt(A) varies in time t, depending on current conditions. Occurrence of such an event at time t generates some damage measured by the value Xt. In this way, the risk is characterized by two variables (T, XT), where T is the time of the occurrence of the risky event and XT is the measure of the damage then. Both components can be random.
In reliability theory, T can be the time of use of a technical system, which may vary according to the reliability model of management of this system and of its structure. Application of this approach is demonstrated in the analysis of technological risk for popular reliability systems in the framework of some known failure models. The focus is on the measured risk.
As a basic measure, the two-dimensional distribution function F(t, x) = P{T ≤ t, XT ≤ x} is proposed. I like this approach. It admits a great deal of analytical treatment of characteristics and cases, as demonstrated in this book and in many new research projects.
However, this is just within theoretical frameworks and assumptions. Practical applications need sufficient data for such a function F(t, x) to be estimated. I do not refer to any particular result from the book. Just note that the variable T varies depending on the reliability system model in which a catastrophic event may take place; then, the value of the losses XT will be known. Additionally, it may happen that the value of the losses exceeds some critical value Xcritical, or comes close to it. Then, the process should be stopped (times to stop are also interesting to analyze in such an approach) to prevent the risk. Such a discussion was not included in that book, but it deserves attention.
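When data are scarce, one can still explore F(t, x) under explicit model assumptions. A minimal Monte Carlo sketch, where the distributions of T and XT (independent exponentials with rates 0.5 and 1.0) are purely my illustrative assumptions:

```python
import random

def estimate_F(t, x, n=100_000, seed=1):
    """Monte Carlo estimate of F(t, x) = P{T <= t, XT <= x}, assuming
    T ~ Exp(0.5) (time of the risky event) independent of
    XT ~ Exp(1.0) (damage at that time)."""
    rng = random.Random(seed)
    hits = sum(1 for _ in range(n)
               if rng.expovariate(0.5) <= t and rng.expovariate(1.0) <= x)
    return hits / n

est = estimate_F(t=2.0, x=1.0)
# For these independent exponentials the exact value is
# (1 - e**-1) * (1 - e**-1), approximately 0.3996.
print(round(est, 2))
```

In real applications, the two marginal distributions (and their dependence) would have to be estimated from failure and loss data, which is exactly the practical difficulty noted above.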
Somehow, I could not see any satisfactory measure of the risk shown to be different from the amount of expected losses. Average characteristics can be found there, but no clear measure of the risk that could be used as a BELL THAT RINGS WHEN THE RISK IS NEARBY. I believe that if the risk is measured in terms of odds, then when the odds reach a certain level, the bell should sound an alarm for those who are concerned about that risk.
One more remark on the risks in reliability.
During my work on the “risk issue”, we had a vital discussion with my colleagues on the Gnedenko Forum Advisory Board. As a result of this discussion, I arrived at a general definition of the risk in a random process, and ways for its evaluation.
First, random processes need some realistic probability model in which the set of states Ω of the process is defined (known, mathematically described, possibly controlled) at any time, and the dynamic probabilities of the changes are known. In other words, a probability space {Ω, Ft, Pt} is defined. Here, Ω is the set of all possible states of the process, and it does not vary in time; Ft describes the uncertainty (the collection of observable events) at time t, and Pt is the probability measure that acts at that time. Let Bt be the set of undesired (risky) events, an element of Ft, and let At be the set of current states, also an element of Ft. At could be the result of a test performed at time t.
Definition: Let it be that at time t for a process, there is a set Bt of undesired (risky) events, and let At be the set of current states of the process in the probability space {Ω, Ft, Pt}. The risk that describes the evolution of the observed process is measured by the relative risk

RRt = Pt(Bt | At) / Pt(Bt | not At)

in terms of odds.
This definition is borrowed from the measurement of risk in biostatistics, as presented above. By the way, in the cases discussed in Bochkov’s terms, where the safety and risky sets have nothing in common, this measure equals 0; in Kashtanov’s and Rykov’s considerations, it needs calculation. Please note that At and Bt may overlap. The more At covers Bt, the higher the risk. I think that this is a natural measure.
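As a toy numeric check of this measure (the sample space, the uniform probability, and the particular sets At and Bt are hypothetical choices of mine):

```python
# A finite probability space: states 0..9, each equally likely.
omega = set(range(10))
B_t = {6, 7, 8, 9}   # undesired (risky) states at time t
A_t = {4, 5, 6, 7}   # current states indicated by a test at time t

def prob(event):
    return len(event) / len(omega)   # uniform probability on omega

def relative_risk(A, B):
    """RR = P(B | A) / P(B | not A), the odds-type risk measure above."""
    p_b_given_a = prob(B & A) / prob(A)
    not_a = omega - A
    p_b_given_not_a = prob(B & not_a) / prob(not_a)
    return p_b_given_a / p_b_given_not_a

print(round(relative_risk(A_t, B_t), 3))    # 1.5
print(relative_risk({0, 1}, B_t))           # 0.0: disjoint sets give risk 0
```

The second call illustrates the disjoint case mentioned above: when the current states have nothing in common with the risky set, the measure is 0; the more the current set covers the risky one, the larger the ratio becomes.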
Another different point of view is presented in the N. Singpurwalla monograph [
8]. He says:
The management of risk calls for its quantification, and this in turn entails the quantification of its two elements: the uncertainty of outcomes and the consequences of each outcome. The outcomes of interest are adverse events such as the failure of an infrastructure element (e.g., a dam) or the failure of a complex system (e.g., a nuclear power plant); or the failure of a biological entity (e.g., a human being). ‘Reliability’ pertains to the quantification of the occurrence of adverse events in the context of engineering and physical systems. In the biological context, the quantification of adverse outcomes is done under the label of ‘survival analysis’. The mathematical underpinnings of both reliability and survival analysis are the same; the methodologies can sometimes be very different. A quantification of the consequences of adverse events is done under the aegis of what is known as utility theory. The literature on reliability and survival analysis is diverse, scattered, and plentiful. It ranges over outlets in engineering, statistics (to include biostatistics), mathematics, philosophy, demography, law, and public policy. The literature on utility theory is also plentiful, but is concentrated in outlets that are of interest to decision theorists, economists, and philosophers. However, despite what seems like a required connection, there appears to be a dearth of material describing a linkage between reliability (and survival analysis) and utility. One of the aims of [
8] is to help start the process of bridging this gap. This can be done in two ways. The first is to develop material in reliability with the view that the ultimate goal of doing a reliability analysis is to appreciate the nature of the underlying risk and to propose strategies for managing it. The second is to introduce the notion of the ‘utility of reliability’, and to describe how this notion can be cast within a decision theoretic framework for managing risk. To do the latter, he makes a distinction between reliability as an objective chance or propensity, and survivability as the expected value of one’s subjective probability about this propensity. In other words, the point of view of this book is based mainly on the Bayesian approach. It deserves to be studied, but we do not include more discussion of it here.
I am not sure that the risks in reliability analysis should stop here. Reliability itself is defined as the probability that a system functions at a given time. Therefore, the fact that it does not work is a kind of risky event. The availability coefficient is also the probability of an event that the system is able to function at a certain time. Additionally, many other characteristics, like failure rates, numbers of renewals, maintenance costs, effectiveness, expenses for supporting functionality, etc., have quantitative measures that could be used as risky variables. Each of these requires the construction of an appropriate probability space, where the above general definition and evaluation of the risk can be applied. Therefore, the issues of risk assessment are not finished yet. There are many open questions for researchers to work on.