## *2.1. Compromised DO<sub>2</sub>: A Primary Clinical Challenge to Effective Medical Monitoring*

Hemorrhage is the leading cause of death after major trauma in both civilian and military settings [9–13]. If not controlled in its early stages, hemorrhage can result in inadequate systemic oxygen delivery (DO<sub>2</sub>) to vital organs (e.g., brain, heart, gut), which without effective intervention can rapidly lead to organ dysfunction and tissue death [14]. Clinically, DO<sub>2</sub> is assessed indirectly through standard vital signs such as blood pressure. However, improvement in blood pressure alone does not correlate with oxygen delivered at the tissue level, as supported by the observation that crystalloid fluids can elevate systolic pressure while simultaneously worsening patient outcomes [9]. Clinicians frequently define a systolic pressure <90 mmHg as hypotension incapable of sustaining adequate DO<sub>2</sub>, but more recent data suggest this threshold may not accurately represent risk for poor clinical outcome [4,15,16]. To this end, it has been proposed that optimal assessment of patient status demands actual measurement of systemic DO<sub>2</sub> [17–19].

## *2.2. Current Vital Sign Monitoring*

One limitation of most modern monitoring systems is a bias toward capture of only standard vital signs (Table 1). Standard vital signs exhibit little change during the early stages of volume loss due to physiological compensatory responses [20–25]. Such responses (e.g., deep inspiration, tachycardia, and vasoconstriction) regulate and maintain blood pressure and tissue perfusion prior to the onset of decompensated shock during the early stages of hemorrhage, sepsis, dehydration, and other forms of central hypovolemia [26–29].


**Table 1.** Qualitative timing of changes in traditional vital signs and blood chemistries during progressive central hypovolemia. Modified from Convertino et al. [14,22] and Moulton et al. [30].

In an effort to identify and compare the time course of changes in standard vital signs and physiological compensatory responses during the early stages of blood loss, lower body negative pressure (LBNP) has emerged as a validated model for controlled progressive reductions in central blood volume that mimics the physiology of hemorrhage in humans [14,31,32]. Like hemorrhage, LBNP reduces filling of the heart, which in turn reduces cardiac stroke volume and output, resulting in lower DO<sub>2</sub> (Figure 1). Use of this model of human hemorrhage has consistently revealed that commonly relied upon vital signs either are not specific to the condition of blood loss or do not change until too late in the clinical course of reduced central blood volume to allow optimized patient care (Table 1).

**Figure 1.** Human subject placed in the lower body negative pressure (LBNP) chamber used to induce progressive reductions in cardiac filling (preload), stroke volume, cardiac output, and DO<sub>2</sub> similar to hemorrhage.

Currently used monitors track limited vital sign measurements and chemistries on an interval basis (e.g., blood pressure), offer limited capability for continuous, real-time physiological assessment (e.g., electrocardiogram, pulse, oxygen saturation), and are often non-specific to the magnitude of hypovolemia. In addition, current commercial monitoring systems for emergency settings are bulky and power hungry, have wires that interfere with patient care, and are sensitive to motion artifact. Despite the above findings and limitations, clinicians continue to rely upon standard vital signs or blood chemistries when deciding to intervene because new and more effective monitoring technologies are not available.

## *2.3. Accuracy, Sensitivity, and Specificity*

The effectiveness of any monitoring technology relies on its ability to provide accurate, sensitive, and specific information about the clinical condition of the patient. In this regard, it is critical to count the cases correctly identified as unhealthy (True Positives, *TP*), correctly identified as healthy (True Negatives, *TN*), incorrectly identified as healthy (False Negatives, *FN*), and incorrectly identified as unhealthy (False Positives, *FP*). Of course, all of this requires an agreed-upon reference gold standard. Once these counts are quantified, the accuracy, sensitivity, and specificity of the measurement can be assessed. Within this framework, accuracy can be estimated as the ratio of correctly classified cases (*TP* plus *TN*) to the sum of all measured cases. Mathematically, this can be stated as:

$$\text{Accuracy} = \frac{TP + TN}{TP + TN + FP + FN} \tag{1}$$

Since sensitivity of a measurement represents its ability to correctly identify unhealthy cases, it can be calculated as the ratio of *TP* to the sum of both true positive and false negative unhealthy cases. Mathematically, sensitivity can be stated as:

$$\text{Sensitivity} = \frac{TP}{TP + FN} \tag{2}$$

Specificity refers to the ability of a diagnostic modality to correctly identify or predict those individuals who are healthy. That is, specificity can be calculated as the ratio of *TN* to the sum of all healthy cases and can be stated mathematically as:

$$\text{Specificity} = \frac{TN}{TN + FP} \tag{3}$$
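Equations (1)–(3) can be sketched as a short function operating on raw confusion-matrix counts. The function and variable names below are illustrative, not from the source:

```python
def diagnostic_metrics(tp, tn, fp, fn):
    """Compute accuracy, sensitivity, and specificity (Equations (1)-(3))
    from the raw counts of a dichotomous diagnostic test."""
    accuracy = (tp + tn) / (tp + tn + fp + fn)  # Eq. (1): all correct / all cases
    sensitivity = tp / (tp + fn)                # Eq. (2): correctly flagged unhealthy
    specificity = tn / (tn + fp)                # Eq. (3): correctly cleared healthy
    return accuracy, sensitivity, specificity

# Example: of 100 unhealthy cases, 80 are flagged and 20 missed;
# of 100 healthy cases, 90 are cleared and 10 falsely flagged.
acc, sens, spec = diagnostic_metrics(tp=80, tn=90, fp=10, fn=20)
# acc = 0.85, sens = 0.80, spec = 0.90
```

Note that accuracy alone can mislead when healthy and unhealthy cases are imbalanced, which is why sensitivity and specificity are reported separately.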

In addition to accuracy, sensitivity, and specificity, Youden's J statistic was first described in 1950 as a way to capture the performance of a dichotomous diagnostic test in a single measurement [33]. The Youden's J statistic is calculated as:

$$J = \frac{TP}{TP + FN} + \frac{TN}{TN + FP} - 1 \tag{4}$$

Or, in its simpler form:

$$J = \text{Sensitivity} + \text{Specificity} - 1 \tag{5}$$

The values of the J statistic range from 0 to 1. A test with a J value of 0 gives the same proportion of positive results for those with the disease state and those without, and is therefore useless for assessing patient status. Conversely, a J value of 1 indicates that an assessment modality correctly identifies all subjects with the disease state and all those without. Quantitative comparisons of sensitivity, specificity, and the J statistic will be used in a subsequent section of this review to compare standard vital signs with hemodynamic measurements and thereby identify the physiological signals wearable sensors require to optimize diagnostic accuracy.
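The two boundary cases of the J statistic can be made concrete with a minimal sketch of Equation (5); the counts and names below are illustrative, not from the source:

```python
def youden_j(tp, tn, fp, fn):
    """Youden's J statistic (Equations (4)/(5)): sensitivity + specificity - 1."""
    sensitivity = tp / (tp + fn)  # proportion of unhealthy correctly flagged
    specificity = tn / (tn + fp)  # proportion of healthy correctly cleared
    return sensitivity + specificity - 1

# Useless test: positive for half of the diseased AND half of the healthy -> J = 0
j_useless = youden_j(tp=50, tn=50, fp=50, fn=50)

# Perfect test: all diseased flagged, all healthy cleared -> J = 1
j_perfect = youden_j(tp=100, tn=100, fp=0, fn=0)
```

Because J combines sensitivity and specificity into one number, it is convenient for ranking candidate measurements, as done later in this review.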
