#### *2.2. Fracture Classification in the Calcaneus*

On the basis of subtalar joint involvement, calcaneal fractures observed in CT images are divided into two categories: intra-articular and extra-articular. An intra-articular fracture involves the joints, including damage to the connective tissue between the two bones. An extra-articular fracture does not involve the joints; instead, it involves pieces of bone pulled from the calcaneus by a ligament.

The intra-articular fractures of the calcaneus represent about 75% of all calcaneal fractures in adults [1,2], where the prospect of recovery depends on how severely the calcaneus was damaged during the injury. Several classification systems for intra-articular fractures of the calcaneus have been developed, of which the Sanders system [20,21] is the most commonly used, owing to its correlation with clinical outcomes and its lower inter-observer variability [22]. This classification divides intra-articular fractures into four types based on the number of fragments in the calcaneus: Type I, Type II, Type III, and Type IV.

Figure 1 shows examples of calcaneal fractures on the transverse and coronal planes; the red circle shows the location of the calcaneus in the tarsal structure. Type I is a non-displaced intra-articular fracture with a displacement of less than 2 mm; this is also known as a line fracture. Types II–IV are displaced intra-articular fractures with an increasing number of fragments, where Type IV has more than three fragments.

Extra-articular fractures represent about 25% of calcaneal fractures and include all fractures that do not involve the posterior aspect of the subtalar joint [1,8]. Extra-articular fractures are normally caused by trauma, such as a crush or mild injury. Based on the location of the fracture, several methods [1,10,12] classify extra-articular fractures of the calcaneus into three types: Type A, Type B, and Type C. Figure 2 shows examples of fractures on the sagittal plane; the red circle shows the location of the calcaneal fracture in the sagittal images. A Type A fracture involves the anterior process of the calcaneus (Figure 2b); a Type B fracture extends through the body or middle of the calcaneus, including the lateral processes (Figure 2c); and a Type C fracture involves the posterior calcaneus (Figure 2d).

**Figure 1.** The Sanders system of fracture classification on the calcaneus [1,2,20]: (**a**) Normal calcaneus in transverse and coronal images; (**b**) Type II; (**c**) Type III; and (**d**) Type IV.

**Figure 2.** Classification of calcaneus fractures in sagittal images [1,8,10]: (**a**) Normal calcaneus; (**b**) Type A; (**c**) Type B; and (**d**) Type C.

#### *2.3. System Overview*

Figure 3 shows a general overview of the system proposed in this paper. The algorithm comprises two steps: Step 1 detects the calcaneus location in the input image using a machine learning approach; Step 2 segments the calcaneus ROI based on several morphological methods and detects the contours of the fragmented regions to determine the type of calcaneal fracture.

**Figure 3.** System overview of the proposed method.

#### *2.4. Step 1: Detection of Calcaneus Location*

In the first step, the extended local binary pattern (LBP) [23] and a cascade classifier [24] are used to determine the anatomical plane orientation of the input image based on the shape of the tarsal bone. The basic idea behind LBP is that an image is composed of micro-patterns; the LBP is a first-order circular derivative pattern generated by concatenating the directions of the binary gradient. CT scans are generally available as DICOM files, where each image contains a 2D array of pixel intensities. The CT DICOM image is a grayscale image in which the bone area is represented by white pixels surrounded by gray pixels. Therefore, LBP, as a texture descriptor, is well suited to describing the information in CT DICOM images.

The LBP operator assigns a label to each pixel location based on the gray-level intensities of its circular neighborhood. The LBP code label can be expressed as:

$$LBP(S,R) = \sum\_{s=0}^{S-1} b(g)\,2^s \tag{1}$$

where *g* = *gs* − *gc*, *S* is the number of pixels in a small circular neighborhood of radius *R* (*R* can take a value within 1–3; in this study, we set *R* = 1), *gs* is the gray-level intensity of the *s*-th neighboring pixel, *gc* is the gray-level intensity of the center pixel, and the function *b*(*g*) is defined as:

$$b(g) = \begin{cases} 1, & \text{if } g \ge 0 \\ 0, & \text{otherwise} \end{cases} \tag{2}$$

The LBP code label histogram contains information about edge distributions and other local features in the image, so it can be used to describe the texture of the CT DICOM image. The LBP feature is extracted from the input image at each pixel location (*i*,*j*). Then, the image is divided into several small non-overlapping blocks to obtain the feature histogram. The region blocks *Ak* are of equal size, where *k* = 0, ... , *K* and *K* is the number of blocks in the image. The LBP histogram of the labeled image is defined as:

$$H(l) = \sum\_{(i,j)\in A\_k} h\_{(i,j)}(l) \tag{3}$$

where *h*(*i*,*j*)(*l*) is the value of the bin *l*, which is taken from a look-up table of 2<sup>9</sup> − 1 = 511 bins; *l* is the LBP feature computed at location (*i*,*j*); and *L* is the number of different labels produced by the LBP operator. Figure 4 shows the process that is used to provide three types of information: code labels for local textures, histograms in local regions, and feature histograms, whereby the tarsal bone structure in each anatomical plane orientation is described. The feature histogram is the concatenation of each region histogram into a single LBP histogram.

**Figure 4.** Input image region.
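As an illustration, Equations (1)–(3) map directly onto a few lines of NumPy. This is a minimal sketch, not the authors' implementation: the eight fixed neighbor offsets approximate the circular neighborhood at *R* = 1, and the block size and bin count below are assumed values.

```python
import numpy as np

def lbp_code(img, i, j, R=1):
    """LBP code at pixel (i, j): Eqs. (1)-(2) with S = 8 neighbors."""
    gc = img[i, j]
    # 8 neighbors (clockwise) approximating a circle of radius R
    offsets = [(-R, -R), (-R, 0), (-R, R), (0, R),
               (R, R), (R, 0), (R, -R), (0, -R)]
    code = 0
    for s, (di, dj) in enumerate(offsets):
        gs = img[i + di, j + dj]
        if int(gs) - int(gc) >= 0:   # b(g) = 1 if g = gs - gc >= 0
            code |= 1 << s           # weight 2^s
    return code

def block_histograms(label_img, block=16, bins=256):
    """Eq. (3): histograms of LBP labels over non-overlapping blocks,
    concatenated into a single feature vector."""
    h, w = label_img.shape
    feats = []
    for y in range(0, h - block + 1, block):
        for x in range(0, w - block + 1, block):
            region = label_img[y:y + block, x:x + block]
            hist, _ = np.histogram(region, bins=bins, range=(0, bins))
            feats.append(hist)
    return np.concatenate(feats)
```

In practice the extended LBP uses a 511-bin look-up table, as stated above; the 256-bin histogram here is only a placeholder for the labeling scheme.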

In this step, the AdaBoost algorithm [25,26] is used to select the features of the classifier in the training stage. A classifier is selected that evaluates a single LBP histogram to maximize the margin between positive and negative samples. The classifier determines the best threshold classification function for each histogram so that the number of misclassified instances is minimized. The negative input images for each anatomical plane are the orientations of the other planes. In the training stage, the features in the positive and negative images are learned as positive and negative labels, respectively. Figure 5 shows example training images of the tarsal bone shape in each anatomical plane for various input image sizes. In this study, the images used have minimum and maximum sizes of 126 × 147 and 980 × 1024 pixels, respectively.

**Figure 5.** Example images of the tarsal bone: (**a**) Coronal; (**b**) Transverse; and (**c**) Sagittal.

A training sample is denoted (*xm*, *ym*), *m* = 1, 2, ... , *M*, where *ym* = 0 or 1 is the class label (negative or positive, respectively) for the sample *xm*. First, the initial weight vector is set to ω1(*m*) = 1/*M*. A weak classifier is defined as λ*n*(*xm*) = 0 or 1, where *n* = 1, 2, ... , *N* is the iteration number. The error associated with the classifier is evaluated as:

$$\varepsilon\_n = \sum\_{m=1}^{M} \omega\_n(m) \left| \lambda\_n(x\_m) - y\_m \right| \tag{4}$$

The selected classifier is used to update the weight vector as:

$$\omega\_{n+1}(m) = \omega\_n(m)\,\beta\_n^{1-\tau\_m}, \quad \text{where } \tau\_m = \begin{cases} 0, & \text{if } x\_m \text{ is classified correctly} \\ 1, & \text{otherwise} \end{cases} \tag{5}$$

and β*<sup>n</sup>* is the weighting parameter computed from:

$$
\beta\_n = \frac{\varepsilon\_n}{1 - \varepsilon\_n} \tag{6}
$$

The result of the training stage is the label of each region, represented as:

$$\mathcal{W}(x) = \begin{cases} 1, & \text{if } \sum\_{n=1}^{N} \left[ \lambda\_n(x) \log\left( \frac{1}{\beta\_n} \right) \right] \ge \frac{1}{2} \sum\_{n=1}^{N} \log\left( \frac{1}{\beta\_n} \right) \\ 0, & \text{otherwise} \end{cases} \tag{7}$$
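The training loop of Equations (4)–(7) can be sketched as discrete AdaBoost over a pool of weak classifiers. This is a simplified illustration, assuming each weak learner is a callable returning 0/1 labels for a batch of samples; the small epsilon guard against a zero error is an implementation detail, not from the paper.

```python
import numpy as np

def adaboost_train(X, y, weak_learners, N):
    """Discrete AdaBoost following Eqs. (4)-(6). X holds M feature
    vectors, y holds 0/1 labels, weak_learners is a list of candidate
    classifiers h(X) -> 0/1 array."""
    M = len(y)
    w = np.full(M, 1.0 / M)                    # omega_1(m) = 1/M
    chosen, betas = [], []
    for _ in range(N):
        w = w / w.sum()                        # normalize weights
        # Eq. (4): weighted error of each candidate; select the best
        errs = [np.sum(w * (h(X) != y)) for h in weak_learners]
        n = int(np.argmin(errs))
        eps = max(errs[n], 1e-10)              # guard against beta = 0
        if eps >= 0.5:                         # no better than chance
            break
        beta = eps / (1.0 - eps)               # Eq. (6)
        tau = (weak_learners[n](X) != y).astype(float)
        w = w * beta ** (1.0 - tau)            # Eq. (5)
        chosen.append(weak_learners[n])
        betas.append(beta)
    return chosen, betas

def adaboost_predict(X, chosen, betas):
    """Strong classifier of Eq. (7)."""
    alphas = np.log(1.0 / np.array(betas))
    score = sum(a * h(X) for a, h in zip(alphas, chosen))
    return (score >= 0.5 * alphas.sum()).astype(int)
```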

In the testing stage, the shape of the tarsal bones is detected by the sliding-window method [24], in which each sub-window carries the labels for each anatomical plane orientation from the training stage. Each area passed over by a sub-window is labeled at each classification stage as either positive (1) or negative (0). If the label is positive, the region is recognized as the object and the classifier passes to the next stage; otherwise, the region is rejected immediately. The last stage then shows the result of the detector in the current window, as shown in Figure 6. If more than one anatomical plane orientation is detected in the input image, the largest area among the current windows is selected. The location of the calcaneus, based on the selected anatomical plane orientation, is determined according to Figure 7. The calcaneus ROI is then used as the input location for the next step.

**Figure 6.** Cascade classifier stage to select anatomical plane orientation.

**Figure 7.** Region of interest (ROI) of the calcaneus based on the anatomical plane orientation.
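A sliding-window cascade of this kind is available off the shelf in OpenCV. The sketch below assumes a cascade trained separately with `opencv_traincascade` (the file name `tarsal_lbp.xml` is a placeholder, not from the paper) and keeps only the largest detection, as described above.

```python
def largest_box(boxes):
    """Pick the detection window (x, y, w, h) with the largest area;
    used when more than one plane orientation is detected."""
    return max(boxes, key=lambda b: int(b[2]) * int(b[3]))

def detect_tarsal_roi(gray, cascade_path="tarsal_lbp.xml"):
    """Sliding-window cascade detection. The cascade file name is an
    assumption; any LBP-feature cascade trained with
    opencv_traincascade would fit here."""
    import cv2  # local import: only this function needs OpenCV
    cascade = cv2.CascadeClassifier(cascade_path)
    boxes = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=3)
    if len(boxes) == 0:
        return None
    x, y, w, h = largest_box(boxes)
    return gray[y:y + h, x:x + w]
```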

#### *2.5. Step 2: Segmentation of the Calcaneus Fragments*

#### 2.5.1. Classification of Calcaneal Fractures in Coronal and Transverse Images

In the next step, a regional segmentation method based on contour detection [27] is applied to determine the type of fracture. Figure 8 shows the steps for determining the type of calcaneal fracture in coronal and transverse images.

**Figure 8.** Algorithm for determining the type of calcaneal fracture in coronal and transverse images.

Figure 9a shows the tarsal bone region selected using LBP features and the cascade classifier. Figure 9b shows the location of the calcaneus determined in accordance with Figure 7 (see Section 2.4). Several morphological operations are applied to the calcaneus region to separate the fractional area. Figure 9c shows the calcaneus region converted into a binary image using Otsu segmentation [28]. This step selects a threshold that separates black (background) from white (foreground) pixels, and the automatic threshold selection is then used to segment the image. The optimal threshold minimizes the within-class weighted variance [28], which is equivalent to maximizing the between-class variance.

**Figure 9.** (**a**) Tarsal bone detection; (**b**) Calcaneus region; (**c**) Binary image of calcaneus region; and (**d**) Calcaneus region after morphological operations.

For each given threshold, the between-class variance can be computed by

$$
\sigma\_B^2(z) = q\_b(z)(1 - q\_b(z)) \left(\mu\_b(z) - \mu\_f(z)\right)^2 \tag{8}
$$

where *z* = 1, 2, ... , 256 is the gray level of the input calcaneus image, and μ*<sup>b</sup>* and μ*<sup>f</sup>* are the means of the background and foreground, respectively, defined as:

$$\mu\_b(z) = \sum\_{i=0}^{z-1} \frac{i \times P(i)}{q\_b(z)} \tag{9}$$

$$\mu\_f(z) = \sum\_{i=z}^{E-1} \frac{i \times P(i)}{q\_f(z)} \tag{10}$$

where *E* is the number of bins in the histogram, and $q\_b(z) = \sum\_{i=0}^{z-1} P(i)$ and $q\_f(z) = \sum\_{i=z}^{E-1} P(i)$ are the cumulative gray-level probability distributions *P*(*i*) for the background and foreground, respectively. Then, the optimum threshold is obtained by maximizing the between-class variance.

$$\text{Th} = \underset{1 \le z < 256}{\arg\max}\ \sigma\_B^2(z) \tag{11}$$
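Equations (8)–(11) amount to an exhaustive search over candidate thresholds. A direct NumPy transcription (assuming an 8-bit input image, *E* = 256) might look like:

```python
import numpy as np

def otsu_threshold(img, E=256):
    """Otsu threshold from Eqs. (8)-(11): maximize the between-class
    variance sigma_B^2(z) over all candidate thresholds z."""
    hist, _ = np.histogram(img, bins=E, range=(0, E))
    P = hist / hist.sum()                    # gray-level probabilities
    best_z, best_var = 1, -1.0
    for z in range(1, E):
        qb = P[:z].sum()                     # background mass q_b(z)
        qf = 1.0 - qb                        # foreground mass q_f(z)
        if qb == 0 or qf == 0:
            continue
        mu_b = (np.arange(z) * P[:z]).sum() / qb        # Eq. (9)
        mu_f = (np.arange(z, E) * P[z:]).sum() / qf     # Eq. (10)
        var = qb * (1 - qb) * (mu_b - mu_f) ** 2        # Eq. (8)
        if var > best_var:
            best_var, best_z = var, z
    return best_z
```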

Figure 9d shows the result of the erosion and dilation morphological operations [29] applied to the binary image. The erosion filter removes white pixels along the foreground boundaries formed by fragments of the calcaneus, replacing each pixel with the minimum of its neighborhood. Dilation adds white pixels to the foreground boundaries in the image, replacing each pixel with the maximum of its neighborhood.
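For reference, binary erosion and dilation reduce to taking the minimum and maximum over a structuring element. The plain-NumPy sketch below uses a square k × k kernel; the 3 × 3 default is an assumption, since the paper does not state the kernel size.

```python
import numpy as np

def erode(binary, k=3):
    """Binary erosion: a foreground pixel survives only if its whole
    k-by-k neighborhood is foreground (neighborhood minimum)."""
    h, w = binary.shape
    r = k // 2
    padded = np.pad(binary, r, constant_values=0)
    out = np.ones_like(binary)
    for dy in range(k):
        for dx in range(k):
            out &= padded[dy:dy + h, dx:dx + w]
    return out

def dilate(binary, k=3):
    """Binary dilation: a pixel becomes foreground if any pixel in its
    k-by-k neighborhood is foreground (neighborhood maximum)."""
    h, w = binary.shape
    r = k // 2
    padded = np.pad(binary, r, constant_values=0)
    out = np.zeros_like(binary)
    for dy in range(k):
        for dx in range(k):
            out |= padded[dy:dy + h, dx:dx + w]
    return out
```

Applying erosion followed by dilation (an opening) removes thin spurs along fragment boundaries while roughly preserving fragment area.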

#### 2.5.2. Contour Detection

The final step uses a contour detection algorithm to extract the region boundaries of a calcaneal fracture. The initial value is set to 1 to determine the affiliation between new contours and existing contours. The outer border is a boundary of white pixels (1-components) enclosed by black pixels (0-components); the hole border is the boundary between a 0-component hole and the 1-component enclosing it. The frame is the boundary of the image, which is wrapped by 1-components. NBD is the sequential number of the contour in the current calculation, and LNBD is the number of the last border encountered when scanning from the starting point. Figure 10 shows the connected components and borders in the image.

**Figure 10.** Connected components and borders.

The sub-steps for detection of contour of the calcaneus ROI are as follows:

1. Find the starting point of the contour

Starting from the upper-left pixel at location (0,0) of the calcaneus ROI, the image is scanned row by row. A pixel value of 1 indicates an outer contour that has not yet been traversed; the pixel value is changed when the contour is followed. Values greater than 1 indicate an 8-connected case that can represent hole contours that have not yet been traversed.

2. Contour Following

Next, starting from the point obtained in the previous step, the image is scanned row by row and the edges of the 1-component are recorded. The square-tracing algorithm [30] is used, since it is suitable for 4-connected cases.


The outer contour is followed until the entire contour has been traversed in different directions. The pixel values are then updated during the contour-following procedure: if the neighboring point (*p* + 1, *q*) of the current contour point (*p*, *q*) has a pixel value of 0, the current point is set to –NBD. If the current point already holds a value other than 1, it remains unchanged; otherwise, it is set to NBD.


3. Build Contour Hierarchies

The next sub-step is to determine the relationships between the contours. The NBD of each border hit during Step 1 is scanned and recorded in LNBD. Each extracted boundary is stored as a point vector, which is retrieved and reconstructed into a full hierarchy of nested contours. Then, each contour is encoded with four points. The result is a closed two-dimensional contour for each remaining region, as shown in Figure 11a. The contour boundary is a dense set of sequential neighboring points called a blob. Blobs larger than 100 pixels are selected and their minimum enclosing circles are found, so that each blob represents one fragment, as shown in Figure 11b.

**Figure 11.** Fragment segmentation: (**a**) Contour detection and (**b**) Color segmentation.

In transverse and coronal images, if the calcaneus is segmented into a single complete blob, it is classified as a normal calcaneus. If one blob is detected but its area is incomplete, it is classified as Type I (line fracture). Otherwise, the type of fracture is determined by the number of blobs.
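The blob-counting rule above can be sketched with a simple 8-connected flood fill; the 100-pixel minimum area follows Section 2.5.2, while the breadth-first labeling and the `sanders_type` mapping are one possible implementation, not the authors' code.

```python
import numpy as np
from collections import deque

def count_blobs(binary, min_area=100):
    """8-connected component labeling; blobs smaller than min_area
    pixels are ignored, as in the method above."""
    h, w = binary.shape
    seen = np.zeros_like(binary, dtype=bool)
    blobs = 0
    for sy in range(h):
        for sx in range(w):
            if binary[sy, sx] and not seen[sy, sx]:
                area, q = 0, deque([(sy, sx)])   # flood-fill one component
                seen[sy, sx] = True
                while q:
                    y, x = q.popleft()
                    area += 1
                    for dy in (-1, 0, 1):
                        for dx in (-1, 0, 1):
                            ny, nx = y + dy, x + dx
                            if (0 <= ny < h and 0 <= nx < w
                                    and binary[ny, nx] and not seen[ny, nx]):
                                seen[ny, nx] = True
                                q.append((ny, nx))
                if area >= min_area:
                    blobs += 1
    return blobs

def sanders_type(n_blobs, fully_segmented):
    """Map the blob count to a Sanders type per the rule above."""
    if n_blobs <= 1:
        return "normal" if fully_segmented else "Type I"
    return {2: "Type II", 3: "Type III"}.get(n_blobs, "Type IV")
```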

#### 2.5.3. Classification of Calcaneal Fractures in Sagittal Images

Figure 12 defines the steps for determining the type of calcaneal fracture in sagittal images. The calcaneus region is divided into three equal areas, where each area represents an extra-articular region: Type A (left), Type B (middle), and Type C (right). The fracture classification in sagittal images is based on the area of the fracture line. However, it is difficult to separate these lines from the bone structure. Thus, a median filter [31] is applied to reduce the ambiguity between the bone structure and the fracture, as shown in Figure 13a. Then, automatic contrast enhancement [32] is implemented to obtain a clear fragment separation in the bone structure, as shown in Figure 13b. The image histogram, which shows the relationship between each gray level and its frequency, can be expressed as:

$$I\_{\text{hist}}(z) = \frac{\Delta\_z}{\Delta} \tag{12}$$

where Δ*<sup>z</sup>* is the number of pixels with gray level *z*, and Δ is the total number of pixels in the input image. Histogram equalization (HE) accumulates the histogram of pixel values in the input image and then remaps all pixel values to enhance the contrast. HE over the interval [0, *D* − 1] can be computed by:

$$Map\_j = F(j) = (D-1) \sum\_{z=0}^{j} I\_{\text{hist}}(z) \tag{13}$$

where *Mapj* denotes the mapping function *F*(*j*) that maps every pixel value *j* of the input image to its equalized value, and *D* is the dynamic range of HE in the output image.
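Equations (12) and (13) together are standard histogram equalization; a compact NumPy version (assuming an 8-bit image, *D* = 256) is:

```python
import numpy as np

def equalize(img, D=256):
    """Histogram equalization per Eqs. (12)-(13): I_hist(z) is the
    normalized histogram, and each pixel value j is mapped through the
    cumulative distribution scaled to [0, D-1]."""
    hist, _ = np.histogram(img, bins=D, range=(0, D))
    I_hist = hist / img.size                    # Eq. (12)
    F = np.round((D - 1) * np.cumsum(I_hist))   # Eq. (13) for every j
    return F[img].astype(np.uint8)              # remap each pixel
```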

Figure 13c shows the Otsu segmentation, which is implemented to convert Figure 13b into a binary image. Figure 13d shows the erosion and dilation which makes the fracture line more obvious. Figure 13e shows the fracture lines and fragmented areas using a contour detection algorithm (see Section 2.5.2). The largest blob is selected as the calcaneus region and other blobs can be recognized as fractures. Figure 13e indicates that there is more than one fracture line detected in the calcaneus region. Then, the largest line is selected to determine the type of fracture, based on the line position in the calcaneus body, as shown in Figure 14a.

**Figure 12.** Algorithm for determining the type of calcaneal fracture in sagittal images.

**Figure 13.** (**a**) Apply median filter; (**b**) Contrast enhancement; (**c**) Otsu segmentation; (**d**) Morphological operations; and (**e**) Fracture detection.

Figure 14a indicates the area of the selected fracture line. In Figure 14b, the calcaneal fracture is classified as Type B, since most of the fracture area lies in the middle region. In the case of fragments, the region of each fragment is selected, as shown in Figure 15a. In Figure 15b, the fracture type (Type B, since the boundary lies in the middle region) is determined by the area between the two fragments.

**Figure 14.** Steps to determine the type of fracture in case of a line fracture: (**a**) The region of the fracture line is represented by a yellow box and (**b**) The fracture type is based on the line region.

**Figure 15.** Steps to determine the type of fracture in case of fragment fracture: (**a**) The region of the fragment is represented by a yellow box and (**b**) The fracture type is based on the area between the two fragments.

Figure 16 shows another case of fragment detection. Figure 16a shows the areas of two fragments detected in the same region. Since the smaller fragment lies inside the larger fragmented region, only the largest fragment is selected, as shown in Figure 16b. Figure 16c indicates that this calcaneal fracture is classified as Type A, since the left area contains the two fragmented regions.

**Figure 16.** (**a**) Two fragments are in the same region; (**b**) The largest fragment is selected; and (**c**) The fracture type is based on the area between these two fragments.

#### **3. Experimental Results and Discussion**

Figures 17 and 18a show calcanei classified as non-fractured and Type I, respectively; both have one segmented region, represented by a single color in the calcaneus segmentation. In non-fractured calcaneus images, the calcaneus region is fully segmented. In Type I fracture images, the calcaneus region cannot be fully segmented, as there are several fracture lines in the calcaneus body.

**Figure 17.** Normal calcaneus segmentation: (**a**) Coronal; (**b**) Transverse; and (**c**) Sagittal.

Figure 18 shows each color in the calcaneus segmentation representing one fragment. Although the type of fracture can be determined from the number of fragments, the fragments vary in location and shape within the calcaneus. In images of Type II fractures, segmentation failures cause errors in the detection of the fracture type: the calcaneus is then detected as a Type III fracture. This is due to ambiguity in the bone structure, which leads to a single blob being erroneously split into two. In images of Type III fractures, segmentation failures cause two blobs to be detected as a single blob. This is due to ambiguity in the CT image, which prevents the algorithm from separating the white pixels between the two fragments.

Because of these factors, the most accurate detection results are for Type IV fracture images. Since Type IV fractures have more than three fragments, occasional segmentation errors do not significantly affect the detection of the fracture type, which is determined by the number of fragments.

**Figure 18.** Segmentation result of the calcaneal fractures based on Sanders classification: (**a**) Type I; (**b**) Type II; (**c**) Type III; and (**d**) Type IV.

Table 1 shows the accuracy of fracture detection in coronal and transverse images. In coronal images, fragment separation appears more clearly than in transverse images. In transverse images, the bone structure is slightly ambiguous, which can cause segmentation to fail to separate the foreground (bone structure) from the background. If segmentation fails, the fragment region cannot be filled as a single blob and is counted as an additional fragment. The proposed method achieves an average accuracy of 87.25% for coronal images and 83.25% for transverse images.

**Table 1.** Accuracy of the fractures type detection based on Sanders classification.


Figure 19 shows the fracture segmentation results in sagittal images, and Table 2 shows the accuracy of fracture type detection in sagittal images. In sagittal images, fractures in the calcaneus region are highly ambiguous and appear almost identical to the bone structure. Common mistakes in fracture detection in sagittal images are as follows:


The most accurate results in sagittal images are for Type B fractures, where the fracture area is located in the middle of the calcaneus. In images with Type A (left) and Type C (right) fractures, the fracture line is too faint within the bone structure, so some errors occur in the segmentation results. The proposed method achieves an average accuracy of 82% for sagittal images. Thus, the overall average accuracy for fracture type detection in the calcaneus is 84.17%.

**Figure 19.** Segmentation result of the calcaneal fractures in sagittal images: (**a**) Type A; (**b**) Type B; and (**c**) Type C.


The computational performance is summarized in Table 3 in terms of frames per second (fps). The average time cost is about 7.5 ms per frame, or 133 fps. The fastest processing is for sagittal images, which have the smallest calcaneus region compared to the other anatomical planes; the slowest is for coronal images, which have the largest calcaneus region. Segmentation accounts for the largest share of the computation time, so the size of the calcaneus region determines the processing speed of a CT image.

**Table 3.** Computational performance in fps.


Table 4 shows the performance in terms of True Positives (TP), False Positives (FP), False Negatives (FN), Precision Rate (PR), recall, and F-measure, based on the classification results indicated by the segmented areas. These results were checked manually for each image. TP is a detected area that corresponds to an actual fracture; FP is a detected area not related to a fracture; and FN is an area associated with a fracture that is not detected. The performance can be computed by:

$$PR = \frac{TP}{TP + FP} \tag{14}$$

$$\text{Recall} = \frac{TP}{TP + FN} \tag{15}$$

$$\text{F-measure} = 2 \times \frac{PR \times \text{Recall}}{PR + \text{Recall}} \tag{16}$$
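Equations (14)–(16) can be checked with a few lines of Python; the counts in the usage line are made-up example values, not results from the paper.

```python
def detection_scores(tp, fp, fn):
    """Precision rate, recall, and F-measure, Eqs. (14)-(16)."""
    pr = tp / (tp + fp)                    # Eq. (14)
    recall = tp / (tp + fn)                # Eq. (15)
    f = 2 * pr * recall / (pr + recall)    # Eq. (16)
    return pr, recall, f

# Hypothetical counts for one image series
pr, recall, f = detection_scores(tp=8, fp=2, fn=2)  # 0.8, 0.8, 0.8
```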

The proposed method achieves an average precision rate of 0.86 and recall of 0.89. The highest recall is 0.92, for coronal images, which have a clearer separation between the bone structure of the calcaneus and the background than transverse and sagittal images. The sagittal images have the lowest recall, owing to the ambiguity between the fracture line and the bone structure.

**Table 4.** Detection result performance.


#### **4. Conclusions**

This paper presents a new method for automatically segmenting and detecting calcaneal fractures in CT images for real-time applications. As the results show, the proposed algorithm detects calcaneal fractures with relatively high accuracy. Using the proposed algorithm, bone fractures can be highlighted in the processed images, which can help physicians to analyze CT images better and increases the likelihood of capturing the true fracture condition. In addition, the proposed method can estimate the distance of fracture separation and the angle between broken bone pieces, as well as other quantitative fracture assessments, which may not be easily obtained through visual inspection. The proposed algorithm performs fracture detection automatically, processing more than one hundred images per second. Thus, this method can help physicians to reduce decision-making and diagnostic time, which is very important in traumatic calcaneus injury.

**Author Contributions:** W.R. contributed to the conception of the study; wrote the manuscript; performed the experiments and data analyses; and contributed significantly to the algorithm design and manuscript preparation. W.-J.W. helped perform the analysis and contributed to constructive discussions.

**Funding:** This research received no external funding.

**Conflicts of Interest:** The authors declare no conflict of interest.

#### **References**



