Article

Automatic Retrieval of Shoeprints Using Modified Multi-Block Local Binary Pattern

1 Department of Computer Engineering, Karadeniz Technical University, Ortahisar, Trabzon 61080, Turkey
2 Department of Computer Science, VSB–Technical University of Ostrava, 17. Listopadu 2172/15, 708 00 Poruba, Ostrava, Czech Republic
* Author to whom correspondence should be addressed.
Symmetry 2021, 13(2), 296; https://doi.org/10.3390/sym13020296
Submission received: 21 January 2021 / Revised: 2 February 2021 / Accepted: 6 February 2021 / Published: 9 February 2021
(This article belongs to the Section Computer)

Abstract

A shoeprint is a valuable clue found at a crime scene and plays a significant role in forensic investigations. In this paper, in order to preserve the local features of a shoeprint image and keep each pattern within a block, a novel automatic method is proposed, referred to as Modified Multi-Block Local Binary Pattern (MMB-LBP). In this method, shoeprint images are divided into blocks according to two different models. The histograms of all blocks of the first and second models are measured separately and stored in the first and second feature matrices, respectively. The performance of the proposed method was evaluated by comparison with state-of-the-art methods. The evaluation criteria are the successful retrieval rates obtained using the best match score at rank one and the cumulative match score for the first five matches. The comparison results indicated that the proposed method outperforms the other methods in retrieving both complete and incomplete shoeprints: it retrieved 97.63% of complete shoeprints, 96.5% of incomplete toe shoeprints, and 91.18% of incomplete heel shoeprints. Moreover, the experiments showed that the proposed method is significantly more resistant to rotation, salt and pepper noise, and Gaussian white noise distortions than the other methods.

1. Introduction

A crime scene is a place where a criminal commits actions that are against the law; such a place contains valuable evidence, signs, and indications that can be used for investigating the crime. Indeed, a crime scene is considered the source of facts and information related to the crime and the criminal. When the evidence and signs are properly and systematically investigated, they can help investigators identify the criminal(s). Edmond Locard's exchange principle proposes that perpetrators almost always leave trace evidence behind at the crime scene; at the same time, they also leave with something from the crime scene [1]. Given the importance and decisive role of the signs and evidence left at the crime scene, they can be used to establish the occurrence of the crime and to prove the innocence of people who have been unduly accused.
The most common types of evidence that remain at a crime scene include fingerprints, blood, hair, and shoeprints. Of these clues, fingerprints, blood, and hair have been studied extensively, and effective methods have been proposed for identifying them [2,3,4,5]. Moreover, in recent years, criminals have tried to avoid leaving trace evidence that could be used by forensic experts to identify them (e.g., covering their faces and wearing gloves to avoid leaving fingerprints). However, criminals are usually not aware that their shoeprints can be used as trace evidence. Studies indicate that shoe marks are as frequent as fingerprints at crime scenes [6]; moreover, 35% of crime scenes include shoeprints [7].
When investigating a crime scene, sampling should be done on the ground, at the level where shoe marks are left [8]. Shoe marks appear in three-dimensional (3D) form when the ground surface is soft (such as mud); otherwise, they appear in 2D form. Using shoeprints to identify criminals has attracted researchers' attention; therefore, semi-automatic and fully automatic methods have been proposed, which are reviewed in Section 2.
Effective methods should be developed that can automatically match shoeprint images with shoeprint traces in databases; this would allow more efficient exploitation of shoeprints left at crime scenes and would significantly enhance the process of identifying criminals. The main contributions of this paper are:
  • Retrieving shoeprints using Ojala's Local Binary Pattern (LBP) [9] for the first time.
  • A novel Modified Multi-Block Local Binary Pattern (MMB-LBP) method is proposed for shoeprint retrieval.
  • A detailed comparison of the proposed method with a number of related methods, with and without the presence of rotation, salt and pepper noise, and Gaussian white noise distortions.
The rest of the paper is organized as follows: Section 2 reviews the related works. Section 3 describes LBP and MB-LBP. Section 4 presents the proposed method, including preprocessing, feature extraction, and shoeprint matching. Section 5 describes the dataset and evaluation protocol, Section 6 presents the empirical results and discussions, and Section 7 draws the conclusions.

2. Related Works

Early studies on identifying shoeprints were mainly conducted without using pattern detection techniques [10,11,12,13]. Table 1 summarizes the literature on shoeprint or shoe mark recognition, from the earliest to the most recent studies. For each study, the table lists the features used, the reference and query database images, and whether translation, rotation, scale, noise distortions, and partial shoeprints were investigated. A coding system for shoeprint patterns, which allocates numerical and alphabetical values to different elements of the patterns, was presented in [14]. As the number of new patterns increased significantly, a two-digit prefix and a three-digit numeric suffix were added to the patterns, and the system was thus expanded and improved. In [12], a semi-automatic method was proposed for describing shoe soles via certain geometric patterns, such as zigzags, circles, squares, and letters; using these patterns, unknown shoeprints were compared with the shoeprints available in the database. Since describing these shapes was challenging and time-consuming, researchers were motivated to devise pattern detection techniques for identifying shoeprints.
As a case in point, in 1993, a database was produced in the Netherlands that included 14,000 shoeprint samples belonging to three categories: suspect shoeprints, shoeprints left at crime scenes, and shoeprints available in shoe stores. Geradts et al. proposed an algorithm for classifying this database [53]. The algorithm automatically classifies the patterns of the external parts of shoe soles: it first divides shoe outlines into separate profiles and computes Fourier features for these profiles, and then selects the best Fourier features for classification by neural networks. In [15,16], fractals were used for describing shoeprints, and a mean square noise error measure was used for the final matching. Using the Fourier transform, De Chazal et al. introduced a method for automatic classification of shoeprint images, evaluated on 476 complete images [17]. The images belonged to 140 groups, each including two (or more) shoeprint samples. Owing to the rotation and translation invariance of the Fourier transform, this method performed well under translation and rotation changes on larger datasets. The images used in this method were of high quality, and noisy images were not considered.
Pavlou et al. proposed an automatic classification system for shoe patterns in [27], which was based on the local shape structure of the patterns. The features and descriptors of the selected patterns were affine-invariant and consequently resistant to rotation and relative translation. The abundance and locality of these features allow accurate detection and identification of incomplete shoeprint traces. Gueham et al. introduced a technique in [25] for the automatic identification of shoeprint images. They used the Mellin transform to produce features invariant to translation, rotation, and scale. First, the fast Fourier transform of the image is computed, and the high and low frequencies are filtered out of the result. The result is then mapped to log-polar coordinates, and a second fast Fourier transform is computed. In this method, a two-dimensional correlation is used as the similarity criterion. According to their report, the algorithm performed well under scale distortion and noisy conditions; moreover, it had remarkable capabilities in identifying partial shoeprints.
Using Hu moment invariants, AlGarni et al. developed an automatic method in [24] for matching shoeprints; the moment features are invariant to translation, scale, and rotation. Dardi et al. introduced a descriptor based on the Mahalanobis distance for retrieving shoeprint traces [37]. This descriptor operates on geometrical pixel structures: a block distance matrix is computed for each shoeprint from the mean and variance of the image density, with this distance being the Mahalanobis distance, and the descriptor then measures the power spectrum density. In this method, the correlation coefficient was used for matching the queried image with the database images; the database images are ranked by their degree of similarity to the queried image and returned.
In [28], a method for automatic identification of shoeprints using directional properties of shoe sole patterns was proposed. In this method, co-occurrence matrices, the Fourier transform, and a directional matrix were used for extracting features that match the direction of shoeprint patterns. In [29], Nibouche et al. introduced a method for retrieving rotated incomplete shoeprints using a combination of interest points and the Scale Invariant Feature Transform (SIFT). In this method, the interest points of the shoeprint image were identified using the Harris-Laplacian feature detector, and the produced features were then coded by SIFT. In the matching task, the random sample consensus (RANSAC) method was used for estimating the transform model and producing inliers whose point-by-point Euclidean distances fall under a threshold.
In [18], Zhang et al. proposed an automatic retrieval system based on the edge information of shoeprint patterns, in which the directions of the edges in the shoeprint are described by a histogram. First, the Canny edge detector was used to extract the edge information of the shoeprint; then, the extracted information was quantized and the histogram of the shoeprint image was produced. In [41], Tang et al. proposed a shoeprint image retrieval method based on clustering. In this method, to enhance retrieval speed, the reference database is clustered based on the sole patterns of the shoeprints. The geometrical shapes in shoeprints, such as segments, circles, and ovals, are used as shoeprint features; these features are then structurally clustered into attributed relational graphs (ARGs). In [32], a local adaptation of the histogram of the Radon transform was used. That is, the shoeprint image is decomposed into connected components and local descriptors; then, to find the best local matching between connected components and the similarity between two images, the average local similarity degree was used.
In [44], Kong et al. proposed a shoeprint identification method in which Gabor textural features and Zernike moments were used to describe the shoeprint image, and the degree of similarity between images was measured based on these features. In [33], Wei et al. used SIFT for detecting noisy and incomplete shoeprints. Different scale-spaces were used for detecting local maxima, and the local maximum features were then used for matching shoeprint images. In [34], Wei et al. used core point alignment for retrieving shoeprints. In this method, the contours that most reliably approximate the left and right margins of the shoeprint image are selected, and the concave points along the left and right margins are determined as the core points of the image. Finally, the shoeprint image is divided into circular sections; the moments of each section are measured, and the Euclidean distance is used to determine the similarity between two shoeprint images.
Kortylewski et al. [45] introduced an unsupervised shoeprint retrieval algorithm for noisy environments. In this algorithm, local rotation in the image is estimated by fitting a periodic pattern, and the rotated part of the image is normalized accordingly. The local Fourier transform of that section of the image is then computed, the patterns are divided, and the frequency-range features at each position where the periodic pattern is fixed are used for matching shoeprints. Matching is carried out by comparing the Fourier transforms of the periodic patterns. The reported performance indicates that the method is resistant to noise, but it can retrieve shoeprints only when they contain periodic patterns.
Using the Gabor transform, Patil et al. proposed an automatic shoeprint matching method in [30] that is invariant to rotation and intensity. To extract shoeprint image features, they applied Gabor filters at eight different angles. Among the eight Gabor filter responses, the four images with the highest energy are selected; these four images are then divided into 16 × 16-pixel blocks, and their average variance is taken as the feature vector. In [35], Almaadeed et al. developed a method for retrieving incomplete shoeprints using multiple point-of-interest detectors and SIFT descriptors. To make the method scale-invariant and responsive to blob-like structures, multi-scale Harris and Hessian detectors were used; to make it rotation-invariant, the SIFT descriptor was used for describing the interest points. Finally, by combining the advantages of the two detectors, the queried image is matched.
In [48], shoeprint retrieval was carried out based on the similarity between hybrid features composed of global and local features. The ranking procedure includes an opinion score granted by a forensic expert, which is essentially a relevancy score of the shoeprint with respect to the query. In [50], using the blocked sparse representation technique, the queried image was divided into two blocks, and two sparse representations were extracted by Wright's sparse representation. Fourier transforms, Gabor transforms, multi-scale Hessian-Harris detectors, and SIFT descriptors were applied to extract the global features, local features, corners, and rotation information of the shoeprint image, respectively.

3. Local Binary Pattern

Local Binary Pattern (LBP) is a well-known feature extraction and texture classification method [9]. Its good properties, such as discriminative power, invariance to uniform gray-scale changes, implementation simplicity, and computational speed, have led to its extensive use. The main rationale for using LBP to detect shoeprints is that shoeprint images are composed of a combination of several sub-patterns that can be properly described by this method.
The performance of LBP was first examined on the eight neighbors of a pixel in the form of a 3 × 3 square operator. The central pixel is taken as the threshold value and compared with the values of the eight neighboring pixels to produce an 8-bit code: if the value of a neighboring pixel is greater than or equal to the threshold, it is replaced with 1 in the binary code; otherwise, it is replaced with 0. Each digit of the binary code is then multiplied by its positional value, and the sum is the LBP value for that pixel. Applying this 3 × 3 operator to all image pixels produces an image of the same size. Let (x, y) be the coordinates of a pixel of the input image; then LBP_{P,R} is computed as in (1). Figure 1 shows an example of the LBP operator.
$$\mathrm{LBP}_{P,R}(x, y) = \sum_{i=0}^{P-1} S(g_i - g_c)\, 2^i \tag{1}$$

$$S(t) = \begin{cases} 1, & t \geq 0 \\ 0, & t < 0 \end{cases} \tag{2}$$
where P is the number of neighboring pixels (equal to 8 here), R denotes the distance between the central pixel and its neighbors (equal to 1 here), g_c stands for the value of pixel (x, y), and g_i denotes the value of its i-th neighbor.
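To make the operator concrete, the following is a minimal NumPy sketch of the P = 8, R = 1 operator in (1) and (2). The neighbor ordering, and hence the bit order of the resulting code, is an assumed convention, since the text does not fix it; border pixels are left at zero.

```python
import numpy as np

def lbp_8_1(img):
    """LBP with P = 8 neighbors at radius R = 1, as in Eqs. (1) and (2)."""
    img = np.asarray(img, dtype=np.int32)
    h, w = img.shape
    # Neighbor offsets (row, col); starting neighbor and direction are an
    # assumed convention, since the text does not fix the bit order.
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    center = img[1:-1, 1:-1]
    code = np.zeros_like(center, dtype=np.uint8)
    for i, (dy, dx) in enumerate(offsets):
        neighbor = img[1 + dy: h - 1 + dy, 1 + dx: w - 1 + dx]
        # S(g_i - g_c) contributes bit i of the code.
        code |= (neighbor >= center).astype(np.uint8) << i
    out = np.zeros((h, w), dtype=np.uint8)  # border pixels stay 0
    out[1:-1, 1:-1] = code
    return out
```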

3.1. Multi-Block Local Binary Pattern

Multi-Block Local Binary Pattern (MB-LBP) is a developed variant of LBP, which has been proposed in different forms [54,55,56]. In this model, to extract image features, the image is first divided into n areas, i.e., R0, R1, ..., Rn−1. The LBP operator is then applied independently to each area. Next, the histograms of the n areas are measured and concatenated into a single feature vector. Figure 2 shows the result of applying MB-LBP to a sample shoeprint image.
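As a sketch of this idea, MB-LBP can be implemented by applying LBP inside each region and concatenating the per-region histograms (reusing the lbp_8_1 sketch above; the 32-pixel block size is only an illustrative default):

```python
import numpy as np

def mb_lbp_features(img, block=32):
    """MB-LBP: LBP inside each region, histograms concatenated into one vector."""
    feats = []
    for r in range(0, img.shape[0], block):
        for c in range(0, img.shape[1], block):
            region = img[r:r + block, c:c + block]
            hist, _ = np.histogram(lbp_8_1(region), bins=256, range=(0, 256))
            feats.append(hist)
    return np.concatenate(feats)
```

Note that this variant applies LBP inside each block; Section 4.6 argues that doing so produces false features at block borders, which is what motivates the modified scheme proposed in this paper.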

4. The Proposed Technique for Shoeprint Retrieval

An overview of the proposed method for shoeprint retrieval is given in Figure 3. The method consists of three stages: preprocessing, feature extraction, and shoeprint matching. In the first stage, preprocessing operations are performed on the shoeprint images so that they are prepared for the feature extraction stage; these operations include noise removal, rotation, and scale change. In the feature extraction stage, features of the preprocessed image are extracted using the proposed MMB-LBP method. In the matching stage, the features extracted from the queried shoeprint image are compared with the features of the reference images via the Chi-square test; the results of the comparison are then ranked so that the shoeprint image with the highest similarity is placed at the top of the list. Finally, the correct retrieval rate is computed using the best match score at rank one and the cumulative match score for the first five matches.

4.1. Preprocessing

The preprocessing stage is vital for shoeprint images before the feature extraction stage. Figure 4 demonstrates the conducted steps and procedures in the preprocessing stage.

4.2. Noise Elimination

The first task in the preprocessing stage is to remove the noise created while recording shoeprints and scanning images. Hence, the color shoeprint image is converted into a grey image, and the noise is removed using a median filter with a 5-pixel neighborhood. However, larger noise spots may exist in the shoeprint images that are not eliminated by this median filter. Thus, the Otsu method is used for thresholding the shoeprint image, and the inverse of the output image produced by Otsu thresholding is then combined with the grey image. Finally, the pixels forming the shoeprint are isolated from the image background. Figure 4a shows the result of noise elimination for a shoeprint image. It should be noted that applying the median filter before Otsu thresholding suppresses isolated pixels and strengthens the values of pixels neighboring the shoeprint member; consequently, the image segmentation obtained by Otsu thresholding is improved.
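One plausible reading of this pipeline, sketched with OpenCV, is given below; it assumes a dark print on a light background, and implements the combination of the inverted Otsu output with the grey image as an element-wise mask:

```python
import cv2

def remove_noise(bgr_image):
    """Grayscale -> 5 x 5 median filter -> Otsu mask -> isolated shoeprint."""
    gray = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2GRAY)
    smoothed = cv2.medianBlur(gray, 5)      # suppresses salt-and-pepper specks
    # THRESH_BINARY_INV plays the role of "the inverse of the Otsu output",
    # assuming a dark print on a light background.
    _, mask = cv2.threshold(smoothed, 0, 255,
                            cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)
    # Keep grey values only where the mask marks the shoeprint member.
    foreground = cv2.bitwise_and(smoothed, smoothed, mask=mask)
    return foreground, mask
```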

4.3. Rotating Image

Input images of traces and shoeprints cannot be expected to arrive at a specific angle; therefore, to make the proposed method invariant to rotation, all shoeprint images are rotated to an identical orientation before feature extraction. Hence, after the noise elimination stage, the shoeprint image is rotated to a vertical orientation. Some parts of the shoeprint might be lost at the margins of the image during rotation; hence, empty space is added to the sides of the original image to prevent the loss of image sections. Figure 4b shows the result of adding space to the sides of the shoeprint image.
The Karhunen-Loeve method [57] is used for automatic rotation. This method relies on the eigenvectors and the center of gravity of the image: the gravity center is first computed from the binary image. Then, the covariance matrix between the row and column dimensions of all shoeprint pixels is obtained according to (3); the result indicates how the two dimensions vary in relation to one another.
$$K = \begin{bmatrix} T_{xx} & T_{xy} \\ T_{xy} & T_{yy} \end{bmatrix} \tag{3}$$

$$T_{xx} = \sum_{i=0}^{N-1} \sum_{j=0}^{M-1} I(i, j)\,(i - x_c)^2 \tag{4}$$

$$T_{xy} = \sum_{i=0}^{N-1} \sum_{j=0}^{M-1} I(i, j)\,(i - x_c)(j - y_c) \tag{5}$$

$$T_{yy} = \sum_{i=0}^{N-1} \sum_{j=0}^{M-1} I(i, j)\,(j - y_c)^2 \tag{6}$$
where xc and yc denote the coordinates of the gravity center of the shoeprint image. From this matrix, the eigenvalues and eigenvectors are computed. Then, the sine and cosine of the angle of the eigenvector corresponding to the largest eigenvalue are obtained through (7) and (8); this is the angle required for rotating the image. The rotation matrix is formed from the obtained angle, and the positions of the pixels in the new image are computed from the rotation matrix and the gravity-center coordinates through (9) and (10). Figure 4c shows the rotated shoeprint image; in the obtained image, the eigenvector corresponding to the largest eigenvalue is perpendicular to the X axis.
$$\sin\theta = \frac{T_{yy} - T_{xx} + \sqrt{(T_{yy} - T_{xx})^2 + 4T_{xy}^2}}{\sqrt{8T_{xy}^2 + 2(T_{yy} - T_{xx})^2 + 2(T_{yy} - T_{xx})\sqrt{(T_{yy} - T_{xx})^2 + 4T_{xy}^2}}} \tag{7}$$

$$\cos\theta = \frac{2T_{xy}}{\sqrt{8T_{xy}^2 + 2(T_{yy} - T_{xx})^2 + 2(T_{yy} - T_{xx})\sqrt{(T_{yy} - T_{xx})^2 + 4T_{xy}^2}}} \tag{8}$$
$$x_2 = (x_1 - x_c)\cos\theta + (y_1 - y_c)\sin\theta + x_c \tag{9}$$

$$y_2 = (y_1 - y_c)\cos\theta - (x_1 - x_c)\sin\theta + y_c \tag{10}$$
where x1 and y1 denote the pixel coordinates in the input image, and x2 and y2 refer to the corresponding pixel coordinates in the output image after rotation.
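The whole orientation-normalization step can be sketched compactly with an eigendecomposition, which is numerically equivalent to evaluating (3)-(10); the rotation sign convention below is an assumption and may need flipping for a particular coordinate system:

```python
import numpy as np
from scipy import ndimage

def normalize_rotation(binary_print):
    """Align the shoeprint's principal axis with the vertical image axis."""
    coords = np.column_stack(np.nonzero(binary_print)).astype(float)
    coords -= coords.mean(axis=0)          # shift origin to the gravity center
    K = coords.T @ coords                  # 2 x 2 scatter matrix, as in Eq. (3)
    eigvals, eigvecs = np.linalg.eigh(K)
    v = eigvecs[:, np.argmax(eigvals)]     # eigenvector of the largest eigenvalue
    # Angle between the principal axis and the row (vertical) axis.
    theta = np.degrees(np.arctan2(v[1], v[0]))
    # Rotate so that the principal axis becomes vertical; the sign convention
    # may need flipping depending on the image coordinate system in use.
    return ndimage.rotate(binary_print.astype(float), theta, reshape=True, order=0)
```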

4.4. Scale Change

In this step, the scale of all shoeprint images is changed into a fixed 256 × 128 format. First, the shoeprint in the image is enclosed in a frame and cut out of the whole image. To do so, the column indices of the shoeprint pixels closest to the left and right edges of the image are found, and the frame width is the difference between them. Next, to obtain the frame length, the row indices of the shoeprint pixels nearest to the upper and lower edges of the image are found. With these coordinates, the shoeprint can be separated from the image: the extra margins are cut away and only the shoeprint frame is preserved. Figure 4d shows the result of removing the margins of the shoeprint image.
Now the shoeprint image can be rescaled to 256 × 128. The image dimensions are measured first: if the image length is greater than twice its width, the image is resized based on its length; otherwise, it is resized based on its width. That is, if resizing is based on length, the shoeprint image is scaled so that its length becomes 256 pixels, with its width changing proportionally. If resizing is based on width, the image is scaled so that its width becomes 128 pixels, with its length changing proportionally. The resulting image is then placed on the 256 × 128 frame such that the axis dividing its columns into two equal parts matches the axis dividing the columns of the 256 × 128 frame; as a result, the shoeprint stands vertically in the 256 × 128 frame. In this way, all images are mapped to an identical frame, which makes the proposed method independent of scale. Figure 4e illustrates the result of changing the scale of the shoeprint image.
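A minimal sketch of this crop-and-resize step follows. Placing the resized print at the top of the 256 × 128 canvas is an assumption, since the text only fixes the horizontal centering:

```python
import cv2
import numpy as np

def normalize_scale(gray_print, out_h=256, out_w=128):
    """Crop to the shoeprint bounding box, then fit it into a 256 x 128 frame."""
    ys, xs = np.nonzero(gray_print)
    crop = gray_print[ys.min():ys.max() + 1, xs.min():xs.max() + 1]
    h, w = crop.shape
    # Fit by length when the print is taller than twice its width, else by width.
    scale = out_h / h if h > 2 * w else out_w / w
    resized = cv2.resize(crop, (int(round(w * scale)), int(round(h * scale))))
    canvas = np.zeros((out_h, out_w), dtype=gray_print.dtype)
    rh, rw = resized.shape[:2]
    x0 = (out_w - rw) // 2          # match the vertical center axes of both frames
    canvas[:rh, x0:x0 + rw] = resized
    return canvas
```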
After the vertical rotation of the image, the shoeprint heel may point upwards, whereas our purpose is for the shoeprint toe to point upwards. Hence, at this stage, it is assumed that in shoeprint patterns the contact surface of the upper half of the shoe is larger than that of the lower half; in other words, in a binary image of the shoeprint, the density of black pixels in the upper half should be higher than in the lower half. Accordingly, after the shoeprint image is made vertical, these densities are measured, and if the density of the upper half is lower than that of the lower half, the image is rotated by 180 degrees so that the heel is placed in the lower part of the image. However, examination of the rotated shoeprint images indicates that this assumption does not always hold: in about 15% of cases, the opposite is observed, and in such cases the images are rotated manually so that the heel points downwards and the toe upwards.
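The density-based flip described above reduces to a few lines, with nonzero (foreground) pixels standing in for the black pixels of the binary print:

```python
import numpy as np

def orient_toe_up(canvas):
    """Rotate by 180 degrees if the lower half holds more print than the upper."""
    fg = canvas > 0                       # foreground pixels of the binary print
    half = canvas.shape[0] // 2
    if fg[:half].sum() < fg[half:].sum():
        canvas = np.rot90(canvas, 2)      # heel goes to the bottom of the frame
    return canvas
```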

4.5. Extracting Features

In machine vision, the features extracted from images should express the maximum characteristics of those images. Therefore, to enhance the identification accuracy of LBP so that the extracted features better represent the shoeprint image, local features should be preserved when extracting image histograms. For this purpose, the Modified Multi-Block Local Binary Pattern method is proposed.

4.6. Modified Multi-Block Local Binary Pattern

In the Modified Multi-Block Local Binary Pattern (MMB-LBP) method, shoeprint images are divided into blocks, because not dividing them would mean having only one histogram for the entire image. The major weakness of a single histogram is its lack of sensitivity to the location of shoeprint image features. On the other hand, dividing shoeprint images into blocks before applying the LBP operator produces false features from the dark lines at block margins and destroys patterns located at those margins. Hence, to eliminate this weakness and enhance sensitivity, the LBP operator is first applied to the entire image, producing an image of the same size. Then, by dividing the result of the LBP operator into blocks, the histograms of those blocks are extracted. Consequently, feature loss and false features at block margins are avoided. Figure 5 shows the result of applying LBP to a shoeprint image.
Since blocking is applied to a shoeprint image automatically, without human intervention, shoeprint patterns are likely to be fragmented. That is, if a block boundary passes through one or several shoeprint patterns, part of a pattern falls in one block and the rest in the neighboring block or blocks. A pattern fragment located in one block would be wrongly treated as a complete pattern in that block. Hence, when a pattern spans two or more neighboring blocks, it will not be accurately identified, which has a destructive impact when shoeprint images are matched.
Hence, to accurately extract the features of patterns located at block margins and to reduce the destructive impact of pattern fragments during matching, the shoeprint image in the proposed method is blocked according to two different models: if a pattern lies at the margin of a block in one blocking model, it will be located entirely within the middle of a block in the other model. Since the features of such a pattern are then completely contained in a block of the second model, the destructive impact of the pattern fragment is avoided.
In shoeprint images, blocking begins from two different positions and continues without overlapping until the end of the image. In the first blocking model, blocking with 32 × 32-pixel blocks begins at position 1 × 1 in the upper-left corner of the image and is carried out up to position 256 × 128 in the lower-right corner. Hence, the region from position 1 × 1 to position 32 × 32 is the first image block, and the second block spans from position 1 × 33 to position 32 × 64.
This blocking process continues without overlapping up to the lower-right corner of the image, so that four blocks per row and eight blocks per column are produced; in total, 32 blocks are produced from one shoeprint image in the first blocking model. In the second blocking model, blocking begins at position 17 × 17 in the upper-left part of the image and continues up to position 240 × 112 in the lower-right part; hence, block one of the second model spans from position 17 × 17 to position 48 × 48. This procedure continues in the same way as in the first blocking model, so that three blocks per row and seven blocks per column are created. In total, 53 blocks are produced from a shoeprint image according to both models. Figure 6 depicts the two blocking models of a shoeprint image; the blue lines in the figure are only for illustration.
Next, for extracting image features, the histograms of all blocks of the first model are separately measured and stored in the first feature matrix. Then, the histograms of all of the blocks of the second blocking model are stored in the second feature matrix. Image features include two feature matrices; the first matrix has 8 × 4 elements where each element consists of a histogram with 256 values. Moreover, the second matrix has 7 × 3 elements where each element consists of a histogram with 256 values. Figure 7a illustrates an overview of the corresponding histograms based on the first blocking model and Figure 7b shows the overview of the corresponding histograms based on the second blocking model for the shoeprint image. Figure 8 demonstrates the distinction between the texture features of two neighboring blocks from two different shoeprint images. As shown in this figure, it is obvious that the two corresponding blocks from two different images have dissimilar histograms. Indeed, these histograms are used for matching the queried shoeprint image with the reference images. Hence, in the proposed method, histograms of each area of the shoeprint image are produced according to (11) and (12).
$$FM_{1,i,j} = \mathrm{Hist}\big(\mathrm{LBP}(i \cdot SB + 1 : i \cdot SB + SB,\; j \cdot SB + 1 : j \cdot SB + SB)\big), \quad i = 0, 1, \ldots, 7, \;\; j = 0, 1, \ldots, 3 \tag{11}$$

$$FM_{2,i,j} = \mathrm{Hist}\big(\mathrm{LBP}(i \cdot SB + 17 : i \cdot SB + SB + 16,\; j \cdot SB + 17 : j \cdot SB + SB + 16)\big), \quad i = 0, 1, \ldots, 6, \;\; j = 0, 1, \ldots, 2 \tag{12}$$
where i and j index the block rows and columns, SB denotes the block size and is equal to 32, and FM1 and FM2 refer to the first and second feature matrices, respectively.
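Putting (11) and (12) together, a sketch of the MMB-LBP feature extraction, reusing the lbp_8_1 sketch from Section 3, might look as follows; block positions are zero-indexed here, so offset 16 corresponds to position 17 in the text:

```python
import numpy as np

def mmb_lbp_features(img, SB=32):
    """MMB-LBP: LBP on the whole image first, then two block decompositions."""
    codes = lbp_8_1(img)                 # LBP image of the 256 x 128 input

    def block_hists(top, left, n_rows, n_cols):
        hists = np.empty((n_rows, n_cols, 256))
        for i in range(n_rows):
            for j in range(n_cols):
                blk = codes[top + i * SB : top + (i + 1) * SB,
                            left + j * SB : left + (j + 1) * SB]
                hists[i, j], _ = np.histogram(blk, bins=256, range=(0, 256))
        return hists

    FM1 = block_hists(0, 0, 8, 4)    # first model: 8 x 4 = 32 blocks from (1,1)
    FM2 = block_hists(16, 16, 7, 3)  # second model: 7 x 3 = 21 blocks from (17,17)
    return FM1, FM2
```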

4.7. Shoeprint Image Matching

The feature matrices extracted in the feature extraction stage are used in this stage for matching. As mentioned above, shoeprint images are divided into blocks, and each block or area has a different degree of significance. Hence, to distinguish high-significance areas of the shoeprint image from low-significance areas with regard to pattern density, the weight matrices W1 and W2 are introduced, as given in Table 2. As a result, when histograms are compared, similarities and differences in high-significance areas of the images are better highlighted: similarity in high-significance areas indicates similarity of the shoeprints, and difference in those areas indicates a lack of similarity between them. The Chi-square test in (13) is used as the similarity criterion for comparing corresponding areas of the queried image and a reference image. Blocks of the queried image are compared with those of the reference image, and the block-comparison results are summed to obtain the degree of similarity between the two images; a sum close to zero is interpreted as similarity between the queried image and the database image. If the feature matrices of the queried image and the reference image are called Q and P, respectively, the Chi-square test is defined as follows:
$$\chi_w^2(P, Q) = \sum_{i,j} w_{i,j}\, \frac{(p_{i,j} - q_{i,j})^2}{p_{i,j} + q_{i,j}} \tag{13}$$
where i denotes the index of the histogram corresponding to an area, j denotes the index among the histogram bins, and wi,j stands for the significance coefficient of that area.
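A sketch of the weighted Chi-square comparison in (13) is given below. W is the per-block weight matrix of Table 2 (not reproduced here), and bins that are empty in both histograms are skipped to avoid division by zero. The total distance between two shoeprints would then be the sum over both feature matrices, e.g., chi_square_distance(P1, Q1, W1) + chi_square_distance(P2, Q2, W2).

```python
import numpy as np

def chi_square_distance(P, Q, W):
    """Weighted Chi-square distance of Eq. (13) between two feature matrices."""
    P = P.astype(float)
    Q = Q.astype(float)
    num = (P - Q) ** 2
    den = P + Q
    # 0/0 bins (empty in both histograms) contribute nothing to the distance.
    terms = np.divide(num, den, out=np.zeros_like(num), where=den > 0)
    # W holds one significance weight per block; broadcast it over the 256 bins.
    return float(np.sum(W[..., None] * terms))
```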

5. Results and Discussion

In this study, a database of participants from two cities, namely Miandoab in Iran and Trabzon in Turkey, referred to as the Iranian–Turkish Shoeprints Database (ITSP DB, available at https://ceng2.ktu.edu.tr/~itspdb, accessed on 14 December 2020), is used. The database includes five separate images for each shoe and, in total, 950 shoeprint images; shoes of different sizes were used. Care was taken while recording samples that no two recorded samples of the same shoe were identical, because each contact of a shoe with the surface leaves a partially different impression, especially at its margins. Figure 9 shows samples of the shoeprint images from the ITSP DB. A complete description of this database can be found in [50].
The evaluations were carried out on the database with 190 categories, each including five samples. One sample from each shoeprint category was selected and registered in the reference database; the remaining four samples from each category, i.e., 760 shoeprint images, were used as query images for testing. In other words, in all of the evaluations, 20% of the shoeprint images were allocated to training data and 80% to testing data. To include partial and incomplete shoeprints in the evaluations, the complete images were divided into two sections, i.e., heel and toe. Figure 10 illustrates the heel and toe incomplete shoeprints obtained from the complete shoeprints.
To better evaluate the proposed method, the queried images were also examined in the presence of rotation distortions, salt and pepper noise, and Gaussian white noise. In the following sections, the performance of the proposed method is compared with those of the LBP, MB-LBP, Patil [30], and Almaadeed [35] methods in terms of shoeprint retrieval.
The successful retrieval rate of the proposed method is measured by the best match score at rank one and the cumulative match score for the first five matches:
$$\text{Cumulative match score} = \frac{\text{number of accurately retrieved images}}{\text{total query images}} \times 100 \tag{14}$$
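For reference, a minimal sketch of computing this score from ranked retrieval lists; k = 1 gives the best match score at rank one, and k = 5 gives the cumulative match score for the first five matches:

```python
def cumulative_match_score(ranked_lists, true_labels, k=1):
    """Percentage of queries whose correct reference appears within the top k."""
    hits = sum(label in ranked[:k]
               for ranked, label in zip(ranked_lists, true_labels))
    return 100.0 * hits / len(true_labels)
```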

6. Evaluation of the Performance of the LBP, MB-LBP, and MMB-LBP Methods

6.1. Performance of LBP versus MB-LBP

The first test evaluated and compared the performance of LBP with that of MB-LBP at different block sizes. To demonstrate that MB-LBP performs better than LBP, both methods were implemented, and the cumulative match scores of their performances were compared, as depicted in Figure 11. As shown, the MB-LBP achieved a retrieval score of at least 81% at every block size, i.e., 8 × 8, 16 × 16, 32 × 32, 64 × 64, and 128 × 128, and its shoeprint retrieval performance was better than that of the LBP. The reason for the better performance of the MB-LBP is that it extracts local features of the shoeprint images. The comparison results in Figure 11 indicate that, regardless of block dimensions, the MB-LBP has a higher cumulative match score than the LBP. Therefore, it can be argued that extracting several histograms for one shoeprint image is better than extracting a single histogram for the entire image.

6.2. Performance of the MB-LBP versus the MMB-LBP

The purpose of this test was to compare the performance of MB-LBP with that of the proposed MMB-LBP. To show that blocking the shoeprint image for histogram extraction after applying LBP performs better than blocking before applying it, both variants were implemented at different block sizes. The results are shown in Figure 12, which demonstrates that the blocking model applied after the LBP achieves a higher shoeprint retrieval score than the one applied before it.
The difference in performance is attributed to the absence of false features caused by dark lines at the block margins. Based on this result, it can be argued that the problem of a pattern being split between one block and its neighboring block(s) has been resolved. As given in Figure 11, the best retrieval result of the MB-LBP was 96%, obtained at the 32 × 32 block size, whereas the best retrieval score of the MMB-LBP was 97.63%, also obtained at the 32 × 32 block size. This difference of roughly 1.6 percentage points indicates the higher performance of the proposed method; that is, appropriate shoeprint blocking raises the retrieval accuracy above that of the MB-LBP. Examining the other block sizes confirms this: at the 8 × 8 size, the retrieval score is 83% for the MB-LBP versus 90% for the MMB-LBP, and increases of 1% at the 16 × 16 block size and 2% at the 64 × 64 block size are also observed.
The second aim of this test was to investigate the MMB-LBP at different block sizes. Since the block size of the MMB-LBP can have a significant impact on image retrieval performance, this test was also used to decide upon the block size of the proposed method. Hence, the MB-LBP and MMB-LBP were investigated at block sizes of 8 × 8, 16 × 16, 32 × 32, 64 × 64, and 128 × 128. Figure 12 shows the evaluation results at the first rank. As shown, the best shoeprint retrieval score was 97.63%, obtained at the 32 × 32 size.

6.3. Comparison with the Patil and Almaadeed Methods

The first experiment in this category compared the proposed MMB-LBP with the Patil [30] and Almaadeed [35] methods. Table 3 gives the results of this investigation on complete and incomplete shoeprints at different ranks. As can be observed, the cumulative match score of the proposed method at the first rank was 97.63% for complete shoeprints, 96.05% for incomplete toe shoeprints, and 91% for incomplete heel shoeprints. By comparison, at the first rank, 83.29% of complete shoeprint images were retrieved by the Almaadeed method and 64% by the Patil method, and both methods retrieved 74% of incomplete shoeprint images. In the MMB-LBP, the cumulative match score for complete shoeprints reached 100% at the fifth rank, and for incomplete shoeprints it exceeded 97%. In contrast, for the Patil and Almaadeed methods, the best match for incomplete shoeprints at the fifth rank reached 78%.
The low cumulative match score of the Patil method results from shoeprint images that are significantly rotated: the Radon transform could not properly rotate the shoeprint images to a vertical orientation. Moreover, shoeprint images are blocked before feature extraction in the Patil method. Hence, for a fair comparison of the proposed and Patil methods, the preprocessing of the MMB-LBP was used instead of the Radon transform for image rotation in the Patil method.
As shown in Table 3 and Table 4, the proper rotation of the shoeprint image, given the way the Patil method functions, has a remarkable impact on its matching performance. Whereas a cumulative match score of 64% at the first rank was obtained for complete shoeprints with Radon-transform rotation, this score was enhanced to 82% with the proposed preprocessing. Indeed, the cumulative match scores improved under all conditions, such that at the fifth rank the scores for both complete and incomplete shoeprints surpassed 89%. Therefore, it can be argued that the proposed method performed better than the other two methods in matching complete and incomplete shoeprints.
Here, we investigate the rotation invariance of the proposed method. In this test, queried images were randomly rotated clockwise by one of three angles: 15, 30, or 45 degrees. Figure 13 demonstrates the results of this test for the proposed and Patil methods. As shown, the MMB-LBP is resistant to rotation distortions.
It is seen from Figure 13 that the cumulative match scores achieved at the first rank for complete, incomplete toe, and incomplete heel shoeprints are above 97%, 95%, and 91%, respectively. Moreover, at the fifth rank, 100% matching was obtained for complete shoeprints and cumulative match scores over 98% for incomplete shoeprints. On the other hand, as shown in Figure 13 and Table 4, about a 1% matching reduction for complete, incomplete toe, and incomplete heel shoeprints was observed for the Patil method at the first rank, along with a 5% reduction for complete shoeprints and a 9% reduction for incomplete toe and heel shoeprints at the fifth rank. It is worth noting that the cumulative match score of the Almaadeed method here was less than 50% and was therefore not included in the report.

6.4. Evaluations under Distortions and Noises

Here, the resistance of the proposed method to salt and pepper noise and Gaussian white noise is investigated. Database shoeprint images can become contaminated with salt and pepper noise and Gaussian white noise during digitization; hence, the proposed method was investigated under such noisy conditions. The queried shoeprint images were contaminated with salt and pepper noise and Gaussian white noise at different signal-to-noise ratios (SNR), i.e., 15.28, 18.80, 24.82, 26.76, 32.78, and 38.80. The noise variance for a sample shoeprint image at a given signal-to-noise ratio is defined as:
$$\mathrm{SNR\,(dB)} = 20 \log\!\left(\frac{P_s^2}{\sigma_n^2}\right) \tag{15}$$
where Ps denotes the average power of the shoeprint image and σn² refers to the noise variance.
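Inverting (15) gives the noise variance needed to contaminate an image at a target SNR. The sketch below assumes Ps is the mean intensity of the image, since the text does not define the average power precisely:

```python
import numpy as np

def add_gaussian_noise_at_snr(img, snr_db):
    """Add white Gaussian noise whose variance matches a target SNR via Eq. (15)."""
    img = img.astype(float)
    p_s = img.mean()                  # assumed: "average power" = mean intensity
    sigma2 = p_s ** 2 / 10 ** (snr_db / 20.0)  # sigma_n^2 = P_s^2 / 10^(SNR/20)
    noisy = img + np.random.normal(0.0, np.sqrt(sigma2), img.shape)
    return np.clip(noisy, 0, 255).astype(np.uint8)
```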
Figure 14 illustrates images contaminated with salt and pepper noise and Gaussian white noise. Figure 15 and Figure 16 show the results for the proposed, Patil, and Almaadeed methods under noisy conditions. As can be observed in the related tables and figures, the proposed method is resistant to noise. That is to say, under salt and pepper noise and Gaussian white noise at different SNRs, the cumulative match scores of the MMB-LBP for complete, incomplete toe, and incomplete heel shoeprints nearly reached 98%, 95%, and 91%, respectively, at the first rank. Comparatively, under the same noise conditions, the cumulative match scores of the Patil method for complete, incomplete toe, and incomplete heel shoeprints reached 81%, 69%, and 68%, respectively, at the first rank. Furthermore, as shown in Figure 15, as the signal-to-noise ratio decreases, an increasingly drastic reduction in the cumulative match score of the Almaadeed method is observed under salt and pepper noise.
From Figure 16, a reduction of about 10% in the cumulative match score of the Almaadeed method is also observed under Gaussian white noise as the signal-to-noise ratio decreases. Thus, it can be argued that the MMB-LBP performed better than the Patil and Almaadeed methods under both salt and pepper noise and Gaussian white noise conditions.

7. Conclusions

In this paper, a novel MMB-LBP method was proposed for matching shoeprint images automatically. The MMB-LBP was used for extracting the texture features of shoeprint images. The results showed that the proposed method has a higher retrieval success rate than the LBP and MB-LBP. The evaluation results also demonstrated that the proposed method is robust under rotation, Gaussian white noise, and salt and pepper noise, and that it achieves better cumulative match scores than the Patil and Almaadeed methods. The cumulative match scores at the first rank for complete, incomplete toe, and incomplete heel shoeprints were 97%, 96%, and 91%, respectively, in the presence of rotation distortions, salt and pepper noise, and Gaussian white noise. Finally, the cumulative match scores at the fifth rank for complete, incomplete toe, and incomplete heel shoeprints were 100%, over 98%, and 97%, respectively.

Author Contributions

Conceptualization, C.K. and V.V.N.; methodology, S.A. and H.B.J.; software, S.A.; validation, S.A. and H.B.J.; formal analysis, S.A.; investigation, S.A.; resources, H.B.J.; data curation, S.A. and H.B.J.; writing—original draft preparation, S.A.; writing—review and editing, H.B.J.; visualization, S.A.; supervision, C.K. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Locard, E. The analysis of dust traces. Part I, II and III. Am. J. Police Sci. 1930, 1, 276–298, 401–418, 496–514.
  2. Robertson, J.R. Forensic Examination of Hair; CRC Press: London, UK, 2002.
  3. Kaur, R.; Mazumdar, S.G. Fingerprint Based Gender Identification using frequency domain analysis. Int. J. Adv. Eng. Technol. 2012, 3, 295–299.
  4. Buckleton, J.S.; Bright, J.-A.; Taylor, D. Forensic DNA Evidence Interpretation; CRC Press: London, UK, 2016.
  5. Robertson, B.; Vignaux, G.A.; Berger, C.E. Interpreting Evidence: Evaluating Forensic Science in the Courtroom; John Wiley & Sons: West Sussex, UK, 2016.
  6. Bodziak, W.J. Forensic Footwear Evidence; CRC Press: London, UK, 2017.
  7. Girod, A. Shoeprints: Coherent exploitation and management. Presented at the European Meeting for Shoeprint Toolmark Examiners, The Hague, The Netherlands, 23 April 1997.
  8. Bouridane, A. Shoemark recognition for forensic science: An emerging technology. In Imaging for Forensics and Security; Springer: Boston, MA, USA, 2009; pp. 143–164.
  9. Ojala, T.; Pietikäinen, M.; Harwood, D. A comparative study of texture measures with classification based on featured distributions. Pattern Recognit. 1996, 29, 51–59.
  10. Rankin, B. Footwear marks—A step by step review. Forensic Sci. Soc. Newsl. April 1998, 3.
  11. Mikkonen, S.; Astikainen, T. Databased Classification System for Shoe Sole Patterns—Identification of Partial Footwear Impression Found at a Scene of Crime. J. Forensic Sci. 1994, 39, 1227–1236.
  12. Ashley, W. What shoe was that? The use of computerised image database to assist in identification. Forensic Sci. Int. 1996, 82, 7–20.
  13. Mikkonen, S.; Suominen, V.; Heinonen, P. Use of footwear impressions in crime scene investigations assisted by computerised footwear collection system. Forensic Sci. Int. 1996, 82, 67–79.
  14. Birkett, J. Scientific scene linking. J. Forensic Sci. Soc. 1989, 29, 271–284.
  15. Alexander, A.; Bouridane, A.; Crookes, D. Automatic classification and recognition of shoeprints. In Proceedings of the Seventh International Conference on Image Processing and Its Applications (Conf. Publ. No. 465), Manchester, UK, 1999; Volume 2, pp. 638–641.
  16. Bouridane, A.; Alexander, A.; Nibouche, M.; Crookes, D. Application of fractals to the detection and classification of shoeprints. In Proceedings of the 2000 International Conference on Image Processing (Cat. No.00CH37101), Vancouver, BC, Canada, 10–13 September 2000; Volume 1, pp. 474–477.
  17. De Chazal, P.; Flynn, J.; Reilly, R.B. Automated processing of shoeprint images based on the Fourier transform for use in forensic science. IEEE Trans. Pattern Anal. Mach. Intell. 2005, 27, 341–350.
  18. Zhang, L.; Allinson, N. Automatic shoeprint retrieval system for use in forensic investigations. In Proceedings of the 2005 U.K. Workshop on Computational Intelligence (UKCI2005), London, UK, 5–7 September 2005; pp. 137–142.
  19. Pavlou, M.; Allinson, N.M. Automatic extraction and classification of footwear patterns. In Intelligent Data Engineering and Automated Learning–IDEAL; Springer: Berlin/Heidelberg, Germany, 2006; pp. 721–728.
  20. Crookes, D.; Bouridane, A.; Su, H.; Gueham, M. Following the footsteps of others: Techniques for automatic shoeprint classification. In Proceedings of the Second NASA/ESA Conference on Adaptive Hardware and Systems (AHS 2007), Edinburgh, UK, 5–8 August 2007; pp. 67–74.
  21. Su, H.; Crookes, D.; Bouridane, A.; Gueham, M. Local image features for shoeprint image retrieval. In Proceedings of the BMVC, Warwick, UK, 10–13 September 2007.
  22. Gueham, M.; Bouridane, A.; Crookes, D. Automatic recognition of partial shoeprints based on phase-only correlation. In Proceedings of the 2007 IEEE International Conference on Image Processing, San Antonio, TX, USA, 16–19 September 2007; pp. IV-441–IV-444.
  23. Gueham, M.; Bouridane, A.; Crookes, D. Automatic Classification of Partial Shoeprints Using Advanced Correlation Filters for use in Forensic Science. In Proceedings of the 2008 19th International Conference on Pattern Recognition, Tampa, FL, USA, 8–11 December 2008; pp. 1–4.
  24. Algarni, G.; Hamiane, M. A novel technique for automatic shoeprint image retrieval. Forensic Sci. Int. 2008, 181, 10–14.
  25. Gueham, M.; Bouridane, A.; Crookes, D.; Nibouche, O. Automatic recognition of shoeprints using Fourier-Mellin transform. In Proceedings of the 2008 NASA/ESA Conference on Adaptive Hardware and Systems, Noordwijk, The Netherlands, 22–25 June 2008; pp. 487–491.
  26. Pei, W.; Zhu, Y.-Y.; Na, Y.-N.; He, X.-G. Multiscale gabor wavelet for shoeprint image retrieval. In Proceedings of the 2009 2nd International Congress on Image and Signal Processing, Tianjin, China, 17–19 October 2009; pp. 1–5.
  27. Pavlou, M.; Allinson, N.M. Automated encoding of footwear patterns for fast indexing. Image Vis. Comput. 2009, 27, 402–409.
  28. Jing, M.-Q.; Ho, W.-J.; Chen, L.-H. A novel method for shoeprints recognition and classification. In Proceedings of the 2009 International Conference on Machine Learning and Cybernetics, Hebei, China, 12–15 July 2009; Volume 5, pp. 2846–2851.
  29. Nibouche, O.; Bouridane, A.; Gueham, M.; Laadjel, M. Rotation invariant matching of partial shoeprints. In Proceedings of the 2009 13th International Machine Vision and Image Processing Conference, Dublin, Ireland, 2–4 September 2009; pp. 94–98.
  30. Patil, P.M.; Kulkarni, J.V. Rotation and intensity invariant shoeprint matching using Gabor transform with application to forensic science. Pattern Recognit. 2009, 42, 1308–1317.
  31. Li, Z.; Wei, C.; Li, Y.; Sun, T. Research of shoeprint image stream retrival algorithm with scale-invariance feature transform. In Proceedings of the 2011 International Conference on Multimedia Technology, Hangzhou, China, 26–28 July 2011; pp. 5488–5491.
  32. Hasegawa, M.; Tabbone, S. A local adaptation of the histogram radon transform descriptor: An application to a shoe print dataset. In Structural, Syntactic, and Statistical Pattern Recognition; Springer: Berlin/Heidelberg, Germany, 2012; pp. 675–683.
  33. Wei, C.-H.; Li, Y.; Gwo, C.-Y. The Use of Scale-Invariance Feature Transform Approach to Recognize and Retrieve Incomplete Shoeprints. J. Forensic Sci. 2013, 58, 625–630.
  34. Wei, C.-H.; Gwo, C.-Y. Alignment of core point for shoeprint analysis and retrieval. In Proceedings of the 2014 International Conference on Information Science, Electronics and Electrical Engineering, Sapporo, Japan, 26–28 April 2014; pp. 1069–1072.
  35. Almaadeed, S.; Bouridane, A.; Crookes, D.; Nibouche, O. Partial shoeprint retrieval using multiple point-of-interest detectors and SIFT descriptors. Integr. Comput. Eng. 2015, 22, 41–58.
  36. Dardi, F.; Cervelli, F.; Carrato, S. An automatic footwear retrieval system for shoe marks from real crime scenes. In Proceedings of the 6th International Symposium on Image and Signal Processing and Analysis, Salzburg, Austria, 16–18 September 2009; pp. 668–672.
  37. Dardi, F.; Cervelli, F.; Carrato, S. A Texture Based Shoe Retrieval System for Shoe Marks of Real Crime Scenes. In Proceedings of the International Conference on Image Analysis and Processing, Vietri sul Mare, Italy, 8–11 September 2009; Springer: Berlin/Heidelberg, Germany, 2009; pp. 384–393.
  38. Dardi, F.; Cervelli, F.; Carrato, S. A combined approach for footwear retrieval of crime scene shoe marks. In Proceedings of the 3rd International Conference on Imaging for Crime Detection and Prevention (ICDP 2009), London, UK, 3 December 2009; pp. 1–6.
  39. Cervelli, F.; Dardi, F.; Carrato, S. A translational and rotational invariant descriptor for automatic footwear retrieval of real cases shoe marks. In Proceedings of the 18th European Signal Processing Conference, Aalborg, Denmark, 23–27 August 2010; pp. 1665–1669.
  40. Tang, Y.; Srihari, S.N.; Kasiviswanathan, H.; Corso, J.J. Footwear print retrieval system for real crime scene marks. In Computational Forensics; IWCF 2010, Lecture Notes in Computer Science; Sako, H., Franke, K.Y., Saitoh, S., Eds.; Springer: Berlin/Heidelberg, Germany, 2011; Volume 6540.
  41. Tang, Y.; Srihari, S.N.; Kasiviswanathan, H. Similarity and Clustering of Footwear Prints. In Proceedings of the 2010 IEEE International Conference on Granular Computing, San Jose, CA, USA, 14–16 August 2010; pp. 459–464.
  42. Tang, Y.; Kasiviswanathan, H.; Srihari, S.N. An efficient clustering-based retrieval framework for real crime scene footwear marks. Int. J. Granul. Comput. Rough Sets Intell. Syst. 2012, 2, 327.
  43. Li, X.; Wu, M.; Shi, Z. The Retrieval of shoeprint images based on the integral histogram of the Gabor transform domain. In Proceedings of the International Conference on Intelligent Information Processing, Hangzhou, China, 3–6 October 2014; Springer: Berlin/Heidelberg, Germany, 2014; pp. 249–258.
  44. Kong, X.; Yang, C.; Zheng, F. A Novel Method for Shoeprint Recognition in Crime Scenes. In Biometric Recognition; Springer: Cham, Switzerland, 2014; pp. 498–505.
  45. Kortylewski, A.; Albrecht, T.; Vetter, T. Unsupervised footwear impression analysis and retrieval from crime scene data. In Lecture Notes in Computer Science, Proceedings of the 12th Asian Conference on Computer Vision (ACCV 2014), Singapore, 1–5 November 2014; Jawahar, C., Shan, S., Eds.; Springer: Cham, Switzerland, 2014; Volume 9008.
  46. Wang, X.; Sun, H.; Yu, Q.; Zhang, C. Automatic Shoeprint Retrieval Algorithm for Real Crime Scenes. In Proceedings of the 12th Asian Conference on Computer Vision (ACCV 2014), Singapore, 1–5 November 2014; Springer: Cham, Switzerland, 2014; pp. 399–413.
  47. Kortylewski, A.; Vetter, T.; Wilson, R.C.; Hancock, E.R.; Smith, W.A.P.; Pears, N.E.; Bors, A.G. Probabilistic Compositional Active Basis Models for Robust Pattern Recognition. In Proceedings of the British Machine Vision Conference 2016, York, UK, 19–22 September 2016; BMVA Press: York, UK, 2016; pp. 30.1–30.12.
  48. Wang, X.; Zhang, C.; Wu, Y.; Shu, Y. A manifold ranking based method using hybrid features for crime scene shoeprint retrieval. Multimed. Tools Appl. 2016, 76, 21629–21649.
  49. Wu, Y.; Wang, X.; Nankabirwa, N.L.; Zhang, T. LOSGSR: Learned Opinion Score Guided Shoeprint Retrieval. IEEE Access 2019, 7, 55073–55089.
  50. Alizadeh, S.; Kose, C. Automatic retrieval of shoeprint images using blocked sparse representation. Forensic Sci. Int. 2017, 277, 103–114.
  51. Cui, J.; Zhao, X.; Liu, N.; Morgachev, S.; Li, D. Robust Shoeprint Retrieval Method Based on Local-to-Global Feature Matching for Real Crime Scenes. J. Forensic Sci. 2019, 64, 422–430.
  52. Wu, Y.; Wang, X.; Zhang, T. Crime Scene Shoeprint Retrieval Using Hybrid Features and Neighboring Images. Information 2019, 10, 45.
  53. Geradts, Z.; Keijzer, J. The image-database REBEZO for shoeprints with developments on automatic classification of shoe outsole designs. Forensic Sci. Int. 1996, 82, 21–31.
  54. Liao, S.; Zhu, X.; Lei, Z.; Zhang, L.; Li, S.Z. Learning multi-scale block local binary patterns for face recognition. In Advances in Biometrics; Springer: Berlin/Heidelberg, Germany, 2007; pp. 828–837.
  55. Zhang, L.; Chu, R.; Xiang, S.; Liao, S.; Li, S.Z. Face detection based on multi-block LBP representation. In Advances in Biometrics; Springer: Berlin/Heidelberg, Germany, 2007; pp. 11–18.
  56. Nguyen, D.T.; Cho, S.R.; Park, K.R. Human age estimation based on multi-level local binary pattern and regression method. In Future Information Technology; Springer: Berlin/Heidelberg, Germany, 2014; pp. 433–438.
  57. Yüceer, C.; Oflazer, K. A rotation, scaling and translation invariant pattern classification system. Pattern Recognit. 1993, 26, 687–710.
Figure 1. The LBP operator.
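For readers who wish to reproduce the operator in Figure 1, the following is a minimal sketch of the basic 3 × 3 LBP in Python/NumPy (our own illustrative implementation, not the authors' code): each of the 8 neighbors is thresholded against the center pixel, and the resulting bits are packed into an 8-bit code.

```python
import numpy as np

def lbp_3x3(img):
    """Basic 3x3 LBP: threshold the 8 neighbors of every interior pixel
    against the center and pack the bits into an 8-bit code."""
    img = np.asarray(img, dtype=np.float64)
    h, w = img.shape
    center = img[1:-1, 1:-1]
    # Fixed clockwise neighbor order starting at the top-left corner.
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    codes = np.zeros((h - 2, w - 2), dtype=np.int32)
    for bit, (dy, dx) in enumerate(offsets):
        neigh = img[1 + dy:h - 1 + dy, 1 + dx:w - 1 + dx]
        codes += (neigh >= center).astype(np.int32) << bit
    return codes.astype(np.uint8)
```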
Figure 2. Results of the Multi-Block Local Binary Pattern (MB-LBP) for a shoeprint image.
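Responses such as those in Figure 2 can be approximated by average-pooling the image into s × s pixel blocks and applying the basic operator to the block means, following the standard MB-LBP definition [54,55]. The sketch below reuses lbp_3x3 from the sketch above; the block size s is an illustrative parameter, not a value taken from the paper.

```python
def mb_lbp(img, s=3):
    """MB-LBP sketch: average each s x s pixel block, then apply the 3x3
    LBP comparison to the 8 block means surrounding each center block."""
    img = np.asarray(img, dtype=np.float64)
    h, w = img.shape
    rows, cols = h // s, w // s
    # Average-pool the image into a (rows x cols) grid of block means.
    means = img[:rows * s, :cols * s].reshape(rows, s, cols, s).mean(axis=(1, 3))
    return lbp_3x3(means)  # reuse the basic operator from the sketch above
```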
Figure 3. The proposed shoeprint retrieval system.
Figure 4. The steps in the image preprocessing stage: (a) noise elimination, (b) image enlargement, (c) image rotation, (d) margin removal, and (e) scale changing.
Figure 5. Result (b) of applying the LBP operator to image (a).
Figure 6. (a) The first blocking model, (b) the second blocking model.
Figure 7. Corresponding histograms of (a) the first blocking model and (b) the second blocking model.
Figure 8. Histograms of two neighboring blocks of (a) the first image and (b) the second image.
Figure 9. (a) Samples of the shoeprint images recorded from the participants; (b) the same samples after the unwanted margins have been removed.
Figure 10. Partial heel and toe images from the complete shoeprints.
Figure 11. Cumulative match scores of the LBP vs. the MB-LBP.
Figure 12. A comparison of the performance of MB-LBP vs. MMB-LBP.
Figure 13. Results for the MMB-LBP and Patil methods with regard to shoeprint image rotation.
Figure 14. Images contaminated with noise: (a) salt and pepper noise and (b) Gaussian white noise.
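As a hedged illustration of how noise-contaminated test images like those in Figure 14 can be generated, the sketch below uses scikit-image's random_noise; the file name is hypothetical, and the density and variance values are placeholders rather than the paper's exact settings.

```python
from skimage import io
from skimage.util import random_noise

# 'shoeprint.png' is a hypothetical file name; the amount and variance
# below are illustrative, not the settings used in the experiments.
img = io.imread('shoeprint.png', as_gray=True)

sp_img = random_noise(img, mode='s&p', amount=0.05)       # salt and pepper
gauss_img = random_noise(img, mode='gaussian', var=0.01)  # Gaussian white noise
```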
Figure 15. Results of the MMB-LBP, Patil, and Almaadeed methods with regard to salt and pepper noise.
Figure 16. Results of the MMB-LBP, Patil, and Almaadeed methods with regard to Gaussian white noise.
Table 1. A taxonomy and summary of related methods on shoeprint retrieval (X @ Y, where X is the cumulative match score achieved within the first Y matches; a Y given as a percentage denotes the top Y% of the database).

| DB Size | Features | Reviewed Distortions | Reported Result | Reference |
|---|---|---|---|---|
| *Shoeprints (obtained in controlled conditions)* | | | | |
| 32 | Fractal | R, T | - | [15] |
| 145 | Fractal | R, T | 88% @ 1% | [16] |
| 475 | PSD | R, T | 87% @ 5% | [17] |
| 512 | DFT | S, N, R, T | 97.7% @ 4% | [18] |
| 368 | MSER + SIFT | R, T | 85% @ 1 | [19] |
| 500 | Harris + SIFT | S, R, N, P | 100% @ 1 | [20] |
| 500 | Harris + SIFT | S, N, R, P | 87% @ 1 | [21] |
| 100 | POC | P, N | 93% @ 1 | [22] |
| 100 | ACF | P, N, R | 95.68% @ 1 | [23] |
| 500 | HM | N, R | 99.4% @ 1 | [24] |
| 500 | FMT | P, S, N, R, T | 99% @ 10 | [25] |
| 6000 | GW | P, N | 61.7% @ 5 | [26] |
| 374 | MSER + SIFT | - | 87% @ 1 | [27] |
| 300 | FT | R, T | - | [28] |
| 300 | SIFT + RANSAC | P, N, R | 90% @ 1 | [29] |
| 1400 | GT | R, P, N | 91% @ 1 | [30] |
| 430 | SIFT | P, N | 90% @ 2% | [31] |
| 512 | HRT | - | - | [32] |
| 430 | SIFT | P, N, R | 90% @ 5% | [33] |
| 1230 | ZM | - | 0.726 at rank one for Zernike moments | [34] |
| 300 | Harris + Hessian + SIFT | S, R, P, N | 99.33% @ 1 | [35] |
| *Shoe marks (recovered from crime scenes)* | | | | |
| 87 | Texture | N | 49% @ 1 | [36] |
| 87 | Texture | S, R, T | 73% @ 10 | [37] |
| 87 | PSDM | N | 100% @ 6 | [38] |
| 75 | Texture | T, R, N | 100% @ 1 | [39] |
| 2660 | ARG | S, R, T, P | 71% @ 1% | [40] |
| 1000 | ARG | S, R, T, N | 70% @ 1% | [41] |
| 2660 | ARG | S, T, R, N | 74% @ 10% | [42] |
| 2000 | IHGT | - | - | [43] |
| 1225 | GF + ZM | - | 53.40% @ 10 | [44] |
| 1175 | PP | T, N | 27.1% @ 2% | [45] |
| 210,000 | WFT | T, R, S | 90.87% @ 2% | [46] |
| 1175 | PCABM | - | 71% @ 20% | [47] |
| 10,096 | HW + FMT + PSD | - | 93.5% @ 2% | [48] |
| 1175 | LOSGSR | - | 96.6% @ 2% | [49] |
| 1000 | BSR | R, S | 99.47% @ 1 | [50] |
| 536 | DBN + SPM | - | 65.67% @ 10 | [51] |
| 10,096 | NSE | - | 92.5% @ 2% | [52] |

GT = Gabor Transform, HM = Hu's moments, FMT = Fourier–Mellin Transform, GW = Gabor Wavelet, ZM = Zernike Moments, PP = Pattern Periodicity, WFT = Wavelet–Fourier Transform, IHGT = Integral Histogram in the Gabor Transform, GF = Gabor Filter, HW = Haar Wavelet, PSD = Power Spectral Density, DBN = Deep Belief Network, SPM = Spatial Pyramid Matching, LOSGSR = Learned Opinion Score Guided Shoeprint Retrieval, BSR = Blocking Sparse Representation, NSE = Neighborhood-Based Similarity Estimation, R = Rotation, T = Translation, S = Scale, N = Noise.
Table 2. (A) The weight matrix W1; (B) the weight matrix W2.

(A) W1 — first blocking model (8 × 4 blocks):

| Block Row | Col 1 | Col 2 | Col 3 | Col 4 |
|---|---|---|---|---|
| 1 | 2 | 4 | 4 | 2 |
| 2 | 2 | 4 | 4 | 2 |
| 3 | 2 | 4 | 4 | 2 |
| 4 | 2 | 4 | 4 | 2 |
| 5 | 2 | 3 | 3 | 2 |
| 6 | 2 | 3 | 3 | 2 |
| 7 | 2 | 4 | 4 | 2 |
| 8 | 2 | 4 | 4 | 2 |

(B) W2 — second blocking model (7 × 3 blocks):

| Block Row | Col 1 | Col 2 | Col 3 |
|---|---|---|---|
| 1 | 2 | 4 | 2 |
| 2 | 2 | 4 | 2 |
| 3 | 2 | 4 | 2 |
| 4 | 2 | 3 | 2 |
| 5 | 1 | 3 | 1 |
| 6 | 2 | 4 | 2 |
| 7 | 2 | 4 | 2 |
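To make the role of these weights concrete, here is a minimal sketch of how a weighted block-histogram comparison could use W1. This is our own reading (a weighted histogram-intersection similarity over the 8 × 4 blocks of the first model), not necessarily the authors' exact matching formula.

```python
import numpy as np

# Table 2(A): one weight per block of the first (8 x 4) blocking model.
# Outer columns are down-weighted, as is the arch region (rows 5-6).
W1 = np.array([[2, 4, 4, 2],
               [2, 4, 4, 2],
               [2, 4, 4, 2],
               [2, 4, 4, 2],
               [2, 3, 3, 2],
               [2, 3, 3, 2],
               [2, 4, 4, 2],
               [2, 4, 4, 2]], dtype=np.float64)

def block_histograms(lbp_codes, grid=(8, 4), bins=256):
    """One normalized 256-bin histogram per block of an LBP code image;
    returns an array of shape (grid[0], grid[1], bins)."""
    hists = []
    for strip in np.array_split(lbp_codes, grid[0], axis=0):
        row = []
        for block in np.array_split(strip, grid[1], axis=1):
            h, _ = np.histogram(block, bins=bins, range=(0, bins))
            row.append(h / max(h.sum(), 1))
        hists.append(row)
    return np.asarray(hists)

def weighted_similarity(query, reference, weights=W1):
    """Weighted sum of per-block histogram intersections, normalized so a
    perfect match scores 1.0."""
    per_block = np.minimum(query, reference).sum(axis=-1)  # (8, 4) intersections
    return float((weights * per_block).sum() / weights.sum())
```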
Table 3. Shoeprint retrieval scores (cumulative match scores, %) of the proposed (MMB-LBP), Almaadeed, and Patil methods.

| Method | Type of Shoeprint | First Rank | Second Rank | Third Rank | Fourth Rank | Fifth Rank |
|---|---|---|---|---|---|---|
| MMB-LBP | Complete | 97.63 | 99.21 | 99.61 | 99.87 | 100 |
| | Toe | 96.05 | 97.76 | 98.55 | 98.55 | 98.95 |
| | Heel | 91.18 | 94.47 | 95.79 | 96.32 | 97.50 |
| Almaadeed | Complete | 83.29 | 86.58 | 87.63 | 88.16 | 89.21 |
| | Toe | 74.08 | 76.05 | 76.84 | 77.63 | 78.16 |
| | Heel | 72.37 | 74.08 | 75.00 | 76.32 | 77.11 |
| Patil | Complete | 63.95 | 68.95 | 70.66 | 72.50 | 73.95 |
| | Toe | 58.68 | 65.13 | 69.21 | 71.84 | 73.16 |
| | Heel | 46.45 | 52.90 | 55.79 | 58.68 | 61.58 |
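The cumulative match scores in Tables 3 and 4 can be computed from ranked retrieval lists as follows; this is a generic sketch (assuming one correct reference print per query), included only to make the evaluation criterion explicit.

```python
def cumulative_match_scores(ranked_ids, true_ids, max_rank=5):
    """Percentage of queries whose correct reference appears within the
    top-k retrieved items, for k = 1 .. max_rank."""
    n = len(true_ids)
    scores = []
    for k in range(1, max_rank + 1):
        hits = sum(t in r[:k] for r, t in zip(ranked_ids, true_ids))
        scores.append(100.0 * hits / n)
    return scores

# e.g. cumulative_match_scores([[3, 7, 1], [2, 5, 9]], [7, 9], max_rank=3)
# -> [0.0, 50.0, 100.0]
```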
Table 4. Shoeprint retrieval scores (cumulative match scores, %) of the Patil method under the rotation distortions used to evaluate the proposed method.

| Type of Shoeprint | First Rank | Second Rank | Third Rank | Fourth Rank | Fifth Rank | Sixth Rank |
|---|---|---|---|---|---|---|
| Complete | 81.58 | 85.40 | 86.97 | 88.03 | 89.21 | 94.08 |
| Toe | 69.47 | 75.92 | 79.08 | 81.45 | 82.76 | 91.32 |
| Heel | 67.63 | 74.87 | 77.76 | 80.40 | 82.50 | 91.18 |
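A hedged sketch of the kind of rotation test summarized in Figure 13 and Table 4: each query image is rotated by a set of angles before retrieval, and the first-rank rate is recorded per angle. The angle set is illustrative, and `retrieve` is a hypothetical stand-in for whichever retrieval function is under test.

```python
import numpy as np
from scipy.ndimage import rotate

def rotation_robustness(queries, true_ids, retrieve, angles=(5, 10, 15, 20)):
    """First-rank retrieval rate (%) after rotating each query; 'retrieve'
    is a stand-in returning reference IDs sorted by match score."""
    rates = {}
    for angle in angles:
        hits = 0
        for img, true_id in zip(queries, true_ids):
            rotated = rotate(img, angle, reshape=True, order=1)
            if retrieve(rotated)[0] == true_id:
                hits += 1
        rates[angle] = 100.0 * hits / len(queries)
    return rates
```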
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
