Article

A New LBP Variant: Corner Rhombus Shape LBP (CRSLBP)

1 LRIT Laboratory, Rabat IT Center, Faculty of Sciences, Mohammed V University in Rabat, Rabat B.P. 1014, Morocco
2 Mines Saint-Etienne, French National Center for Scientific Research, Joint Research Unit 5307 Laboratory Georges Friedel, Centre SPIN 158 Cours Fauriel, CEDEX 2, 42023 Saint-Etienne, France
* Author to whom correspondence should be addressed.
J. Imaging 2022, 8(7), 200; https://doi.org/10.3390/jimaging8070200
Submission received: 17 April 2022 / Revised: 27 June 2022 / Accepted: 11 July 2022 / Published: 17 July 2022
(This article belongs to the Special Issue The Present and the Future of Imaging)

Abstract: The local binary pattern (LBP) is a straightforward, dependable, and effective method for extracting relevant local information from images. However, because it uses only sign information in the local region, the LBP is limited in its ability to capture discriminating characteristics. Furthermore, most LBP variants select a region with one specific center pixel against which all neighbors are thresholded. In this paper, a new LBP variant is proposed for texture classification, known as the corner rhombus-shaped LBP (CRSLBP). In the CRSLBP approach, we first use three methods to threshold the pixel's neighbors and centers, obtaining four center pixels by using sign and magnitude information with respect to a chosen region of an even block. This determines not only the relationship between neighbors and the pixel center but also between the centers and the neighbor pixels of the neighboring center pixels. We evaluated the performance of our descriptor using four challenging texture databases: Outex (TC10, TC12), Brodatz, KTH-TIPS2b, and UMD. Extensive experiments demonstrated the effectiveness and robustness of our descriptor in comparison with the available state of the art (SOTA).

1. Introduction

Texture, as a significant characteristic, can be described over a large extent of an object's surface or a set of object images, comprising size, illumination, organization, color, and other physical or natural features. The local binary pattern (LBP), a statistical method for image processing and pattern recognition, was introduced in [1] and extended in [2]. Its main purpose is to process a textured image using a particular kernel function that encodes the statistical relationship between neighbors and the center, computing the transformed value by capturing local structural patterns. The simplicity, robustness, and speed of the LBP calculation have attracted researchers looking to create their own local operators by developing other variants.
The authors of [3] observed that the most frequently occurring LBP patterns capture the predominant texture information, introducing dominant local binary patterns (DLBPs). The original LBP was extended by [4] to address noise sensitivity by using three- rather than two-valued codes, called the local ternary pattern (LTP). Guo et al. [5] introduced completed LBP (CLBP) modeling, which takes into consideration both magnitude (CLBP-magnitude) and sign (CLBP-sign); the CLBP-center component contains the same information as an LBP. To overcome the high noise sensitivity and dimensionality of the CLBP, Liu et al. [6] proposed binary rotation invariant and noise tolerant (BRINT) texture classification, which combines three descriptors (BRINT S, BRINT M, and BRINT C) and enhances noise tolerance by quantizing the average gray pixel value. A scale-selective LBP (SSLBP) was suggested by [7] to take the pre-learned dominant LBP patterns at different scale spaces. To maintain good discriminant features, Liu et al. [8] proposed a median robust extended LBP (MRELBP) that uses regional image medians rather than raw image intensities; this method combines three descriptors: MRELBP NI, MRELBP RD, and MRELBP CI. A radial mean LBP (RMLBP) was suggested by [9] to reduce noise sensitivity by using the mean of points over each radius instead of employing angular neighbor points. In [10], a cross-complementary LBP (CCLBP) was proposed to enhance robustness to scale, viewpoint, and the number of training samples by diversifying two parameters accordingly. Recently, many other interesting modifications and improvements to LBPs have been developed: LOOP [11], ACS-LBP and RCS-LBP [12], MLD-CBP [13], CLSP [14], LCvMSP [15], Hess-ACS-LBP [16], ACPS [17], and LDT [18].
In texture classification, many descriptors and LBP extensions use just one center as a reference to threshold the neighboring pixels; as a result, the relationship between center pixels is lost. Furthermore, an LBP uses bilinear interpolation, which has limitations such as loss of sharpness, inaccurate gray values, and high computational complexity. A new LBP version, the corner rhombus-shaped LBP (CRSLBP), is proposed in this article to overcome these weaknesses. The CRSLBP improves on the LBP by taking both sign and magnitude information into consideration and using a single parameter (the radius) together with a chosen even block, which permits the thresholding of four center pixels. This yields relationships not only between neighbors, but also between the centers and the neighbors of the centers. Three different processes produce three descriptors that better characterize images; the histogram of each is extracted and concatenated with the others to obtain discriminant and robust features. Specifically, to obtain more than one center, the CRSLBP uses 4 × 4 blocks to select four center pixels at the same time. From this, the relationships between the center pixels, and between each center and the neighbors of the neighboring center pixels, can be determined. Furthermore, bilinear interpolation is eliminated, so all focus is on the information in the block, exploited using various thresholding methods computed adaptively by examining local structures and their properties.
This study is structured as follows: Section 2 briefly reviews related work, and Section 3 introduces the proposed texture analysis descriptor, the corner rhombus-shaped LBP (CRSLBP). Section 4 discusses the performance of the proposed method using classifiers compared to SOTA approaches. The paper is concluded in Section 5.

2. Related Work

Before going into our proposed approach, we first present a brief review of the main works in the literature that inspired us. We start with the original LBP and then present the motivation behind our new method.

2.1. Local Binary Pattern (LBP)

The original LBP was created by [1,2] with 3 × 3 blocks containing eight neighbors around a center to capture important local information. The LBP code is generated by the following equation:
LBP_{R,P}(c) = \sum_{i=0}^{P-1} s(g_i - g_c) \, 2^i, \quad s(x) = \begin{cases} 1, & x \geq 0 \\ 0, & \text{otherwise} \end{cases}   (1)
where g_c and g_i represent, respectively, the center pixel and its neighbor at the i-th position with radius R; P is the number of samples; and s(·) is the sign function.
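As a concrete illustration, the thresholding and binary weighting of Equation (1) can be sketched for a single 3 × 3 block. The clockwise neighbor ordering starting at the top-left corner is an assumption; the paper does not fix a starting position.

```python
import numpy as np

def lbp_code(block):
    """Compute the classic LBP code of a 3x3 block's center pixel.

    A minimal sketch of Equation (1): each of the P = 8 neighbors g_i is
    thresholded against the center g_c with the sign function s(x), and
    the resulting bits are weighted by powers of two.
    """
    gc = block[1, 1]
    # Neighbors enumerated clockwise from the top-left corner (an assumed
    # ordering; the starting position is not specified in the text).
    coords = [(0, 0), (0, 1), (0, 2), (1, 2), (2, 2), (2, 1), (2, 0), (1, 0)]
    bits = [1 if block[r, c] >= gc else 0 for r, c in coords]
    return sum(b << i for i, b in enumerate(bits))

block = np.array([[5, 9, 1],
                  [4, 6, 7],
                  [2, 6, 8]])
code = lbp_code(block)   # bits [0,1,0,1,1,1,0,0] -> 2 + 8 + 16 + 32 = 58
```

Different starting positions rotate the bit string and change the decimal code, which is why rotation-invariant mappings such as riu2 are used later in the paper.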

2.2. Research Motivation

Reviewing recent publications on LBP variants, we found that most approaches use one center as a reference to threshold all neighbors and replace it with the LBP code. Consequently, the relationship between the centers is lost. Moreover, the bilinear interpolation used by the LBP makes it possible to compute values that are supposed to lie at the same distance from the central pixel (gray circle). However, it has many weaknesses, such as loss of sharpness, inaccurate gray values, imprecise texture information, and high computational complexity.
To avoid these issues and limitations, we created a new LBP variant that differs substantially in the form, shape, and code of the extracted local pattern. The problems were solved by mapping the LBP code over even blocks, as opposed to most LBP variants, which use odd blocks to select one center with its neighbors. In this way, we work with four centers at the same time, allowing us not only to obtain the interconnection between the center pixels, but also between each center pixel and its neighbors. Furthermore, each center pixel gains a relationship with the neighbor pixels of the neighboring center pixels. Additionally, the proposed descriptor eliminates bilinear interpolation and exploits all the information provided by the neighboring pixels in the block using multiple thresholds computed adaptively by examining different local structures and their properties.
We also extracted information from the relationship between neighbors based on the center pixels. As in the CLBP, and to preserve more intrinsic features, two important vectors were extracted from the image: sign and magnitude, of which the sign is the more influential. Based on this idea, we extracted the sign from the rib pixels of the rhombus-shaped neighbor pixels. In addition, to obtain a deeper relationship between the center and its neighbors, each center pixel was thresholded against the neighbors of the neighboring center.
Based on the preceding, this new encoding was useful for acquiring more intrinsic information, which allowed for a significant improvement in classification accuracy.

3. Proposed Methodology

In this section, we present our new LBP variant for texture classification, designed to address the weaknesses of the original LBP and to obtain more robust features with low complexity. In general, the CRSLBP is constructed in the following major steps. Contrary to the LBP and most of its variants, our input data are divided into even blocks of 4 × 4 pixels, making it possible to select four center pixels h_center(i) in each block (see Figure 1f, shown in green) and to exploit the relationship between the four centers and their neighbor pixels. The neighbors are partitioned into corner pixels h_corner(i) and rhombus-shaped neighbor pixels h_rhombus(j), marked by pink and orange circles, respectively, in Figure 1.
After extracting all the required pixels, we began the construction of the binary encoded pattern as follows:
Step 1: The four selected corner pixels were compared against the mean of all center pixels, which gave four binary patterns (Figure 1b). The corner neighbor pixels of the block h_corner(i) were thresholded by the following equation:
CRSLBP_{corner}^{riu2}(r, N) = \sum_{i=0}^{N-1} s(h_{corner}(i) - h_{Mcenter}), \quad s(x) = \begin{cases} 1, & x \geq 0 \\ 0, & x < 0 \end{cases}   (2)
where r represents the radius (in our method, radius {1, 2, 3} corresponds to blocks of {4 × 4, 6 × 6, 8 × 8}, respectively); h_{Mcenter} represents the mean of all center pixels; and s(·) is the sign function.
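A minimal sketch of this first step, assuming the Figure 1 layout in which the four centers form the inner 2 × 2 of the 4 × 4 block and the corner pixels sit at its four corners:

```python
import numpy as np

def crslbp_corner_bits(block):
    """Step 1 sketch (Equation (2)): threshold the four corner pixels of a
    4x4 block against the mean of the four center pixels.

    The layout (centers = inner 2x2, corners = the block's four corner
    pixels) follows our reading of Figure 1 and is an assumption.
    """
    centers = block[1:3, 1:3]        # h_center(1..4): the inner 2x2
    h_m_center = centers.mean()      # h_Mcenter: mean of all center pixels
    corners = [block[0, 0], block[0, 3], block[3, 3], block[3, 0]]
    return [1 if h - h_m_center >= 0 else 0 for h in corners]

block = np.array([[9, 2, 3, 1],
                  [4, 5, 6, 7],
                  [8, 5, 6, 2],
                  [3, 1, 2, 9]])
bits = crslbp_corner_bits(block)   # mean of centers is 5.5
```

Because all four corners share the single threshold h_{Mcenter}, this step captures the relationship between the block's periphery and its combined centers rather than any single center.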
Step 2: As shown in Figure 1c, each rib of the rhombus contains two pixels. First, we took the maximum of the two pixels and compared it with the horizontally switched center pixels, giving four new binary patterns. Formally, the first process of the rhombus-shaped neighbor pixels is defined as:
CRSLBP_{rhombus1}^{riu2}(r, N) = \sum_{j=0}^{(N/2)-1} s(\max(h_{rhombus}(2j+1), h_{rhombus}(2j+2)) - h_{center}), \quad s(x) = \begin{cases} 1, & x \geq 0 \\ 0, & x < 0 \end{cases}   (3)
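The max-of-rib thresholding can be sketched as follows. The rib pairing and the assignment of one switched center per rib are illustrative assumptions; the center ordering follows the reorganization {h_center2, h_center1, h_center4, h_center3} described in the text for Equations (3) and (4).

```python
def crslbp_rhombus1_bits(ribs, centers_switched):
    """Step 2 sketch (Equation (3)): for each rhombus rib (a pair of
    pixels), take the maximum and threshold it against the horizontally
    switched center pixel assigned to that rib.

    `ribs` is a list of four (pixel, pixel) pairs and `centers_switched`
    the reordered centers; both orderings are illustrative assumptions.
    """
    return [1 if max(a, b) - c >= 0 else 0
            for (a, b), c in zip(ribs, centers_switched)]

ribs = [(3, 7), (2, 4), (9, 1), (5, 5)]
centers = [6, 5, 8, 6]   # {h_center2, h_center1, h_center4, h_center3}
bits = crslbp_rhombus1_bits(ribs, centers)
```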
Step 3: We extracted the minimum and maximum from each rib of the rhombus pixels and compared them with the horizontally switched center pixels, creating a relationship between each center and its far neighbors. Next, we calculated the value C, used to threshold the neighbors, by subtracting the average of all minima from the average of all maxima. We then subtracted the horizontally switched center pixels from the maximum of each rhombus rib and compared the result with C to generate another four binary-encoded patterns (Figure 1d). The second process of the rhombus-shaped neighbor pixels is given by:
CRSLBP_{rhombus2}^{riu2}(r, N) = \sum_{j=0}^{(N/2)-1} B(\max(h_{rhombus}(2j+1), h_{rhombus}(2j+2)) - h_{center}), \quad B(x) = \begin{cases} 1, & x \geq C \\ 0, & x < C \end{cases}   (4)
where B(x) is the sign function based on the contrast value. The value C, which improves image quality through operations such as contrast enhancement and noise reduction, is calculated as follows:
C = \frac{1}{N} \left( \sum_{j=0}^{(N/2)-1} \max(h_{rhombus}(2j+1), h_{rhombus}(2j+2)) - \sum_{j=0}^{(N/2)-1} \min(h_{rhombus}(2j+1), h_{rhombus}(2j+2)) \right)   (5)
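Steps 2 and 3 differ only in the threshold. The following sketch computes the contrast value C of Equation (5) and applies the B(x) thresholding of Equation (4); taking N as the total number of rib pixels (twice the number of ribs) is our reading of the 1/N factor and should be treated as an assumption.

```python
def crslbp_rhombus2_bits(ribs, centers_switched):
    """Step 3 sketch (Equations (4)-(5)): compute the contrast value C
    from the rib maxima and minima, then threshold max(rib) - center
    against C instead of against zero.
    """
    maxima = [max(a, b) for a, b in ribs]
    minima = [min(a, b) for a, b in ribs]
    n = 2 * len(ribs)                      # assumed reading of N
    c_val = (sum(maxima) - sum(minima)) / n
    bits = [1 if m - c >= c_val else 0
            for m, c in zip(maxima, centers_switched)]
    return bits, c_val

ribs = [(3, 7), (2, 4), (9, 1), (5, 5)]
centers = [6, 5, 8, 6]
bits, c_val = crslbp_rhombus2_bits(ribs, centers)   # C = (25 - 11) / 8
```

Because C grows with the local contrast, this step only fires on differences that exceed the typical spread within the block, which is what gives the descriptor its noise-reduction behavior.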
Step 4: For each rib of the rhombus pixels, following the direction shown in Figure 1e, the ratio of the two pixels was calculated and thresholded at one to extract four additional binary patterns. The last thresholding equation is defined as follows:
CRSLBP_{rhombus3}^{riu2}(r, N) = \sum_{j=0}^{(N/2)-1} s(h_{rhombus}(2j+1) / h_{rhombus}(2j+2)), \quad s(x) = \begin{cases} 1, & x \geq 1 \\ 0, & x < 1 \end{cases}   (6)
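The ratio test of Equation (6) reduces to checking whether the first pixel of each rib is at least as large as the second; which pixel of a rib comes first depends on the direction in Figure 1e and is an assumption here.

```python
def crslbp_rhombus3_bits(ribs):
    """Step 4 sketch (Equation (6)): threshold the ratio of the two
    pixels in each rib at 1, i.e. test whether the first rib pixel is at
    least as large as the second (in-rib ordering is an assumption).
    """
    return [1 if a / b >= 1 else 0 for a, b in ribs]

ribs = [(3, 7), (4, 2), (9, 1), (5, 5)]
bits = crslbp_rhombus3_bits(ribs)
```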
In Equations (3) and (4), the center h_{center} (presented in Figure 1f) used for thresholding each rib of the rhombus-shaped neighbor pixels is organized as follows:
h_{center} = \{ h_{center_2}, h_{center_1}, h_{center_4}, h_{center_3} \}
Step 5: Equations (2)–(4) and (6) each generated four binary patterns. After extracting all of them, we formed three decimal codes by concatenating pairs of four-bit binary patterns pixel by pixel, as follows:
(1) Step 1 with Step 2:
CRSLBP1^{riu2}(r, N) = \sum_{i=0}^{N-1} \sum_{j=2i+1}^{N-1} \left( CRSLBP_{corner}^{riu2}(i) \, 2^{j-1} + CRSLBP_{rhombus1}^{riu2}(i) \right) 2^{j}   (7)
(2) Step 1 with Step 3:
CRSLBP2^{riu2}(r, N) = \sum_{i=0}^{N-1} \sum_{j=2i+1}^{N-1} \left( CRSLBP_{corner}^{riu2}(i) \, 2^{j-1} + CRSLBP_{rhombus2}^{riu2}(i) \right) 2^{j}   (8)
(3) Step 1 with Step 4:
CRSLBP3^{riu2}(r, N) = \sum_{i=0}^{N-1} \sum_{j=2i+1}^{N-1} \left( CRSLBP_{corner}^{riu2}(i) \, 2^{j-1} + CRSLBP_{rhombus3}^{riu2}(i) \right) 2^{j}   (9)
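The combinations above amount to fusing a four-bit corner pattern with a four-bit rhombus pattern into one decimal code per block. One plausible bitwise reading, interleaving the two patterns (an assumption; the exact weighting in the published equations is ambiguous), is:

```python
def combine_patterns(corner_bits, rhombus_bits):
    """Sketch of the pattern fusion in Step 5: the corner and rhombus
    bits are interleaved pixel by pixel (corner bit at the even bit
    position, rhombus bit at the odd one) to form one decimal code per
    block. This bitwise reading of the double sum is an assumption.
    """
    code = 0
    for i, (cb, rb) in enumerate(zip(corner_bits, rhombus_bits)):
        code += cb * (1 << (2 * i)) + rb * (1 << (2 * i + 1))
    return code

# Four corner bits (Step 1) fused with four rhombus bits (Step 2):
code = combine_patterns([1, 0, 1, 0], [0, 1, 1, 0])
```

Whatever the exact weighting, the key point is that each decimal code jointly encodes the corner and rhombus responses, so the three resulting code maps (CRSLBP1–3) carry complementary information.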
The total process of the CRSLBP explained above is illustrated in Figure 1.
To increase the discrimination and effectiveness of the feature representation, the three encoded pattern processes CRSLBP1–3, given in Equations (7)–(9), are grouped into a hybrid distribution named CRSLBP (Equation (10)), which allowed us to create a robust model with reduced noise sensitivity and improved effectiveness. In addition, by using a linear combination of several characteristics generated from the different pattern-encoding processes, a multi-scale approach was used to capture coarse and fine information. The CRSLBP is presented as follows:
CRSLBP^{riu2}(r, N) = \left[ CRSLBP1^{riu2}(r, N), \; CRSLBP2^{riu2}(r, N), \; CRSLBP3^{riu2}(r, N) \right]   (10)
Figure 2 shows the texture features after CRSLBP extraction.
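The hybrid descriptor of Equation (10) can be sketched as histogram concatenation over the three code maps; the 256-bin range for 8-bit codes and the normalization are assumptions consistent with the experimental setup described later.

```python
import numpy as np

def crslbp_feature(code_maps, n_bins=256):
    """Sketch of the hybrid descriptor of Equation (10): the histogram of
    each CRSLBP code map (CRSLBP1-3) is computed and the three histograms
    are concatenated into one feature vector. The 256-bin range assumes
    8-bit codes, which is an illustrative choice.
    """
    hists = [np.bincount(m.ravel(), minlength=n_bins)[:n_bins]
             for m in code_maps]
    feat = np.concatenate(hists).astype(float)
    return feat / feat.sum()   # normalized, as done for the experiments

# Three toy 32x32 code maps standing in for CRSLBP1-3:
maps = [np.random.default_rng(0).integers(0, 256, (32, 32))
        for _ in range(3)]
feat = crslbp_feature(maps)
```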

4. Experimental Results

This section presents a series of experiments conducted on various databases to verify the effectiveness of the CRSLBP.

4.1. Texture Datasets

Datasets from the Outex [19], KTH-TIPS2b [20], UMD [21], and Brodatz [22] databases were used in our experiments to evaluate the robustness and effectiveness of the proposed CRSLBP. Table 1 summarizes the characteristics of each database.
In all experiments, the descriptors were computed with the rotation-invariant uniform (riu2) mapping and normalized features, which decreases the number of features, thereby reducing processing time while preserving discriminating power. The proposed method was tested using a support vector machine (SVM) and a neural network (NN) and compared to other LBP variants, some of which are classified in the same category, "combining with complementary features", as our method. For a comparative result, the SVM classifier was trained with a radial basis function (RBF) kernel, one of the most widely used because of its similarity to the Gaussian distribution. The RBF-kernel SVM depends highly on two hyperparameters, C for the SVM and γ for the RBF kernel, whose optimum values were selected by grid search using 10-fold cross-validation.
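The hyperparameter selection described above can be reproduced with a standard grid search; the grid values and the toy digits dataset below are illustrative stand-ins for the paper's CRSLBP feature vectors.

```python
# RBF-kernel SVM tuned over C and gamma by grid search with 10-fold
# cross-validation, as described in the text. The parameter grid and the
# digits dataset are illustrative assumptions, not the paper's setup.
from sklearn.datasets import load_digits
from sklearn.model_selection import GridSearchCV
from sklearn.svm import SVC

X, y = load_digits(return_X_y=True)
param_grid = {"C": [1, 10, 100], "gamma": [1e-3, 1e-2, 1e-1]}
search = GridSearchCV(SVC(kernel="rbf"), param_grid, cv=10)
search.fit(X, y)
best = search.best_params_   # the (C, gamma) pair with the best CV score
```

In the paper's pipeline, X would instead hold the concatenated, normalized CRSLBP histograms of the training images.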

4.2. Experimental Results of Outex Database

The classification results of this experiment are illustrated in Table 2.
First, we compared our descriptor with the original LBP. The performance of the CRSLBP was markedly higher across all configurations: the Outex variants (TC10 and TC12), the classifiers (SVM, NN), and the radii. The average classification accuracy was 99.76 and 99.79% for Outex TC10 with SVM and NN, respectively. Additionally, we compared the CRSLBP with the homogeneous LBP (HLBP), homogeneous rotated LBP (HRLBP), and circular parts LBP (CPLBP), introduced in [23,24,25]. Our proposed method improved upon the HLBP, HRLBP, and CPLBP descriptors, with higher classification accuracies across the Outex illuminations (Inca, T184, and horizon) and each proposed configuration (classifier, Outex database, radius, and homogeneity tolerance). Furthermore, we achieved the best results even against the concatenations HLBP+LBP and HRLBP+RLBP, demonstrating the robustness and high performance of our method: first, from the four center pixels extracted from the block; second, from the relationships derived between each center and the neighbors of the center pixels. Last, we compared our approach with SOTA approaches. As shown in the table, the average performance of the CRSLBP with both SVM and NN classifiers for all Outex types (TC10, TC12) was higher than the SOTA, apart from the MRELBP [8]. Small differences in classification accuracy between the two approaches (MRELBP and CRSLBP) are expected, owing to the set of four radius values used in the MRELBP to generate a code covering multiple scales at the same time.

4.3. Experimental Results with the KTH-TIPS2b Database

The KTH-TIPS2b [20] database is primarily designed to assess the impact of real-world imaging conditions on material classification. Table 3 displays the classification accuracy obtained when evaluating the performance of our descriptor using the SVM and NN classifiers.
It can be seen that the CRSLBP outperformed the original LBP across radius values and classifiers by over 8.87% for SVM and 10% for NN. Similarly, the CRSLBP had an average classification accuracy 5–10% higher than those reported in [23,24] for the HLBP and HRLBP and their concatenations with the LBP and RLBP, respectively. To further evaluate the performance of the CRSLBP, we made another comparison with some SOTA methods, as shown in the table. The CRSLBP achieved much better classification accuracies than the LBP: 96.89, 96.76, and 97.19% for the SVM classifier and 94.81, 95.37, and 95.65% for the NN classifier (radius R = 1, R = 2, and R = 3, respectively). As with the Outex database, the CRSLBP did not surpass the MRELBP on KTH-TIPS2b.

4.4. Experimental Results with the UMD Database

The experimental results with the UMD dataset [21] are listed in Table 3 for both the SVM and NN classifiers. We first examined the CRSLBP in comparison with the original LBP. Despite the high resolution, arbitrary rotations, large changes in viewpoint, and different scales within the UMD dataset, the proposed approach obtained the highest accuracy: 100% for R = 2 with NN. For both classifiers, the CRSLBP was much more robust than the LBP. Our second experiment tested it against the HLBP and HRLBP; the CRSLBP displayed higher classification accuracy: 98.5, 98.4, and 98.80% with the SVM classifier and 99.33, 100, and 98.67% with NN (radius R = 1, R = 2, and R = 3, respectively). Moreover, the CRSLBP demonstrated its robustness in comparison to the HLBP reinforced by the LBP and the HRLBP reinforced by the RLBP. The last experiment showed the potential of the CRSLBP in comparison to SOTA approaches. The average classification accuracy of our descriptor was much higher than the others across different configurations (radius and classifier), except for the MRELBP, as explained above.

4.5. Experimental Results with Brodatz Database

The Brodatz [22] database, despite being relatively old, is still widely used. The experimental results with the Brodatz dataset are presented in Table 3. Using the CRSLBP, we obtained average classification accuracies of 95.04 and 98.01%, outperforming the original LBP, which achieved 91.40 and 89.40% with SVM and NN, respectively, demonstrating the performance of the CRSLBP. We also evaluated the robustness of the CRSLBP in comparison with the HLBP, HRLBP, HLBP+LBP, and HRLBP+RLBP. As shown in the tables, the highest accuracies were achieved by the CRSLBP, and the SOTA methods yielded classification accuracies lower than ours, with the exception of the MRELBP.

4.6. Comparison of the CRSLBP with the MRELBP

Concerning the lower classification accuracy of our descriptor compared to the MRELBP, small differences between the two approaches are expected, owing to the set of four radius values used in the MRELBP to generate a code covering multiple scales at the same time. To make the comparison between the CRSLBP and MRELBP fair, we performed another experiment with matched parameters: for both descriptors, the selected parameters were the set of radius values (2, 4, 6, 8) with an SVM classifier. Table 4 shows the results of this experiment.
Based on the analysis of the table, our method performed better than the MRELBP, with higher classification accuracies on most of the datasets (Outex TC12 and KTH-TIPS2b). Furthermore, the differences in results on the other databases are very small considering the large difference in feature dimension between the two: 120 (30 × 4) for the CRSLBP versus 800 for the MRELBP.

5. Conclusions

This work proposed a new approach for texture image classification: the corner rhombus-shaped LBP (CRSLBP). It is an improved version of the LBP that takes sign and magnitude into consideration together with a chosen even block, which allowed us to threshold four center pixels. In this way, we obtained relationships not only between neighbors, but also between the centers. A variety of challenging texture databases (Outex [TC10, TC12], Brodatz, UMD, and KTH-TIPS2b) and two classifiers (SVM and NN) were used to evaluate the proposed method.
The experimental results showed that the CRSLBP outperformed the LBP and its recent variants: the HLBP, HRLBP, HLBP+LBP, HRLBP+RLBP, and CPLBP. We also evaluated the CRSLBP against other SOTA methods, and the experimental results show that the CRSLBP generally outperformed these methods in classification accuracy under various classification challenges, including strong changes in rotation, scale, illumination, and viewpoint. However, the robustness of the CRSLBP to various types of noise remains to be verified and should be investigated in future work.
Our future work will also include testing our method on color image databases. However, this operator tends to produce high-dimensional feature vectors; to address this problem, we will focus on applying feature selection methods to CRSLBP-based features. Another upcoming project will analyze and compare the many ways presented in the literature for exploiting the features of several color spaces at the same time.

Author Contributions

Conceptualization, I.A.S.; methodology, I.A.S.; software, I.A.S.; validation, I.A.S., M.R. and J.D.; formal analysis, I.A.S.; investigation, I.A.S.; writing—original draft preparation, I.A.S.; writing—review and editing, I.A.S., M.R. and J.D.; visualization, I.A.S., M.R. and J.D.; supervision, M.R. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

No new data were created or analyzed in this study. Data sharing is not applicable to this article.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Ojala, T.; Pietikäinen, M.; Harwood, D. A comparative study of texture measures with classification based on featured distributions. Pattern Recognit. 1996, 29, 51–59.
  2. Ojala, T.; Pietikainen, M.; Maenpaa, T. Multiresolution gray-scale and rotation invariant texture classification with local binary patterns. IEEE Trans. Pattern Anal. Mach. Intell. 2002, 24, 971–987.
  3. Liao, S.; Law, M.W.; Chung, A.C. Dominant local binary patterns for texture classification. IEEE Trans. Image Process. 2009, 18, 1107–1118.
  4. Tan, X.; Triggs, B. Enhanced local texture feature sets for face recognition under difficult lighting conditions. IEEE Trans. Image Process. 2010, 19, 1635–1650.
  5. Guo, Z.; Zhang, L.; Zhang, D. A completed modeling of local binary pattern operator for texture classification. IEEE Trans. Image Process. 2010, 19, 1657–1663.
  6. Liu, L.; Long, Y.; Fieguth, P.W.; Lao, S.; Zhao, G. BRINT: Binary rotation invariant and noise tolerant texture classification. IEEE Trans. Image Process. 2014, 23, 3071–3084.
  7. Guo, Z.; Wang, X.; Zhou, J.; You, J. Robust texture image representation by scale selective local binary patterns. IEEE Trans. Image Process. 2015, 25, 687–699.
  8. Liu, L.; Lao, S.; Fieguth, P.W.; Guo, Y.; Wang, X.; Pietikäinen, M. Median robust extended local binary pattern for texture classification. IEEE Trans. Image Process. 2016, 25, 1368–1381.
  9. Shakoor, M.H.; Boostani, R. Radial mean local binary pattern for noisy texture classification. Multimed. Tools Appl. 2018, 77, 21481–21508.
  10. Kou, Q.; Cheng, D.; Zhuang, H.; Gao, R. Cross-complementary local binary pattern for robust texture classification. IEEE Signal Process. Lett. 2018, 26, 129–133.
  11. Chakraborti, T.; McCane, B.; Mills, S.; Pal, U. Loop descriptor: Local optimal-oriented pattern. IEEE Signal Process. Lett. 2018, 25, 635–639.
  12. Ruichek, Y. Attractive-and-repulsive center-symmetric local binary patterns for texture classification. Eng. Appl. Artif. Intell. 2019, 78, 158–172.
  13. Kas, M.; Ruichek, Y.; Messoussi, R. Multi level directional cross binary patterns: New handcrafted descriptor for SVM-based texture classification. Eng. Appl. Artif. Intell. 2020, 94, 103743.
  14. Xu, X.; Li, Y.; Wu, Q.J. A completed local shrinkage pattern for texture classification. Appl. Soft Comput. 2020, 97, 106830.
  15. Alpaslan, N.; Hanbay, K. Multi-scale shape index-based local binary patterns for texture classification. IEEE Signal Process. Lett. 2020, 27, 660–664.
  16. Alpaslan, N.; Hanbay, K. Multi-resolution intrinsic texture geometry-based local binary pattern for texture classification. IEEE Access 2020, 8, 54415–54430.
  17. Pan, Z.; Hu, S.; Wu, X.; Wang, P. Adaptive center pixel selection strategy to Local Binary Pattern for texture classification. Expert Syst. Appl. 2021, 180, 115123.
  18. Shakoor, M.H.; Boostani, R. Noise robust and rotation invariant texture classification based on local distribution transform. Multimed. Tools Appl. 2021, 80, 8639–8666.
  19. Ojala, T.; Maenpaa, T.; Pietikainen, M.; Viertola, J.; Kyllonen, J.; Huovinen, S. Outex-new framework for empirical evaluation of texture analysis algorithms. In Proceedings of the 2002 International Conference on Pattern Recognition, Quebec City, QC, Canada, 11–15 August 2002; Volume 1, pp. 701–706.
  20. Caputo, B.; Hayman, E.; Mallikarjuna, P. Class-specific material categorisation. In Proceedings of the Tenth IEEE International Conference on Computer Vision (ICCV'05), Beijing, China, 17–21 October 2005; Volume 1, pp. 1597–1604.
  21. Xu, Y.; Ji, H.; Fermuller, C. A projective invariant for textures. In Proceedings of the 2006 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR'06), New York, NY, USA, 17–22 June 2006; Volume 2, pp. 1932–1939.
  22. Brodatz, P. Textures: A Photographic Album for Artists and Designers; Dover Publication: New York, NY, USA, 1966.
  23. Al Saidi, I.; Rziza, M.; Debayle, J. A New Texture Descriptor: The Homogeneous Local Binary Pattern (HLBP). In Proceedings of the International Conference on Image and Signal Processing, Marrakesh, Morocco, 4–6 June 2020; pp. 308–316.
  24. Al Saidi, I.; Rziza, M.; Debayle, J. A novel texture descriptor: Homogeneous Rotated Local Binary Pattern (HRLBP). In Proceedings of the 2020 10th International Symposium on Signal, Image, Video and Communications (ISIVC), Saint-Etienne, France, 7–9 April 2021; pp. 1–5.
  25. Al Saidi, I.; Rziza, M.; Debayle, J. A novel texture descriptor: Circular parts local binary pattern. Image Anal. Stereol. 2021, 40, 105–114.
Figure 1. (a) The 4 × 4 sub-block of the image. (b) The corner processing. The process of the first (c), second (d), and third (e) generated CRSLBP codes. (f) Mathematical representation of the block.
Figure 2. The concatenated histogram of all CRSLBP processes.
Table 1. Summary of the characteristics of the texture databases used in our experiments.
Number  Name            Classes  Samples per Class  Total Samples  Sample Resolution (Pixels)  Image Format (Monochrome)  Challenges
1       Brodatz         112      9                  1008           512 × 512                   JPG                        Various texture types
2       KTH-TIPS2b      11       4 × 108            4752           200 × 200                   BMP                        Illumination, scale, and pose changes
3       OuTeX_TC_00010  24       180                4320           128 × 128                   RAS                        Rotation changes (0° for training, other degrees for testing)
4       OuTeX_TC_00012  24       200                4800           128 × 128                   RAS                        Rotation and illumination ("Tl84", "horizon") changes
5       UMD             25       40                 1000           1280 × 960                  PNG                        Small illumination changes; strong scale, rotation, and viewpoint changes
Table 2. Classification accuracy (%) of the CRSLBP for different R on the Outex dataset with the SVM and NN classifiers.
Classification Accuracy (%) Outex (SVM)Classification Accuracy (%) (NN)
Outex_TC10Outex_TC12Outex_TC10
IncaT184HorizonInca
R = 1R = 2R = 3AverageR = 1R = 2R = 3AverageR = 1R = 2R = 3AverageR = 1R = 2R = 3Average
LBP classic96.2697.1097.9497.177.8379.2578.8378.6481.9882.5078.6981.0696.9197.3898.7697.68
HLBP92.4898.4098.1796.3572.5278.3774.8175.2376.1779.2176.0277.1393.0699.2398.1596.81
HRLBP92.6998.3198.3196.4373.5278.4674.5875.5275.0879.4275.9276.8094.6099.2398.7697.53
HLBP+LBP98.5999.5499.7099.2784.6084.5082.5083.8686.6585.0682.6784.7999.0899.5499.6999.43
HRLBP+RLBP98.5999.5699.7299.2985.2984.6082.4284.1086.8385.2182.6284.8898.9210010099.64
CPLBP95.3096.0698.0196.4576.3178.0477.5277.2979.9478.5279.2979.25
LTP99.2199.5699.4299.4085.7183.9283.4284.3585.9285.2382.2184.4599.5399.6999.8499.69
CLBP S/M98.7099.4099.4999.2085.6085.6283.2784.8386.9885.7782.6785.1499.2299.8499.6999.58
CLBP S96.0897.1397.9997.0777.9279.8578.5678.7881.4882.7578.3580.8695.9897.9998.7797.58
CLBP M94.4097.0898.1496.5473.1076.5075.0074.8777.5477.5876.1077.0795.3797.9998.6197.32
CLDP96.2377.1071.0681.4678.4059.8553.4063.8881.7964.8156.0067.5396.2377.1071.0681.46
RLBP96.3297.2797.8997.1678.4078.9478.4078.5881.7382.6078.7581.0395.9897.6898.3097.32
LBPV78.7590.7293.5687.6763.6978.7982.9675.1570.6783.4284.4279.5081.0190.1292.2887.80
CRSLBP99.6599.8499.7999.7694.2394.5093.1793.9795.4294.4593.8794.5899.6999.8499.8499.79
MRELBP99.9099.9087.0287.0287.0487.04100100
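The NN accuracies reported in Tables 2 and 3 correspond to nearest-neighbor classification of the descriptor histograms. A minimal sketch of a 1-NN classifier over histogram features, using the chi-square distance (a common choice for comparing LBP histograms, but an assumption here, as is the helper name `nn_classify`):

```python
import numpy as np

def chi2_distance(h1, h2, eps=1e-10):
    """Chi-square distance between two histograms."""
    return 0.5 * np.sum((h1 - h2) ** 2 / (h1 + h2 + eps))

def nn_classify(train_feats, train_labels, test_feats):
    """1-NN classifier: assign each test histogram the label of its
    closest training histogram."""
    preds = []
    for f in test_feats:
        dists = [chi2_distance(f, t) for t in train_feats]
        preds.append(train_labels[int(np.argmin(dists))])
    return np.array(preds)
```

Accuracy is then the fraction of test samples whose predicted label matches the ground truth; the SVM results use the same features with an SVM in place of the nearest-neighbor rule.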
Table 3. Classification accuracy (%) of the CRSLBP for different R on the KTH-TIPS2b, UMD, and Brodatz datasets with the SVM and NN classifiers.

(a) Using SVM classifier

| Method | KTH-TIPS2b R=1 | KTH-TIPS2b R=2 | KTH-TIPS2b R=3 | KTH-TIPS2b Avg | UMD R=1 | UMD R=2 | UMD R=3 | UMD Avg | Brodatz R=1 | Brodatz R=2 | Brodatz R=3 | Brodatz Avg |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| LBP classic | 89.67 | 88.62 | 85.94 | 88.07 | 97.70 | 97.60 | 96.10 | 97.13 | 90.77 | 92.16 | 91.27 | 91.40 |
| HLBP | 89.41 | 90.51 | 89.29 | 89.74 | 94.90 | 95.20 | 95.30 | 95.13 | 84.23 | 84.33 | 84.72 | 84.43 |
| HRLBP | 89.44 | 91.35 | 89.26 | 90.02 | 94.50 | 94.80 | 95.00 | 94.77 | 83.43 | 85.02 | 84.73 | 84.39 |
| HLBP+LBP | 96.00 | 96.76 | 96.27 | 96.34 | 98.70 | 99.00 | 98.60 | 98.77 | 93.55 | 94.35 | 93.95 | 93.95 |
| HRLBP+RLBP | 96.11 | 96.55 | 96.06 | 96.24 | 99.00 | 98.70 | 98.20 | 98.63 | 93.45 | 94.25 | 93.75 | 93.81 |
| LTP | 95.73 | 96.46 | 95.75 | 95.98 | 98.90 | 98.20 | 98.30 | 98.47 | 93.65 | 94.84 | 94.94 | 94.47 |
| CLBP S/M | 94.93 | 96.14 | 95.18 | 95.42 | 98.80 | 98.20 | 98.00 | 98.33 | 93.65 | 94.84 | 94.94 | 94.47 |
| CLBP S | 89.14 | 89.27 | 85.69 | 80.03 | 97.60 | 97.00 | 96.30 | 94.20 | 90.67 | 93.65 | 91.47 | 91.93 |
| CLBP M | 85.69 | 88.15 | 85.65 | 86.50 | 94.70 | 94.50 | 93.40 | 94.20 | 81.65 | 84.42 | 83.63 | 83.23 |
| CLDP | 96.23 | 77.10 | 71.06 | 81.46 | 97.60 | 87.10 | 80.10 | 88.27 | 91.07 | 69.35 | 56.45 | 72.29 |
| RLBP | 89.58 | 89.16 | 85.69 | 88.14 | 97.50 | 97.90 | 96.60 | 97.33 | 91.07 | 93.06 | 91.67 | 91.93 |
| LBPV | 78.24 | 83.12 | 84.41 | 81.92 | 88.40 | 92.70 | 92.00 | 91.03 | 64.48 | 76.49 | 75.00 | 71.99 |
| CRSLBP | 96.89 | 96.76 | 97.19 | 96.94 | 98.50 | 98.40 | 98.80 | 98.56 | 94.15 | 95.54 | 95.44 | 95.04 |
| MRELBP | – | – | – | 98.55 | – | – | – | 99.60 | – | – | – | 97.02 |

(b) Using NN classifier

| Method | KTH-TIPS2b R=1 | KTH-TIPS2b R=2 | KTH-TIPS2b R=3 | KTH-TIPS2b Avg | UMD R=1 | UMD R=2 | UMD R=3 | UMD Avg | Brodatz R=1 | Brodatz R=2 | Brodatz R=3 | Brodatz Avg |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| LBP classic | 87.08 | 84.85 | 83.59 | 85.17 | 96.00 | 92.67 | 94.00 | 94.22 | 87.42 | 90.07 | 90.73 | 89.40 |
| HLBP | 81.77 | 89.34 | 89.29 | 86.80 | 96.67 | 95.33 | 92.66 | 94.89 | 87.42 | 86.09 | 83.44 | 85.65 |
| HRLBP | 81.62 | 86.26 | 89.26 | 85.71 | 94.64 | 94.00 | 96.67 | 95.10 | 84.11 | 84.11 | 90.07 | 86.10 |
| HLBP+LBP | 94.25 | 95.79 | 96.27 | 95.44 | 98.00 | 99.33 | 100 | 99.11 | 95.37 | 96.03 | 94.04 | 95.15 |
| HRLBP+RLBP | 95.23 | 94.53 | 96.06 | 95.23 | 98.64 | 99.33 | 99.33 | 99.10 | 93.38 | 95.37 | 92.05 | 93.60 |
| LTP | 93.68 | 94.68 | 92.70 | 93.69 | 98.00 | 98.66 | 97.33 | 98.00 | 94.04 | 93.38 | 94.04 | 93.82 |
| CLBP S/M | 92.15 | 93.40 | 91.72 | 92.42 | 97.33 | 98.00 | 98.00 | 97.78 | 92.05 | 92.71 | 94.04 | 92.93 |
| CLBP S | 85.97 | 85.41 | 81.90 | 84.43 | 95.33 | 96.66 | 91.33 | 94.44 | 89.40 | 93.38 | 92.72 | 91.83 |
| CLBP M | 82.60 | 82.18 | 82.32 | 82.37 | 89.33 | 92.66 | 86.66 | 89.55 | 84.77 | 83.44 | 87.42 | 85.21 |
| CLDP | 96.23 | 77.10 | 71.06 | 81.46 | 96.66 | 83.33 | 75.33 | 85.11 | 85.43 | 60.26 | 58.94 | 68.21 |
| RLBP | 87.79 | 83.59 | 80.78 | 84.05 | 94.66 | 92.66 | 93.33 | 93.55 | 93.38 | 92.72 | 90.06 | 92.05 |
| LBPV | 71.39 | 79.95 | 72.37 | 74.57 | 65.33 | 84.00 | 87.33 | 78.88 | 59.60 | 78.15 | 74.83 | 70.86 |
| CRSLBP | 94.81 | 95.37 | 95.65 | 95.28 | 99.33 | 100 | 98.67 | 99.33 | 95.37 | 98.68 | 100 | 98.01 |
| MRELBP | – | – | – | 96.49 | – | – | – | 100 | – | – | – | 97.35 |

MRELBP is reported with a single value per dataset, shown in the Avg column.
Table 4. Classification accuracy (%) of CRSLBP compared with MRELBP using a set of R values and the SVM classifier.

| Method | Outex TC10 (inca) | Outex TC12 (tl84) | Outex TC12 (horizon) | KTH-TIPS2b | UMD | Brodatz |
|---|---|---|---|---|---|---|
| MRELBP | 99.90 | 87.02 | 87.04 | 98.55 | 99.60 | 97.02 |
| CRSLBP | 99.88 | 94.77 | 95.56 | 99.22 | 99.10 | 95.84 |