Article

Image Databases with Features Augmented with Singular-Point Shapes to Enhance Machine Learning

by Nikolay Metodiev Sirakov *,† and Adam Bowden †
Department of Mathematics, Texas A&M University-Commerce, Commerce, TX 75429, USA
* Author to whom correspondence should be addressed.
† These authors contributed equally to this work.
Electronics 2024, 13(16), 3150; https://doi.org/10.3390/electronics13163150
Submission received: 8 July 2024 / Revised: 2 August 2024 / Accepted: 5 August 2024 / Published: 9 August 2024

Abstract:
The main objective of this paper is to present a repository of image databases whose features are augmented with embedded vector field (VF) features. The repository is designed to provide the user with image databases that enhance machine learning (ML) classification. Also, six VFs are provided, and the user can embed them into her/his own image database with the help of software named ELPAC. Three of the VFs generate real-shaped singular points (SPs): springing, sinking, and saddle. The other three VFs generate seven kinds of SPs, which include the real-shaped SPs and four complex-shaped SPs: repelling and attracting (out and in) spirals and clockwise and counterclockwise orbits (centers). Using the repository, this work defines the locations of the SPs according to the image objects and the mappings between the SPs’ shapes if separate VFs are embedded into the same image. Next, this paper produces recommendations for the user on how to select the most appropriate VF to be embedded in an image database so that the augmented SP shapes enhance ML classification. Examples of images with embedded VFs are shown in the text to illustrate, support, and validate the theoretical conclusions. Thus, the contributions of this paper are the derivation of the SP locations in an image; mappings between the SPs of different VFs; and the definition of an imprint of an image and an image database in a VF. The advantage of classifying an image database with an embedded VF is that the new database enhances and improves the ML classification statistics, which motivates the design of the repository so that it contains image features augmented with VF features.

1. Introduction

The number, sizes, and variety of image databases have grown in the last decade. These databases are generated to derive information and knowledge that support solutions to problems in medicine, science, security, and industry. Also, the demand for increasing accuracy when working with image databases is high. For example, face image databases usually contain hundreds of thousands of images, so a single percentage point of error leads to thousands of incorrectly processed samples. Therefore, embedding additional features into an image carries the potential to enhance the capability of a classifier [1] to correctly identify unknown input samples. In medical images, such as those of skin lesions, the features are cluttered and chaotic; embedding a few additional image-related features makes classifiers more focused, which improves their accuracy [2,3].
VFs are classical tools in hydro-, aero-, and electrodynamics, but in recent decades, they have attracted the attention of Computer Vision and Image Analysis scientists. In [4], the authors developed an Ambrosio–Tortorelli scalar field for the purpose of object partitioning. The authors of [5] devised a method that applies the gradient vector field (GVF) of the solution of the Poisson partial differential equation to extract geometric features and objects. Another application of VFs is in facilitating the iterative matching of features through a “consensus” between VFs [6]; the advantage of this approach is a reduced number of false matches. Further, a VF generated by a bank of directional morphological openings, which considers multiple directions of a contour and uses ellipses as special structuring elements, defines new adaptive filters for image segmentation [7].
The present paper describes a repository of image databases where the geometric image features are augmented with VF features like SP shapes, trajectories of vectors, and separatrices. The utility of such a repository has been validated in several publications [1,2,3,8] and generalized in the present study.
In [3], we proved that the SP shapes are invariant under scaling, translation, and weak rotations. This property makes the SP shapes useful features for augmenting the image features and enhancing the statistics of ML classifiers. We validated this advantage with sparse representation wavelet classification (SRWC) and a sparse representation classification quaternion wavelet (SRCQW) method in [1] and with five convolutional neural networks (CNNs) in [2,3].
For the purpose of validation, we applied the above-listed ML classifiers to the public image databases ISIC2018 [9], ISIC2020 [10], COIL100 [11], and YaleB [12]. These databases contain 2D images of different types and sizes. The first two image databases consist of skin lesion images, the third one consists of 100 objects recorded from 72 different angles, and the last database contains the images of 38 faces recorded under 64 different illumination conditions. Note that ISIC2020 is the largest among the above-listed databases and has 32,542 benign and 584 malignant images for training. In addition, 10,982 non-labeled benign and malignant images are available for testing.
To facilitate the embedding, learning, and classification processes, the original images were resized to 420 × 420 for the COIL100 database and to 192 × 168 for the Yale database. Further, from ISIC2018, two image databases were generated by resizing the images to 250 × 250 and 500 × 500 pixels. Additionally, two datasets were generated from ISIC2020 by resizing the original images to 256 × 192 and 332 × 590 pixels. Recently, the image repository was enriched by loading the digit-MNIST image database [13], where the images were enlarged to 400 × 400 pixels and the VFs ψ ^ and v ψ ^ were embedded.
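The resizing steps above can be scripted in many ways; the following is a minimal, dependency-free sketch using nearest-neighbor sampling (the function name and approach are illustrative only — our pipeline used standard image tools, and an interpolating resizer would normally be preferred in practice):

```python
import numpy as np

def resize_nearest(img, new_h, new_w):
    """Nearest-neighbor resize, a dependency-free sketch of the kind of
    resizing step described above (e.g. 420 x 420 for COIL100 or
    400 x 400 for digit-MNIST). Production pipelines would normally use
    an interpolating resizer instead of nearest-neighbor sampling."""
    h, w = img.shape[:2]
    rows = np.arange(new_h) * h // new_h   # source row for each output row
    cols = np.arange(new_w) * w // new_w   # source column for each output column
    return img[rows][:, cols]
```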
To augment the image features with SP shapes, we have developed six VFs. In [1,8], we designed three VFs, u ^ , ϕ ^ , and ψ ^ , whose SPs have real Eigenvalues and shapes formed by vectors whose main directions are determined by the Eigenvectors of the SPs. Further, in [3], we developed two VFs, v u ^ and v ϕ ^ , which contain SPs with real and complex Eigenvalues, Eigenvectors, and shapes. In the present study, we formulate the sixth VF, define the mappings between their SPs, and determine their positions in an image.
We proved in [1,8] that the gradient VF (GVF) u ^ generates two kinds of SP shapes, namely, saddle and sinking. The two GVFs ϕ ^ and ψ ^ generate three real-shaped SPs: saddle, sinking, and springing. Next, we proved in [3] that the VFs v u ^ and v ϕ ^ can create seven SP shapes, which include the three real-shaped SPs as well as attracting and repelling (in and out) spirals, along with clockwise and counterclockwise orbits.
We embedded the six VFs u ^ ,   ϕ ^ ,   ψ ^ ,   v u ^ , v ϕ ^ , and v ψ ^ with the help of Matlab tools incorporated into the ELPAC software [14], initially developed for automatic image segmentation with an active contour guided by the VF u ^ . By embedding a VF into every image of a database, we built a new image database where the image features are augmented with SP shapes. Thus, by applying ELPAC to the five image databases ISIC2018 [9], ISIC2020 [10], COIL100 [11], YaleB [12], and digit-MNIST [13], we generated a set of new image databases where the six VFs are embedded. These image databases are ready to use and have the advantage of enhancing the classification statistics compared to the classification statistics of the original database [1,2,3]. The first repository of image databases with embedded VFs is available for downloading at the URL given below. The web page also contains data to support the claim that embedding a VF increases the accuracy of classification: https://www.tamuc.edu/projects/augmented-image-repository/?redirect=none (accessed on 8 July 2024).
On the same web page, we link to the ELPAC software [1,14], which can be used to embed any of the six VFs described in this paper into any image. An additional advantage of images with embedded VFs is image database augmentation [15,16] when there is a shortage of samples or imbalanced classes [17]. However, the implementation of this idea requires a deep and rigorous study supported by experimental results; hence, it will be a subject of our future works. Further, the above web page provides the SRWC [1] code, which can be used to classify original image databases and image databases with embedded VFs for the purpose of comparison with the users’ own classifiers.
The contributions of this study are as follows: (1) the definition of SP locations according to the image objects; (2) the generalization of mappings between the SP shapes if the six VFs are separately embedded into the same image; (3) the definition of a new type of image and image database named “imprint of an image and imprint of an image database in a VF”. These contributions provide the opportunity for the user to choose the most appropriate VF to be embedded into an image database in order to improve classification statistics.
The rest of this paper is organized as follows: Section 2 is composed of two subsections. The first one describes the six VFs used to augment the image features with SP shapes, while the second one describes the SP shapes, the mappings between them, and the SP locations in an image. Section 3 consists of two main subsections that describe the original database and the newly generated image database with embedded VFs, as well as the software tools available for image feature augmentation and image database classification. The paper ends with a discussion on the contributions and the advantages obtainable by classifying image databases with embedded VFs and/or imprints of image databases in VFs.

2. Vector Fields That Augment Image Features

2.1. Definition of Vector Fields

The set of VFs for augmenting image features with SP shapes can be characterized with the help of weighted Laurent polynomials [18], which distinguish 16 kinds (shapes) of SPs. Another way to describe the VFs is through the Eigenvalues of their Jacobian [1,19,20]. This approach defines seven different kinds (shapes) of SPs. In our studies, we adopted the latter classification and distinguished seven different SP shapes that we embedded into images. Three of the SP shapes are springing, sinking, and saddle. They correspond to the real Eigenvalues of the Jacobian of the gradient VFs (GVFs). The shapes of the three kinds of SPs are described in [1,8] and are illustrated in Figure 1.
The remaining four SP shapes are those corresponding to the complex Eigenvalues of the Jacobian of the VFs v u ^ , v ϕ ^ , and v ψ ^ . The SP shapes are formed according to the directions of the vectors oriented toward the directions of the Eigenvectors and resemble attracting (in) and repelling (out) spirals, as well as clockwise and counterclockwise orbits [3]. The SPs with complex shapes are usually located in homogeneous regions. Examples of SPs with spiral-out (repelling) and clockwise orbit shapes are presented in Figure 2.
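The eigenvalue-based classification adopted above can be sketched in code. The following Python function is illustrative only (it is not part of ELPAC): it maps the Eigenvalues of a 2 × 2 VF Jacobian to the seven SP shapes, with the rotation sense of spirals and orbits inferred from the off-diagonal asymmetry of the Jacobian:

```python
import numpy as np

def classify_sp(jacobian, tol=1e-9):
    """Classify a singular point by the Eigenvalues of the 2x2 VF Jacobian.

    Real Eigenvalues give springing (both positive), sinking (both negative),
    or saddle (opposite signs); a complex-conjugate pair gives spirals
    (nonzero real part) or orbits/centers (zero real part), with the rotation
    sense taken from the curl-like term J[1,0] - J[0,1].
    """
    ev = np.linalg.eigvals(jacobian)
    if np.all(np.abs(ev.imag) < tol):          # real Eigenvalues
        a, b = ev.real
        if a > 0 and b > 0:
            return "springing"
        if a < 0 and b < 0:
            return "sinking"
        return "saddle"
    re = ev.real[0]                             # complex-conjugate pair
    sense = "counterclockwise" if (jacobian[1, 0] - jacobian[0, 1]) > 0 else "clockwise"
    if abs(re) < tol:
        return f"{sense} orbit"
    return "spiral-in" if re < 0 else "spiral-out"
```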
To generate the seven kinds of SP shapes described above, we have developed three GVFs with real Eigenvalues [1,8] and three non-GVFs with real and complex Eigenvalues [3]. The six VFs are defined with the help of the solution u ^ ( x , y ) of the Poisson Image equation [1,3,8], where Ω denotes the image frame and I ( x , y ) the image function:
$$\Delta \hat{u}(x,y) = \frac{|\nabla I(x,y)|^{2}}{1 + I(x,y)}, \qquad \hat{u}\big|_{\partial \Omega} = I(x,y), \qquad (x,y) \in \Omega. \quad (1)$$
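For readers who wish to experiment, Equation (1) can be solved numerically in several ways. The following Jacobi-iteration sketch in Python assumes unit grid spacing and a plain Dirichlet boundary condition; it is a rough illustration, not the solver used in ELPAC:

```python
import numpy as np

def solve_poisson(I, iters=2000):
    """Jacobi iteration for the Poisson image equation of Equation (1):
    Delta u = |grad I|^2 / (1 + I) on the image frame, with u = I on the
    boundary. Unit grid spacing (h = 1) is assumed."""
    I = I.astype(float)
    gy, gx = np.gradient(I)
    f = (gx**2 + gy**2) / (1.0 + I)
    u = I.copy()                       # boundary rows/columns stay fixed at I
    for _ in range(iters):
        u[1:-1, 1:-1] = 0.25 * (u[:-2, 1:-1] + u[2:, 1:-1]
                                + u[1:-1, :-2] + u[1:-1, 2:]
                                - f[1:-1, 1:-1])
    return u
```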
The developed GVFs are defined with the following equations [1,8]:
$$\nabla \hat{u} = \hat{u}_{x}\,\mathbf{i} + \hat{u}_{y}\,\mathbf{j}, \qquad \nabla \hat{\phi} = \hat{\phi}_{x}\,\mathbf{i} + \hat{\phi}_{y}\,\mathbf{j}, \qquad \nabla \hat{\psi} = \hat{\psi}_{x}\,\mathbf{i} + \hat{\psi}_{y}\,\mathbf{j}, \quad (2)$$
and they generate the sinking-, springing-, and saddle-shaped SPs. The functions in Equation (2) are given by
$$\hat{\phi} = \|\nabla \hat{u}\|^{2}, \qquad \hat{\psi} = \frac{\Delta \hat{u} + 1}{\|\nabla \hat{u}\|^{2}} = \frac{\Delta \hat{u} + 1}{\hat{\phi}}, \quad (3)$$
where ϕ ^ is continuous for every ( x , y ) ∈ Ω , while ψ ^ is discontinuous at the critical points (CPs) of u ^ . The next three VFs are conjugates of the GVFs in Equation (2) [3], and we will name them non-GVFs:
$$v_{\hat{u}} = \hat{u}_{x}\,\mathbf{i} - \hat{u}_{y}\,\mathbf{j}, \qquad v_{\hat{\phi}} = \hat{\phi}_{x}\,\mathbf{i} - \hat{\phi}_{y}\,\mathbf{j}, \qquad v_{\hat{\psi}} = \hat{\psi}_{x}\,\mathbf{i} - \hat{\psi}_{y}\,\mathbf{j}. \quad (4)$$
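A compact numerical sketch of Equations (2)–(4), computing all six VF component arrays from the Poisson solution u ^ on a grid, might look as follows. Finite differences via numpy.gradient are assumed, and the small epsilon guard is our own addition to keep the computation finite near the CPs, where ψ ^ is genuinely discontinuous:

```python
import numpy as np

def vector_fields(u):
    """Given the Poisson solution u(x, y) on a grid, return the six VFs as
    (vx, vy) component arrays: the three gradient fields of Equation (2)
    with phi and psi from Equation (3), and their conjugates from
    Equation (4). A small epsilon guards the division defining psi."""
    eps = 1e-12
    uy, ux = np.gradient(u)                          # d/drow, d/dcol
    phi = ux**2 + uy**2                              # phi = |grad u|^2
    lap = np.gradient(ux, axis=1) + np.gradient(uy, axis=0)
    psi = (lap + 1.0) / (phi + eps)                  # psi = (Lap u + 1) / phi
    phiy, phix = np.gradient(phi)
    psiy, psix = np.gradient(psi)
    return {
        "grad_u":   (ux, uy),     "v_u":   (ux, -uy),
        "grad_phi": (phix, phiy), "v_phi": (phix, -phiy),
        "grad_psi": (psix, psiy), "v_psi": (psix, -psiy),
    }
```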
The VFs defined in Equation (4) generate the above three real-shaped SPs and the following four complex-shaped SPs, named attracting (in) and repelling (out) spirals, as well as clockwise and counterclockwise orbits (also known as centers) [19,20].
The ELPAC software was initially introduced in [14] with the capability of embedding u ^ to evolve an active contour. The tools to embed ϕ ^ and ψ ^ were incorporated to conduct the study in [1], while [3] extended ELPAC with tools to embed v u ^ and v ϕ ^ . Most recently, we added a tool to embed v ψ ^ . Hence, ELPAC is capable of augmenting the features of an image with the shapes of the SPs generated by the six VFs; the image in Figure 3 with the embedded VFs is shown in Figures 4, 6 and 7. Further, if we remove the objects from an image with an embedded VF, we obtain an image of the VF generated from the original image. Thus, we call an image containing only the VF generated from the original image an “imprint of the image in the VF”. The imprints of the image from Figure 3 in the six VFs are shown in Figure 4b,d,f,h,j,l. Further, we define the notion of the “imprint of an image database in a VF” as the set of imprints of the images from the original database in the VF.

2.2. Vector Field SPs for Image Feature Augmentation

In the present section, we describe the SP shapes created by each of the six VFs and list approximately where, according to the image objects, the SPs are located. Also, we determine the mappings between the SP shapes if different VFs are separately embedded into an image. This helps define the similarities and differences between VF components, like SPs, separatrices [1,14], and architectures (skeletons [22]), if they are embedded into the same image. This comparison facilitates the choice of the VF most appropriate for embedding into an image database to enhance ML and improve classification.
Recall that each of the VFs introduced in Section 2.1 is defined with the solution u ^ ( x , y ) of Equation (1), which we solve for every image I ( x , y ) of an image database. This implies that, in every image, we embed VF features like SP shapes, the edges of SP shapes, and separatrices [1,19,22]. The last entity is created by SP shapes and trajectories that connect them. An SP edge is a shape composed of a string of SP shapes generated in the very close vicinity of an object’s edge, as shown in Figure 5c,d. In Figure 5c, one can observe a string of sinking SPs, while in Figure 5d, a string of springing out SPs is exhibited. Note that the listed VF features are created by SPs and the trajectories of vectors embedded into the original image through the corresponding VF. The embedding is obtained by drawing vectors using either the solution u ^ ( x , y ) of Equation (1) or the calculation of the functions in Equation (3) on the image frame Ω .
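Since the SPs are the points where both VF components vanish, candidate SP locations can be found numerically by a simple zero-crossing test over grid cells. The sketch below is illustrative only and is not the detection procedure implemented in ELPAC:

```python
import numpy as np

def find_singular_points(vx, vy):
    """Return (row, col) grid cells where both VF components change sign,
    a simple zero-crossing test for candidate SP locations. Illustrative
    only; not the SP detection procedure used by ELPAC."""
    def sign_change(c):
        # True where the four corners of a grid cell do not share one sign
        s = np.sign(c)
        corners = [s[:-1, :-1], s[1:, :-1], s[:-1, 1:], s[1:, 1:]]
        return np.minimum.reduce(corners) != np.maximum.reduce(corners)
    mask = sign_change(vx) & sign_change(vy)
    return list(zip(*np.nonzero(mask)))
```

A cell flagged by this test can then be passed to an Eigenvalue-based classifier of the local Jacobian to decide which of the seven shapes it carries.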
In [1,8], we proved that u ^ generates sinking and saddle SPs, while ϕ ^ and ψ ^ generate sinking, saddle, and springing SPs. It follows that u ^ possesses less variety and a smaller number of SPs compared with the other two VFs. Further, we showed in [1] that, in homogeneous regions, ϕ ^ and ψ ^ have the same number of SPs such that the springing SPs of ϕ ^ are mapped to the sinking SPs of the ψ ^ VF and vice versa. At the same time, the saddle SPs do not change shape, but their vectors have opposite directions in the two VFs. Furthermore, the saddle SPs can appear only in homogeneous regions, while the springing and sinking SPs can appear in both boundary and homogeneous regions. Also, the trajectories and the separatrices of the VFs ϕ ^ and ψ ^ have similar architectures, but their vectors have opposite directions [1]. The above-described SP properties and mappings of the VFs u ^ , ϕ ^ , and ψ ^ can be observed in Figure 5.
Figure 5b and Figure 6a show that the GVF u ^ generates no SPs at convexity vertices, edges, or external concavities but has sinking SPs at concavity corners, as shown in Figure 5e. Further, ϕ ^ generates sinking SPs on the boundary convexity vertices and edges, as shown in Figure 5c and Figure 6b, and no SPs at the boundary concavities’ corners (see Figure 5f). Next, ψ ^ generates springing out SPs on the boundary convexity vertices and edges, as shown in Figure 5d and Figure 6c, and no SPs at the boundary concavities’ corners (Figure 5g). Further, u ^ creates only saddle SPs in the object’s core homogeneous regions (see Figure 5e), while ϕ ^ generates springing SPs at the saddle SP locations of u ^ and a saddle SP at a non-CP between the springing SPs (Figure 5f). Also, the VF ϕ ^ generates saddle SPs at external concavities (Figure 6b). The VF ψ ^ preserves the saddle SPs of ϕ ^ , switching the vectors’ directions, while replacing the springing with sinking SPs (Figure 5f,g, as well as Figure 6b,c).
Note that Figure 6 provides an overall view of the four-ray star from Figure 5a, where the three GVFs with real-shaped SPs are embedded. One may observe that the objects’ external concavity regions in Figure 6b,c contain saddle SPs, but the vectors that create their transversal and hyperbolic trajectories have opposite directions. Once again, one may observe that the objects with the embedded VF ϕ ^ create the edges of sinking SPs, while ψ ^ generates the edges of springing SPs. SP edges of the latter kind are better visually exhibited than those of the former type.
We continue hereafter with a description of the properties and locations of the SPs generated by the conjugate VFs, whose Jacobians have real and complex Eigenvalues. Recall that these VFs contain seven kinds of SPs: sinking, springing, saddle, spiral in (attracting), spiral out (repelling), and orbits with clockwise and counterclockwise directions. Examples of their shapes are shown in Figure 1, Figure 2, Figure 5, and Figure 7.
In [3], we proved that the CPs of the function u ^ ( x , y ) , which is the solution of Equation (1), can map to any of the seven kinds of SP shapes of v u ^ . Further, in the same paper, we proved that the CPs of ϕ ^ map to the saddle SPs of v ϕ ^ . Recently, we validated that the same holds for the CPs of ψ ^ and the SPs of v ψ ^ . Also, we validated that the SPs of v ϕ ^ map to the SPs of v ψ ^ such that a springing SP maps to a sinking one and vice versa; saddle SPs do not change shape; spiral-in SPs map to spiral-out SPs and vice versa; and the clockwise orbits map to counterclockwise orbits and vice versa. These mappings imply that the VFs v ϕ ^ and v ψ ^ possess the same number of SPs and trajectory architectures, but the vectors that compose the two kinds of VF architectures have opposite directions. Moreover, the positions of the SPs generated by the two VFs are in close vicinity to each other if projected on one and the same plane. Further, we note that the SP edges of the VFs v ϕ ^ and v ψ ^ are alike as structures, but the vectors of these structures are opposite to each other. One can observe in Figure 7b,c that v ϕ ^ possesses sinking SP edges, while v ψ ^ has springing SP edges. Further, v ϕ ^ has saddle SPs with incoming vertical transversals, while the horizontal ones are outgoing, as shown in Figure 7e. On the other hand, the transversals of the saddle SPs of v ψ ^ at the same positions are built from vectors with opposite directions, as Figure 7f shows.
Next, one can observe that the images with the embedded VF v u ^ show no SPs located on the object’s exterior nor on its boundary. Hence, v u ^ does not create SP edges, nor are SPs present at object boundaries, on external concavities, on inner convexities, or at convexity vertices. However, any of the seven SPs may show up in the core part of the object. The above statements are validated in Figure 7a,d, where we present a zoomed-in portion of the lower ray and the core part of the four-ray star from Figure 5a with the embedded VF v u ^ . Note that part Figure 7d exhibits spiral-out (the upper-left SP), spiral-in (the upper-right SP), and springing SPs (the lower one). An overall view of the same object and VF is shown in Figure 7g where the SPs are present only in the object core.
The VF v ϕ ^ generates spiral-in SPs on the exterior concavities of the object (Figure 7h). Also, saddle SPs are located at the convex vertices of the boundary, as shown in Figure 7b. Moreover, v ϕ ^ generates edges of sinking SPs (Figure 7b) and springing SPs (see the horizontal rays in Figure 7h). Further, the core interior of the object may contain several SP shapes from the entire set of seven kinds of SPs. As one may observe from Figure 7e, the core of the four-ray star contains three saddle SPs, and springing and sinking SPs are between them. The upper two saddle SPs have outgoing transversal trajectories that go to the sinking SPs, while the incoming transversal of the lower saddle SP comes from the springing SP.
The last VF to consider is v ψ ^ . Recall that, if embedded into an object, it has the same number of SP shapes and similar architectures of its trajectories as v ϕ ^ . Note that the vectors that build up the VF v ψ ^ trajectories and SPs have opposite directions to the vectors of v ϕ ^ . This can be observed in part Figure 7c, where at the convex vertex is located a saddle SP whose vectors are opposite to those of the saddle SP in Figure 7b. Also, the four spiral-out SPs in the external concavities of the star in Figure 7i are created with vectors whose directions are opposite to the directions of the vectors that created the spiral-out SPs in the external concavities (Figure 7h). Further, Figure 7f shows that the VF v ψ ^ generates five SPs in the core of the four-ray star, as v ϕ ^ does in Figure 7e. In Figure 7f, there are again three saddle SPs, but they have opposite vectors if compared with the saddle SPs in Figure 7e. The remaining two SPs, in between the three saddles, are springing and sinking, but they have switched positions, such that the sinking SP is below the springing SP in Figure 7f, while the former is above the latter in Figure 7e.
Following the above reasoning and observations, we developed the diagram in Figure 8 to describe the mappings between the CPs of the functions u ^ , ϕ ^ , and ψ ^ and the SPs of the six VFs if they are separately embedded in one and the same image. Note that VFs having a one-to-one correspondence have the same trajectory architectures but vectors with opposite directions, as we described in the above paragraphs. Also, with the term “partial”, we mean that only part of the SPs of u ^ map to the SPs of ϕ ^ , because u ^ has sinking SPs ( u ^ has maxima, as proven in [1]) at concavity vertices (Figure 5e), while ϕ ^ does not have SPs at concavity vertices (Figure 5f). Also, ϕ ^ has sinking SPs ( ϕ ^ has maxima, as proven in [1]) at convexity vertices (Figure 5c), while u ^ has no SPs at convexity vertices (see Figure 5b).
Table 1 shows the distribution of the SPs of a VF across an image if the VF is embedded into the image. It may serve as a guide to help the user select the VF to be embedded into an image database in order to provide the best classification statistics. In the table, sink denotes sinking SP(s), spring denotes springing SP(s), core denotes the “core of an object”, branches means the branches of objects, and edges refers to “boundary edges”.

3. Repository of Image Databases with Embedded VFs

3.1. Original Datasets

In this section, we detail the technical characteristics of the original datasets, into which we embedded the VFs defined above and which were used to obtain the results in [1,2,3]. Recently, we added the image database digit-MNIST [13] with the embedded VFs ψ ^ and v ψ ^ to the repository.
The ISIC 2018 [9] and 2020 [10] image databases contain skin lesion images along with malignant and benign labels for each skin lesion. For the original ISIC 2018 database, there are 3694 skin lesion images with dimensions ranging from 640 × 480 to 6670 × 4440 in JPG format. For the original ISIC 2020 database, there are 33,126 skin lesion training images and 10,982 skin lesion testing images with dimensions ranging from 640 × 480 to 6000 × 4000 in DCM format.
One primary use case for the ISIC2020 database is to improve the ML diagnosis of skin lesions as either malignant or benign. Thus, the database is broken down into two classes of images, which were labeled by three dermatologists. ML is tasked with correctly labeling each skin lesion image as one of the two classes. Note that the training images were labeled by dermatologists, but the testing images have no labels; the classification accuracy is reported when the results are uploaded to the database website [10]. Examples with embedded VFs from the ISIC skin lesion datasets are found in Figure 9.
The original COIL100 [11] database contains 7200 images. These images were produced by taking 100 common household objects and rotating each object 5° for a total of 72 rotations. This produced 72 × 100 = 7200 images in total, stored in PNG format. The images are sized with dimensions of 128 × 128 pixels.
One purpose of this database is to help train ML to recognize a variety of objects and to recognize these objects regardless of the viewing angle. The different classes in this case are the 100 objects presented to the ML algorithm with 72 different rotations.
To embed VFs to augment the image features of the COIL100 database, we resized its images to 420 × 420. Examples from this database with embedded VFs are found in Figure 10.
The original Yale face database [12] contains 2414 face images taken under different lighting conditions. The original images are stored in the PNG format with dimensions of 341 × 385 . Note that to conduct the experiments in [1], we resized the original images to 192 × 168 and embedded the VFs again. Examples from this database are found in Figure 11.
The Yale face database allows ML to be trained on a set of face images under different lighting conditions in order to perform facial recognition. The classes in this case correspond to the 38 faces, with each face photographed under 64 different angles of lighting.
The digit-MNIST [13] image database consists of about 60,000 images of the digits 0–9 and approximately 10,000 testing images, all of them with a size of 28 × 28 . Since such images are too small for embedding VFs, we extended their sizes to 400 × 400 .

3.2. Image Datasets with Augmented Image Features

Our new repository of image databases with embedded VFs, which augment the image features with SP shapes, currently hosts several such image databases. They were generated from the ISIC2018, ISIC2020, COIL100, and digit-MNIST databases with the help of the ELPAC software [14], which can be found at https://www.tamuc.edu/projects/augmented-image-repository/?redirect=none (accessed on 6 July 2024) and is included in the repository as well.
In particular, from the original ISIC2018 image database, we generated two ISIC2018 databases in PNG format with sizes of 500 × 500 and 250 × 250 . Then, in each of them, we embedded the VFs ϕ ^ and ψ ^ . Further, we resized the ISIC2020 images to 256 × 192 in PNG format and embedded the VF ϕ ^ or ψ ^ into every training image of the resized database. We named the new image databases as follows:
  • ISIC2018-500 × 500- ϕ ^ and ISIC2018-500 × 500- ψ ^ ;
  • ISIC2018-250 × 250- ϕ ^ and ISIC2018-250 × 250- ψ ^ ;
  • ISIC2020-train- ϕ ^ and ISIC2020-test- ϕ ^ ;
  • ISIC2020-train- ψ ^ and ISIC2020-test- ψ ^ .
In addition, we made imprints of ISIC2020 into the VFs ϕ ^ and ψ ^ and denote them by ISIC2020-imprint- ϕ ^ and ISIC2020-imprint- ψ ^ . Note that in the repository, we used the name “VF only” to assist users who did not read the paper.
Also, we generated four image databases by using the training and testing sets of ISIC2020 and embedding the VFs v u ^ and v ϕ ^ into them. Analogously, we named the datasets as follows:
  • ISIC2020-train- v u ^ and ISIC2020-test- v u ^ ;
  • ISIC2020-train- v ϕ ^ and ISIC2020-test- v ϕ ^ .
In addition, in the resized COIL100, we embedded the VFs ϕ ^ , ψ ^ , v u ^ , v ϕ ^ , and v ψ ^ generating five new image databases whose image features are augmented with SP shapes. According to the adopted naming convention, we named the generated image databases as follows: COIL100- ϕ ^ , COIL100- ψ ^ , COIL100- v u ^ , COIL100- v ϕ ^ , and COIL100- v ψ ^ .
To further validate the robustness of the idea that ML classification is boosted by embedding a VF into an image database (which augments the image features with VF features), we added Gaussian noise to the original datasets COIL100 and ISIC2020. This Gaussian noise was generated using the built-in MATLAB function with a mean of 0 and a variance of 0.001. The noisy images were embedded with the VFs v u ^ and v ϕ ^ . Thus, we created the six new image datasets listed below:
  • COIL100-noise- v u ^ and COIL100-noise- v ϕ ^ ;
  • ISIC2020-train-noise- v u ^ and ISIC2020-test-noise- v u ^ ;
  • ISIC2020-train-noise- v ϕ ^ and ISIC2020-test-noise- v ϕ ^ .
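The noise step can be reproduced outside MATLAB. The following Python sketch approximates MATLAB-style Gaussian noise (mean 0, variance 0.001 applied on the [0, 1] intensity scale), though the exact random draws will of course differ from the MATLAB implementation:

```python
import numpy as np

def add_gaussian_noise(img_uint8, mean=0.0, var=0.001, seed=None):
    """Approximate MATLAB-style Gaussian image noise with mean 0 and
    variance 0.001: the image is scaled to [0, 1], zero-mean Gaussian noise
    with the given variance is added, and the result is clipped back to the
    valid 8-bit range."""
    rng = np.random.default_rng(seed)
    img = img_uint8.astype(float) / 255.0
    noisy = img + rng.normal(mean, np.sqrt(var), img.shape)
    return (np.clip(noisy, 0.0, 1.0) * 255.0).round().astype(np.uint8)
```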
In [2], we applied our CNN to the image database COIL100-noise- v u ^ (with added noise and the embedded VF v u ^ ) and classified it with an accuracy of 92.97%, which is 3.56 percentage points higher than the accuracy of 89.41% obtained when classifying the original COIL100 database with the same CNN. The same held true for ISIC2020-test-noise- v u ^ , which was classified with an accuracy of 92.09%, while the original image database ISIC2020 was classified with 86.67% accuracy by the same CNN.
On the other hand, ISIC2020-test-noise- v ϕ ^ was classified with an accuracy of 85.26%, which is 1.41 percentage points lower than the classification of the original ISIC2020. This is because the skin lesion images contain a large number of structures and details, which makes the image features chaotic. Hence, adding noise and embedding additional VF features clutters the image and decreases the classification accuracy because the consistency between the image and the VF features is broken (Figure 9a,c–e).
Recently, we extended the repository by adding digit-MNIST [13], whose images were enlarged to a size of 400 × 400 . Then, we embedded the VFs ψ ^ and v ψ ^ into the enlarged digit-MNIST and named the two databases digit-MNIST- ψ ^ and digit-MNIST- v ψ ^ .
To access the repository that contains the above-listed image databases and ELPAC and SRWC software, please use the URL https://www.tamuc.edu/projects/augmented-image-repository (accessed on 6 July 2024).
Files are hosted as single ZIP files for each respective image database and software. Files stored in each respective image database ZIP are saved in PNG format. New image databases with embedded VFs will be added in the future.

3.3. Software

The image database repository contains links to the software tool ELPAC, whose GUI is shown in Figure 12. This software provides sophisticated and mature capabilities for image segmentation based on the work in [14] and VF embedding based on the works in [1,2,3]. Hence, ELPAC incorporates the following features:
  • Segmenting via an evolving contour directed by the VF u ^ . To guide the active contour, with parameters customizable under “Contour Size and Shape”, a VF should be selected from the drop-down menu under “Vector Field Generation”. The recommended choice is “Nabla u Hat”.
  • Splitting and tracing contours around multiple objects for full image segmentation. Such options are available under “Splitting Options”.
  • Selecting any of the VFs from the “Vector Field Generation” drop-down menu. The six VFs described in Section 2 are at the top of the list. The first three of them have real-shaped SPs [1], while the next three have real- and complex-shaped SPs [3]. The selected VF will be embedded into the image file chosen using the Browse option.
In addition to the GUI for ELPAC, the repository provides code for batch VF embedding into an image database. Instructions for how to use the code and software are included in the repository. Furthermore, the code for the ML classifier SRWC [1] is given as well.
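The batch-embedding workflow mentioned above can be sketched as a simple driver loop over an image directory. The `embed_vf` callable below is a hypothetical stand-in for the repository's actual embedding code, and the file-naming convention is assumed; only the batch-processing structure is shown.

```python
from pathlib import Path

def batch_embed(src_dir: str, dst_dir: str, embed_vf, suffix: str = "-vf") -> int:
    """Apply a VF-embedding function to every PNG in src_dir.

    `embed_vf` maps the input image bytes to output image bytes; it stands
    in for the repository's embedding code. Returns the number of images
    processed.
    """
    out = Path(dst_dir)
    out.mkdir(parents=True, exist_ok=True)
    count = 0
    for png in sorted(Path(src_dir).glob("*.png")):
        data = embed_vf(png.read_bytes())
        # Mirror the repository's naming scheme, e.g. COIL100-vf.
        (out / f"{png.stem}{suffix}.png").write_bytes(data)
        count += 1
    return count
```

Running `batch_embed("COIL100", "COIL100-vf", embed_vf)` would then produce a new database directory ready for classification.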

4. Conclusions

The present paper describes our repository containing a total of twenty-nine image databases built by embedding the VFs ϕ ^ ,   ψ ^ ,   v u ^ ,   v ϕ ^ , and v ψ ^ into four public image databases. Recall that six of the image databases are “injected” with Gaussian noise, and VFs are embedded afterward. Also, four of the image databases are imprints of ISIC2020 into ϕ ^ and ψ ^ . These image databases offer the advantage of enhancing classification statistics if compared with the classification statistics of the original image databases, as shown in [1,2,3]. The advantage was validated by applying three ML methods, namely, sparse representation wavelet classification (SRWC) [1], sparse representation classification with quaternions in wavelet domain (SRCQW) [1], and five CNNs [2].
Recall that the contributions and the novelties of the present paper, as formulated in the Introduction and validated throughout the text, are as follows:
  • We defined the image regions where the different kinds of SPs will appear, as shown in Table 1.
  • We determined the mappings between the SPs of the new VF v ψ ^ and the SPs of the VF v ϕ ^ , formulated in [2, 3], as shown in Figure 8, if the two VFs are separately embedded into the same image.
  • We defined a new type of image and image database, named “imprint of an image and imprint of an image database in a VF”.
The new findings allow the user to choose the VF to be embedded into a particular image database so as to improve the classification statistics compared with those of the original image database.
To understand the nature of the VFs and how they correlate with each other, in [1], we determined the mappings between the CPs of u , ϕ ^ , and ψ ^ and the SPs of the GVFs u ^ ,   ϕ ^ ,   ψ ^ . In [3], we investigated the mappings between the CPs of u and the SPs of v u ^ ,   v ϕ ^ , as well as between the shapes of the SPs of the latter two VFs. The present paper contributes to the study of SP shape mapping through the addition of the VF v ψ ^ to the diagram of SP mappings, as shown in Figure 8.
Further, the present paper introduces the notion of an “imprint of an image in a VF”. The imprints of the malignant skin lesion from Figure 3 into the six VFs are shown in Figure 4b,d,f,h,j,l. One may observe that a single image creates different imprints in the different VFs. To validate the usefulness and the efficiency of the new notion, we created an imprint of the ISIC2020 image database [10] in the VF ϕ ^ and named the new database ISIC2020-imprint- ϕ ^ . Then, in [1], we validated that classifying ISIC2020-imprint- ϕ ^ gives an accuracy of 88.89%, while the accuracy of classifying the original image database is 84.21%.
From the above descriptions, derivations, observations, Figure 8, and Table 1, we derive information regarding the VFs’ SP shapes, the mappings between them, and the SP locations in image objects. Hence, we conclude that the VF with the smallest number of SPs is v u ^ , which generates SPs only in the core part of the interior (Figure 7a,d,g). The next is u ^ , which generates SPs in the core part of the interior and at the concavity corners (Figure 5b,e, and Figure 6a). Note that v u ^ can generate the whole set of the seven types of SP shapes, while u ^ generates only saddle and springing SPs, as proven in [1,8].
The pair of VFs ϕ ^ and ψ ^ generate the three kinds of real-shaped SPs (springing, sinking, and saddle) in the objects’ exterior concavities, branches, boundary convexity vertices, and boundary edges. Therefore, ϕ ^ and ψ ^ generate more SPs than v u ^ and u ^ (Figure 5b,e, Figure 6a, and Figure 7a,d,g). Recall that ϕ ^ and ψ ^ have similar architectures created by the SPs and the trajectories connecting them, but the vectors that build these architectures are inverse to each other, as stated in Section 2.2, and can be observed in Figure 5c,d,f,g, as well as Figure 6b,c.
The last pair of VFs in our study is v ϕ ^ and v ψ ^ , and they generate almost the same number of SPs as ϕ ^ and ψ ^ in the object core, the exterior concavities, convexity vertices, branches, and boundary edges. Comparing Figure 5 and Figure 6 with Figure 7, one can tell that the real-Eigenvalued VFs ( ϕ ^ and ψ ^ ) have springing and sinking SPs at the convexities’ vertices and saddle SPs in the external concave regions and the object branches. On the other hand, the complex-Eigenvalued VFs ( v ϕ ^ and v ψ ^ ) have saddle SPs on the convexities’ vertices, spiral-out SPs in the external concave regions, and springing, sinking, and spiral SPs at object branches. Regarding the edges of SPs, those generated by ϕ ^ and v ϕ ^ are similar (see Figure 5 and Figure 6). The same holds for ψ ^ and v ψ ^ . Finally, the VF architectures of v ϕ ^ and v ψ ^ are alike but have opposite vectors that create them (see Figure 7).
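The distinction above between real- and complex-Eigenvalued VFs follows the standard phase-plane taxonomy of a 2-D singular point, determined by the eigenvalues of the VF's Jacobian at that point: real eigenvalues yield saddle, springing (source), or sinking (sink) SPs, while complex-conjugate eigenvalues yield spiral-out, spiral-in, or orbit (center) SPs. The sketch below illustrates that classical taxonomy only; it is not the authors' SP detection code.

```python
import numpy as np

def classify_sp(J: np.ndarray, tol: float = 1e-9) -> str:
    """Classify a 2-D singular point from the Jacobian J of the VF there."""
    ev = np.linalg.eigvals(J)
    if abs(ev[0].imag) > tol:            # complex-conjugate pair
        re = ev[0].real
        if re > tol:
            return "spiral out"
        if re < -tol:
            return "spiral in"
        return "orbit"                   # purely imaginary: a center
    a, b = sorted(ev.real)               # real eigenvalues, a <= b
    if a < -tol < tol < b:               # opposite signs
        return "saddle"
    return "springing" if a > 0 else "sinking"
```

For instance, `classify_sp(np.array([[1, -2], [2, 1]]))` (eigenvalues 1 ± 2i) reports a spiral-out SP, while `classify_sp(np.array([[0, -1], [1, 0]]))` (eigenvalues ± i) reports an orbit.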
The above analysis provides knowledge to assist users in their choice of VF to be embedded into a given image database and/or to make an imprint of the database into a certain VF in order to improve the classification statistics.
For the needs of [2], we embedded the VFs v u ^ and v ϕ ^ into the image databases ISIC2020 [10] and COIL100 [11]. Then, five contemporary CNN classifiers were applied to classify the original image databases and the image database with embedded VFs. The results show that the maximum accuracies of classifying the original databases ISIC2020 [10] and COIL100 [11] were 91.69% and 90.99%, respectively [2]. Classifying the databases with embedded v u ^ increased the maximum classification accuracies to 93.25% and 91.85%, respectively [2]. However, embedding v ϕ ^ into the original databases decreased the accuracies to 78.19% and 90.08%, respectively.
Please note that embedding v ϕ ^ into the cluttered skin lesion images led to a drop in accuracy of about 13%, while embedding the same VF into COIL100, whose images contain few details, led to a drop of only 0.91% and increased the precision from 70.46% to 71.57%.
Please note that, in [2], for the image databases COIL100- v u ^ , ISIC2020-test- v u ^ , and ISIC2020-test- v ϕ ^ , we used slightly different notations, namely, v u ^ ( C ) , v u ^ ( I ) , and v ϕ ^ ( I ) , respectively.
The advantage of the described method for the ML classification of image databases with embedded VFs is further validated in [1]. In that work, the VFs ϕ ^ and ψ ^ were embedded into ISIC2018 [9], ISIC2020 [10], COIL100 [11], and YaleB [12], creating ten new image databases, including an imprint of ISIC2020 in ϕ ^ and ψ ^ . We classified the original image databases and their derivatives with embedded VFs by applying the software tools SRWC and SRCQW, described in [1]. The accuracy of classification of the VF-embedded image databases increased, except in the case of YaleB. For example, COIL100- ϕ ^ was classified by SRWC with an accuracy of 82.83%, while the original image database exhibited 81.29% [1]. Also, as mentioned above, the accuracy of the classification of ISIC2020-imprint- ϕ ^ by SRCQW was 88.89%, while the original ISIC2020 was classified with 84.21% accuracy. Hence, the ISIC2020 imprint classification showed an increase of 4.68%.
There was a drop in accuracy for the database YaleB with embedded ϕ ^ and ψ ^ compared to the accuracy of the original YaleB because the objects’ boundaries are not present in the images. Hence, the SP edges, which play a role in the classification, were not generated by the VFs.
Our studies will continue by enlarging our repository with additional contemporary image databases with embedded VFs and imprints of image databases into VFs. Also, we are developing gradient and conjugate gradient VFs of the linear combinations of the functions ϕ ^ and ψ ^ and will embed the new VFs into the presently used and other contemporary image databases. Further, we will create an image database of SPs [23] to train ML classifiers and develop an approach to apply VFs to image data augmentation [15,16].

Author Contributions

Conceptualization, N.M.S. and A.B.; methodology, N.M.S.; validation, N.M.S. and A.B.; formal analysis, N.M.S. and A.B.; data curation, A.B.; writing—N.M.S.; review and editing—N.M.S. and A.B.; supervision—N.M.S.; project administration—N.M.S. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Data Availability Statement

The link below provides access to a web page where the reader will find the image datasets with embedded VFs ready to use and capable of enhancing the classification statistics of machine learning (ML) methods. Also, this page provides the software ELPAC [1,14], which the reader can use to embed any of the above-described six VFs into her/his image dataset(s) to boost the classification statistics of the used classifier. The web page also provides a code for batch processing for embedding a VF into an image database and code for sparse representation wavelet classification (SRWC) [1]. The user can apply SRWC to classify the original image dataset and the corresponding dataset with an embedded VF: https://www.tamuc.edu/projects/augmented-image-repository (accessed on 6 July 2024).

Acknowledgments

The authors would like to express their thanks to Jeremy Gamez and Jeff Faunce of the Center for IT Excellence at Texas A&M University-Commerce, and the team from the Office of Marketing and Communications at Texas A&M University-Commerce, for partially supporting this project by providing storage space and web page design.

Conflicts of Interest

The authors declare no conflicts of interest.

Abbreviations

The following abbreviations are used in this manuscript:
ML	machine learning
VF	vector field
GVF	gradient vector field
SP	singular point
CP	critical point
NN	neural network
SRWC	sparse representation wavelet classification
SRCQW	sparse representation classification with quaternions in wavelet domain
CNN	convolutional NN
SL	skin lesion

References

  1. Sirakov, N.M.; Bowden, A.; Chen, M.; Ngo, L.H.; Luong, M. Embedding vector field into image features to enhance classification. J. Comput. Appl. Math. 2024, 441, 115685. [Google Scholar] [CrossRef]
  2. Igbasanmi, O.; Sirakov, N.M.; Bowden, A. CNN for Efficient Objects Classification with Embedded Vector Fields. In Computing, Internet of Things and Data Analytics; García Márquez, F.P., Jamil, A., Ramirez, I.S., Eken, S., Hameed, A.A., Eds.; ICCIDA 2023, Studies in Computational Intelligence; Springer: Cham, Switzerland, 2024; Volume 1145, pp. 297–309. [Google Scholar] [CrossRef]
  3. Igbasanmy, O.D.; Sirakov, N.M. On the Usefulness of the Vector Field Singular Points Shapes for Classification. Int. J. Appl. Comput. Math. 2024, 10, 52. [Google Scholar] [CrossRef]
  4. Tari, S.; Genctav, M. From a non-local Ambrosio-Tortorelli phase field to a randomized part hierarchy tree. J. Math. Imaging Vis. 2014, 49, 69–86. [Google Scholar] [CrossRef]
  5. Li, B.; Acton, S. Automatic active model initialization via Poisson inverse gradient. IEEE Trans. Image Process. 2008, 17, 1406–1420. [Google Scholar] [PubMed]
  6. Ma, J.; Ma, Y.; Zhao, J.; Tian, J. Image Feature Matching via Progressive Vector Field Consensus. IEEE Signal Process. Lett. 2015, 22, 767–771. [Google Scholar] [CrossRef]
  7. Legaz-Aparicio, A.G.; Verdu-Monedero, R.; Angulo, J. Adaptive morphological filters based on a multiple orientation vector field dependent on image local features. J. Comput. Appl. Math. 2018, 330, 965–981. [Google Scholar] [CrossRef]
  8. Chen, M.; Sirakov, N.M. Poisson Equation Solution and its Gradient Vector Field to Geometric Features Detection. In Lecture Notes in Computer Science 11324; Fagan, D., Martín-Vide, C., O’Neill, M., Vega-Rodríguez, M.A., Eds.; Springer Nature: Dublin, Ireland, 2018; pp. 36–48. [Google Scholar]
  9. Codella, N.; Rotemberg, V.; Tschandl, P.; Celebi, M.E.; Dusza, S.; Gutman, D.; Helba, B.; Kalloo, A.; Liopyris, K.; Marchetti, M.; et al. Skin Lesion Analysis Toward Melanoma Detection 2018: A Challenge Hosted by the International Skin Imaging Collaboration (ISIC). arXiv 2018, arXiv:1902.03368. [Google Scholar] [CrossRef]
  10. International Skin Imaging Collaboration. SIIM-ISIC 2020 Challenge Dataset. Internat. Skin Imaging Collaboration. Available online: https://challenge2020.isic-archive.com/ (accessed on 15 May 2023).
  11. Nene, S.A.; Nayar, S.K.; Murase, H. Columbia Object Image Library (COIL-100); Technical Report; CUCS-006-96. 1996. Available online: http://www1.cs.columbia.edu/CAVE/research/softlib/coil-100.html (accessed on 6 July 2024).
  12. Georghiades, A.S.; Belhumeur, P.N.; Kriegman, D.J. From few to many: Illumination cone models for face recognition under variable lighting and pose. IEEE Trans. PAMI 2001, 23, 643–660. [Google Scholar] [CrossRef]
  13. Digits-MNIST Image Database. Available online: http://yann.lecun.com/exdb/mnist/ (accessed on 3 July 2024).
  14. Bowden, A.; Sirakov, M.N. Active Contour Directed by the Poisson Gradient Vector Field and Edge Tracking. J. Math. Imaging Vis. 2021, 63, 665–680. Available online: https://rdcu.be/cflaI (accessed on 6 July 2024). [CrossRef]
  15. Shorten, C.; Khoshgoftaar, T.M. A survey on Image Data Augmentation for Deep Learning. J. Big Data 2019, 6, 60. [Google Scholar] [CrossRef]
  16. Zhong, Z.; Zheng, L.; Kang, G.; Li, S.; Yang, Y. Random erasing data augmentation. In Proceedings of the AAAI Conference on Artificial Intelligence (AAAI), New York, NY, USA, 7–12 February 2020. [Google Scholar]
  17. Leevy, J.; Khoshgoftaar, T.; Bauder, R.A.; Seliya, N. A survey on addressing high-class imbalance in big data. J. Big Data 2018, 5, 1–30. [Google Scholar] [CrossRef]
  18. Wei, L.; Eraldo, R. Detecting Singular Patterns in 2-D Vector Fields Using Weighted Laurent Polynomial. Pattern Recognit. 2012, 45, 3912–3925. [Google Scholar]
  19. Zhang, E.; Mischaikow, K.; Turk, G. Vector field design on surfaces. ACM Trans. Graph. 2006, 25, 1294–1326. [Google Scholar]
  20. Sosinsky, A. Vector Fields on the Plane. 2015. Available online: http://ium.mccme.ru/postscript/s16/topology1-Lec7.pdf (accessed on 6 July 2024).
  21. Argenziano, G.; Soyer, H.; De Giorgi, V. Dermoscopy: A Tutorial; New Media, Edra Medical Pub.: Milan, Italy, 2000. [Google Scholar]
  22. Siddiqi, K.; Bouix, S.; Tannenbaum, A.; Zucker, S.W. Hamilton–Jacobi skeletons. Int. J. Comput. Vis. 2002, 48, 215–231. [Google Scholar] [CrossRef]
  23. Bina, T.; Yib, L. CNN-based flow field feature visualization method. Int. J. Perform. Eng. 2018, 14, 434–444. [Google Scholar] [CrossRef]
Figure 1. The SP shapes are cropped from a synthetic image where the VF is embedded: (a) v u ^ —sinking-shaped SP; (b) v u ^ —springing-shaped SP; (c) ϕ ^ —saddle-shaped SP.
Figure 2. The SP shapes are cropped from a COIL100 [11] image where the VF v ϕ ^ has been embedded: (a) shows a spiral-out (repelling)-shaped SP; (b) presents a clockwise orbit SP.
Figure 3. A malignant skin lesion image from [21].
Figure 4. Parts (a,c,e,g,i,k) show the six VFs u ^ , ϕ ^ , ψ ^ , v u ^ , v ϕ ^ , v ψ ^ embedded into the skin lesion image shown in Figure 3. The remaining parts (b,d,f,h,j,l) show the imprints of the skin lesion in the six VFs, respectively.
Figure 5. (a) A synthetic object. The upper row shows zooms of the lower branch of the object in (a) with embedded VFs: (b) u ^ ; (c) ϕ ^ ; (d) ψ ^ . The lower row shows zooms of the core part of the object in (a) with embedded VFs: (e) u ^ ; (f) ϕ ^ ; (g) ψ ^ .
Figure 6. An overall view of the object in Figure 5a with embedded VFs: (a) u ^ ; (b) ϕ ^ ; (c) ψ ^ .
Figure 7. The object in Figure 5a with embedded VFs: v u ^ in the left column, parts (a,d,g); v ϕ ^ in the middle column, parts (b,e,h); v ψ ^ in the right column, parts (c,f,i).
Figure 8. The mappings between the CPs of u ^ , ϕ ^ , and ψ ^ and the SPs of the six VFs derived from the three functions, as well as the mappings between the SPs of the VFs.
Figure 9. Examples of ISIC 2018 and ISIC2020 images with embedded VFs.
Figure 10. Examples of COIL100 images with embedded VFs. From left to right: ϕ ^ , ψ ^ , v u ^ , and v ϕ ^ .
Figure 11. Examples of YALE face database images with embedded VFs.
Figure 12. The GUI of ELPAC software [14]. The drop-down list below “Vector Field Generation” shows the list of VFs that can be embedded in an image.
Table 1. The SP shape is on the left side of “/”, while on the right side is the location of the SP on the image objects according to the embedded VF.
VF | SP/Location | SP/Location | SP/Location
u ^ | saddle/core | sinking/concavity corners |
ϕ ^ | saddle/core, branches, concavities | sink/convex vertices, edges | spring/core
ψ ^ | saddle/core, branches, concavities | sink/core | spring/edges, convex vertices
v u ^ | saddle/core | sink/core | spring/core
v u ^ | spiral (in and out)/core | orbits/homogeneous regions |
v ϕ ^ , v ψ ^ | saddle/core, convex vertices | sink/core, edges, branches | spring/core, edges, branches
v ϕ ^ , v ψ ^ | spiral (in and out)/core, concavities, branches | orbits/homogeneous regions |
