#### *3.2. Shape Feature Extraction*

The shape of a leaf is a basic feature of the leaf image; when people identify an object, its shape comes to mind first. Similarly, for leaf image recognition, shape is a simple and fast feature, but it may not suffice for recognition on its own, since many kinds of leaves have similar shapes. Thus, in our system, the shape feature is a secondary feature. As shown in Figure 3, after the contour of the leaf image is obtained, it is cut into numerous shape fragments; the middle point of each fragment is shown in Figure 3C.
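As an illustration, the uniform cutting of a closed contour into fragments with their middle points might be sketched as follows; the fragment count and the toy circular contour are illustrative assumptions, not values from the paper:

```python
import numpy as np

def split_contour(contour, n_fragments):
    """Cut a closed contour (M x 2 array of boundary points) into
    equal-length fragments by uniform sampling, and return each
    fragment together with its middle point."""
    step = len(contour) // n_fragments
    fragments, midpoints = [], []
    for i in range(n_fragments):
        frag = contour[i * step:(i + 1) * step]
        fragments.append(frag)
        midpoints.append(frag[len(frag) // 2])  # middle point of fragment
    return fragments, np.array(midpoints)

# Toy contour: 100 points on a unit circle (stand-in for a leaf boundary).
theta = np.linspace(0, 2 * np.pi, 100, endpoint=False)
circle = np.stack([np.cos(theta), np.sin(theta)], axis=1)
frags, mids = split_contour(circle, 10)
```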

**Figure 3.** SC acquisition.

Shape context is then used as the descriptor of each fragment. Finally, the BOF model is applied for feature coding and pooling, so that a more effective feature can be obtained. Unlike bag of contour fragments (BCF), this paper uses a uniform sampling method and simple fragments to speed up feature classification.
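For reference, a minimal shape-context histogram for a single reference point could look like the sketch below: a log-polar binning of the other contour points' relative positions. The 5 × 12 binning is a common default from the shape-context literature, not a parameter reported in this paper.

```python
import numpy as np

def shape_context(points, ref, n_r=5, n_theta=12):
    """Log-polar histogram of the positions of `points` relative to
    the reference point `ref`; returns a normalized n_r * n_theta vector."""
    d = points - ref
    r = np.hypot(d[:, 0], d[:, 1])
    mask = r > 1e-9                       # drop the reference point itself
    r, ang = r[mask], np.arctan2(d[mask, 1], d[mask, 0])
    r_edges = np.logspace(np.log10(r.min()), np.log10(r.max()), n_r + 1)
    r_bin = np.clip(np.searchsorted(r_edges, r, side="right") - 1, 0, n_r - 1)
    t_bin = ((ang + np.pi) / (2 * np.pi) * n_theta).astype(int) % n_theta
    hist = np.zeros((n_r, n_theta))
    np.add.at(hist, (r_bin, t_bin), 1)    # accumulate counts per bin
    return hist.ravel() / hist.sum()

# Toy usage on a circular contour.
theta = np.linspace(0, 2 * np.pi, 100, endpoint=False)
pts = np.stack([np.cos(theta), np.sin(theta)], axis=1)
desc = shape_context(pts, pts[0])
```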

#### *3.3. Texture Feature Extraction*

When DPCNN works, its parameters must be set first; the DPCNN parameters in this paper are taken from Ref. [28], except for the number of iterations. In our method, the image is divided into many blocks of size 8 × 8 (assuming the number of patches is *N* for each kind of leaf image), and each block is regarded as a patch. Then, after *n* iterations, there will be *n* entropy images, and the entropy of each patch in every entropy image is computed in order, as shown in Figure 4. If the entropy of one patch is *e<sub>i</sub>*, the entropy vector of the *j*th patch is *E<sub>j</sub>* = [*e*<sub>1</sub>, *e*<sub>2</sub>, ..., *e<sub>n</sub>*]. For a species, the feature matrix is *E* of size *N* × *n*, and eight codes (cluster centers) are then computed from this matrix.

**Figure 4.** Flow of getting entropy vector.
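The entropy-vector construction above can be sketched as follows. The random binary pulse images stand in for real DPCNN outputs, and the tiny Lloyd iteration stands in for a full k-means implementation; image size and iteration count are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def patch_entropy(patch):
    """Shannon entropy of the binary pulse values inside one patch."""
    p = np.bincount(patch.ravel().astype(int), minlength=2) / patch.size
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

# Toy stand-in: n = 4 binary pulse images of size 32 x 32, cut into
# 8 x 8 patches (real pulse images would come from DPCNN iterations).
n, H, W, B = 4, 32, 32, 8
pulse_images = rng.integers(0, 2, size=(n, H, W))

N = (H // B) * (W // B)                  # number of patches per image
E = np.zeros((N, n))                     # feature matrix, one row per patch
for t, img in enumerate(pulse_images):
    blocks = img.reshape(H // B, B, W // B, B).swapaxes(1, 2).reshape(N, B, B)
    E[:, t] = [patch_entropy(b) for b in blocks]

# Eight codes = cluster centres over the rows of E (simple Lloyd steps).
centres = E[rng.choice(N, 8, replace=False)]
for _ in range(10):
    labels = np.argmin(((E[:, None] - centres[None]) ** 2).sum(-1), axis=1)
    for k in range(8):
        if np.any(labels == k):
            centres[k] = E[labels == k].mean(axis=0)
```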

The process of extracting leaf image features with BOF\_DPCNN, which combines the DPCNN and BOF models, is divided into four stages: preprocessing, acquisition of DPCNN pulse images, low-level feature extraction, and feature coding. The process of obtaining image features with BOF\_DPCNN is shown in Figures 4 and 5. Since the datasets we used are already processed, the preprocessing stage can be skipped. As shown in Figure 4, the color image is first converted to a grayscale image, which is then divided into blocks of the same size. For each block, the DPCNN model is iterated to obtain the pulse entropy images. Finally, the entropy vectors are calculated from the entropy images to obtain the low-level features. The BOF model is used to construct the codebook from these low-level features, LLC is used for encoding, and SPM is used for pooling, as shown in Figure 5.

**Figure 5.** Acquisition of image features using BOF\_DPCNN: (**a**) entropy vectors obtained by DPCNN model; (**b**) codebook obtained by learning features; (**c**) the LLC coding; and (**d**) SPM for pooling.
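The LLC encoding step in Figure 5c can be illustrated with the approximated locality-constrained linear coding of Wang et al.: each feature is reconstructed from its *k* nearest codewords under a sum-to-one constraint. The values of `k`, `beta`, and the random codebook below are illustrative assumptions, not parameters from this paper.

```python
import numpy as np

def llc_encode(x, codebook, k=5, beta=1e-4):
    """Approximated LLC code for one feature x: pick the k nearest
    codewords, solve the constrained local least squares on that
    subset, and scatter the weights into a sparse code vector."""
    d2 = ((codebook - x) ** 2).sum(axis=1)
    idx = np.argsort(d2)[:k]                  # k nearest codewords
    z = codebook[idx] - x                     # shift basis to the origin
    C = z @ z.T + beta * np.eye(k)            # regularised local covariance
    w = np.linalg.solve(C, np.ones(k))
    w /= w.sum()                              # enforce sum-to-one constraint
    code = np.zeros(len(codebook))
    code[idx] = w
    return code

# Toy usage with a random codebook of 64 codewords in 4 dimensions.
rng = np.random.default_rng(1)
codebook = rng.standard_normal((64, 4))
code = llc_encode(rng.standard_normal(4), codebook)
```

SPM pooling would then max-pool such codes over spatial pyramid cells and concatenate the results.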
