**3. Materials and Methods**

This paper proposes a novel vegetation classification method that combines object-oriented classification with BRDF characteristics. The relationship between BRDF characteristics and the structure of plant leaves and vegetation canopies is discussed to promote the application of multi-angle optical remote sensing to vegetation. First, an image segmentation method combining spectral and DSM features was studied to improve the accuracy of object edges and patch segmentation. Second, multiple hyperspectral data sets were obtained using vertical and oblique photogrammetry, yielding multi-angle observations of each ground object, and a semi-empirical kernel-driven model was then used to invert the BRDF of each object patch. Third, a multi-class feature set was constructed from the BRDF characteristics of each segmented patch. Finally, object-oriented classification was carried out for fine vegetation classification. The overall workflow is shown in Figure 5:

**Figure 5.** Flowchart of the classification method. DSM: digital surface model, DOM: digital orthophoto maps, BRDF: bidirectional reflectance distribution function.

## *3.1. Image Segmentation*

UAV remote sensing technology can acquire DSMs through the acquisition and processing of multiple overlapping images, and this information can be used as auxiliary data to improve image segmentation accuracy. In this study, the DSM and spectral characteristics were taken as the basic information, and the object set for UAV hyperspectral image segmentation was constructed using Definiens eCognition Developer 7.0 (München, Germany). A multi-resolution segmentation method was adopted. The segmentation scale parameter was adjusted manually by trial and error, and a scale of 50 was finally selected, which produced visually correct segmentation. The shape and compactness weight parameters [29] used in the segmentation algorithm were also determined by trial and error; values of 0.05 and 0.8 were used, respectively.

Next, the extraction of the feature sets from the image segmentation patches, which were used for vegetation classification, is discussed.

## *3.2. Multi-Angle Observation Data Acquisition and BRDF Model Construction*

First, the maximum inscribed circle of each object patch was obtained as the representative attribute region of the patch. Second, the corresponding image points for each pixel inside the inscribed circle were found in the overlapping images. Then, the average reflectance within the corresponding circular area in each image was read as the reflectance of the segmented patch under that observation angle, and the observation angle of each corresponding image area was obtained at the same time. Finally, the BRDF model of the segmented patch was constructed using the reflectances of the multi-angle observations. The specific steps were as follows:

(1) Aerotriangulation and camera attitude parameter solution:

On the basis of multi-angle image data sets with high overlap acquired through vertical and oblique photogrammetry, control point data obtained from synchronous field measurements were used to calculate the coordinates of the unknown points in the study area via aerotriangulation; these were then used as control points for multiple images and for image correction. In this method, aerial camera stations were established for the whole network, and the acquired images were used for tie-point transfer and network construction.

The exterior orientation parameters were obtained via aerotriangulation with the control point data, and the interior camera parameters were obtained via camera calibration. The camera calibration and orientation were carried out using the Agisoft PhotoScan software. Then, the three-dimensional coordinates of known object points, the corresponding image pixel coordinates, and the interior camera parameters were used to determine the exterior parameters of the camera relative to the known object space, namely the rotation vector and the translation vector. Finally, the rotation vector was analyzed to obtain the three-dimensional attitude angles of the camera relative to the spatial coordinates of the known object, in terms of the pitch, roll, and yaw angles.
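The conversion from a rotation vector to attitude angles can be sketched in plain NumPy. The Rodrigues formula and the Z-Y-X (yaw-pitch-roll) Euler convention used here are illustrative assumptions; the conventions inside PhotoScan may differ, and the function names are hypothetical:

```python
import numpy as np

def rodrigues(rvec):
    """Convert a rotation vector (axis * angle) to a 3x3 rotation matrix
    via the Rodrigues formula R = I + sin(t)K + (1 - cos(t))K^2."""
    theta = np.linalg.norm(rvec)
    if theta < 1e-12:
        return np.eye(3)
    k = np.asarray(rvec, float) / theta
    K = np.array([[0.0, -k[2], k[1]],
                  [k[2], 0.0, -k[0]],
                  [-k[1], k[0], 0.0]])
    return np.eye(3) + np.sin(theta) * K + (1.0 - np.cos(theta)) * (K @ K)

def matrix_to_euler_zyx(R):
    """Extract yaw (about Z), pitch (about Y) and roll (about X) from a
    rotation matrix, assuming the Z-Y-X rotation order."""
    pitch = np.arcsin(-R[2, 0])
    roll = np.arctan2(R[2, 1], R[2, 2])
    yaw = np.arctan2(R[1, 0], R[0, 0])
    return yaw, pitch, roll

# A pure 30-degree rotation about the Z axis should round-trip exactly.
rvec = np.array([0.0, 0.0, np.radians(30.0)])
yaw, pitch, roll = matrix_to_euler_zyx(rodrigues(rvec))
print(np.degrees([yaw, pitch, roll]))  # ≈ [30, 0, 0]
```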

(2) Search for the corresponding image points:

A corresponding image point is the image of the same ground object point in a different photo [30]; such points arise when the same object point is photographed from multiple camera stations during the aerial survey. After calculating the coordinates of the unknown points in the study area and the interior and exterior orientation elements of each image, the collinearity equation of digital photogrammetry was used to determine the image-plane coordinates of the target point in each image; the characteristics of the sensor image were then used to determine whether each coordinate fell within the visible range, thereby identifying the corresponding image points in each image.
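A minimal sketch of the collinearity projection and visibility test, assuming a simple pinhole model in which a world-to-camera rotation matrix relates the two frames (function names, the sample geometry, and the sensor half-sizes are hypothetical):

```python
import numpy as np

def collinearity_project(P, C, R, f):
    """Project a ground point onto the image plane via the collinearity
    condition: the object point, the projection centre and the image
    point lie on one ray.  P: object point (X, Y, Z); C: camera station
    (Xs, Ys, Zs); R: world-to-camera rotation matrix; f: focal length."""
    Xc, Yc, Zc = R @ (np.asarray(P, float) - np.asarray(C, float))
    x = -f * Xc / Zc
    y = -f * Yc / Zc
    return x, y

def visible(x, y, half_width, half_height):
    """Check whether the projected point falls inside the sensor format."""
    return abs(x) <= half_width and abs(y) <= half_height

# Nadir camera 100 m above the origin, object point 10 m away in X,
# focal length 50 mm: the image point lands 5 mm off-centre.
x, y = collinearity_project((10.0, 0.0, 0.0), (0.0, 0.0, 100.0),
                            np.eye(3), 0.05)
print(x, y, visible(x, y, 0.012, 0.009))  # 0.005, 0.0, True
```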

(3) Observation angle and reflectance of corresponding image points:

After the corresponding image points were found, the observation zenith angle and observation azimuth of the object point in each image were calculated using the geometric relationship between the camera station (projection center) and the object point. In addition, the reflectance at each corresponding image point was extracted from the selected band image.
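The observation angles follow directly from the camera station and object point coordinates. This is a hedged sketch, not the authors' exact implementation; it assumes both points are given in a local east-north-up frame:

```python
import numpy as np

def view_angles(camera, point):
    """Observation zenith and azimuth of an object point as seen from a
    camera station, both in a local east-north-up (E, N, U) frame.
    Zenith: angle between the view ray (point -> camera) and the vertical.
    Azimuth: horizontal view direction, clockwise from north, in degrees."""
    dx, dy, dz = np.asarray(camera, float) - np.asarray(point, float)
    horiz = np.hypot(dx, dy)
    zenith = np.degrees(np.arctan2(horiz, dz))
    azimuth = np.degrees(np.arctan2(dx, dy)) % 360.0  # east of north
    return zenith, azimuth

# Camera 100 m up and 100 m due east of the object point:
# 45-degree observation zenith, 90-degree observation azimuth.
zen, az = view_angles(camera=(100.0, 0.0, 100.0), point=(0.0, 0.0, 0.0))
print(zen, az)  # 45.0, 90.0
```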

(4) Parameter calculation for the semi-empirical kernel driver model:

The algorithm for modeling bidirectional reflectance anisotropies of the land surface (AMBRALS) [31] was selected to construct the BRDF. The semi-empirical kernel-driven model can be expressed using Equation (1):

$$R(\theta,\partial,\sigma) = f\_{\rm iso} + f\_{\rm vol} K\_{\rm vol}(\theta,\partial,\sigma) + f\_{\rm geo} K\_{\rm geo}(\theta,\partial,\sigma). \tag{1}$$

The bidirectional reflectance is decomposed into the weighted sum of three parts: isotropic reflection, volumetric scattering, and geometric-optical reflection; the isotropic kernel is identically equal to 1, so only the two angular kernels appear explicitly. In the kernel-driven model, *R* represents the bidirectional reflectance, θ represents the solar zenith angle, ∂ represents the observation zenith angle, and σ represents the relative azimuth angle. K*vol* and K*geo* are the volumetric-scattering kernel and the geometric-optical kernel, respectively. *fiso*, *fvol*, and *fgeo* are constant coefficients that represent the proportions of isotropic reflection, volumetric scattering, and geometric-optical reflection, respectively. The linear regression method was used to solve for the optimal coefficient values. The volumetric and geometric-optical kernels were calculated from the solar zenith angle, the observation zenith angle, and the relative azimuth angle; therein, the solar zenith and azimuth angles were calculated from the date and time each image was acquired and the coordinates of the object point.
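Because Equation (1) is linear in the three coefficients, fitting it to the multi-angle reflectances reduces to ordinary least squares on a design matrix whose columns are 1, K*vol*, and K*geo*. A minimal sketch, assuming the kernel values have already been computed from the angles (the synthetic kernel values below are hypothetical stand-ins for the actual AMBRALS kernel formulas):

```python
import numpy as np

def fit_kernel_coefficients(refl, k_vol, k_geo):
    """Solve R = f_iso + f_vol*K_vol + f_geo*K_geo (Eq. 1) for the three
    coefficients by ordinary least squares.  refl, k_vol and k_geo are
    arrays with one entry per multi-angle observation; the isotropic
    kernel contributes the constant column of ones."""
    A = np.column_stack([np.ones_like(refl), k_vol, k_geo])
    coeffs, *_ = np.linalg.lstsq(A, refl, rcond=None)
    return coeffs  # f_iso, f_vol, f_geo

# Synthetic check: reflectances generated from known coefficients are
# recovered by the fit (kernel values drawn at random for illustration).
rng = np.random.default_rng(0)
k_vol = rng.uniform(-0.3, 0.6, 20)
k_geo = rng.uniform(-1.5, 0.0, 20)
refl = 0.25 + 0.10 * k_vol + 0.05 * k_geo
f_iso, f_vol, f_geo = fit_kernel_coefficients(refl, k_vol, k_geo)
print(f_iso, f_vol, f_geo)  # ≈ 0.25, 0.10, 0.05
```

With real data the system is overdetermined (many observation angles per patch, three unknowns), so the least-squares solution also averages out observation noise.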
