Article

Detection of Planting Systems in Olive Groves Based on Open-Source, High-Resolution Images and Convolutional Neural Networks

by Cristina Martínez-Ruedas 1, Samuel Yanes-Luis 2, Juan Manuel Díaz-Cabrera 3, Daniel Gutiérrez-Reina 2, Rafael Linares-Burgos 4 and Isabel Luisa Castillejo-González 5,*

1 Department of Electronic and Computer Engineering, Campus de Rabanales, University of Cordoba, 14071 Córdoba, Spain
2 Department of Electronic Engineering, Camino de Los Descubrimientos, s/n, University of Seville, 41009 Sevilla, Spain
3 Department of Electric and Automatic Engineering, Campus de Rabanales, University of Cordoba, 14071 Córdoba, Spain
4 Department of Rural Engineering, Civil Construction and Engineering Projects, University of Cordoba, 14071 Córdoba, Spain
5 Department of Graphic Engineering and Geomatics, Campus de Rabanales, University of Cordoba, 14071 Córdoba, Spain
* Author to whom correspondence should be addressed.
Agronomy 2022, 12(11), 2700; https://doi.org/10.3390/agronomy12112700
Submission received: 5 October 2022 / Revised: 22 October 2022 / Accepted: 25 October 2022 / Published: 31 October 2022

Abstract:

This paper evaluates whether automatic analysis with deep learning convolutional neural networks can efficiently identify olive groves with different intensification patterns from very high-resolution aerial orthophotographs. First, a sub-image crop classification was carried out. To standardize the size and increase the number of training samples, the crop images were divided into mini-crops (sub-images) using segmentation techniques with different threshold and stride sizes to decide whether a mini-crop was suitable for the analysis. The four scenarios evaluated discriminated the sub-images efficiently (accuracies above 0.8), with the largest sub-images (H = 120, W = 120) yielding the highest average accuracy (0.957). The super-intensive olive plantings were the easiest to classify for most of the sub-image sizes. Although traditional olive groves were also discriminated accurately, the most difficult task was distinguishing the intensive plantings from the traditional ones. In a second phase, the proposed system predicted the class at farm level based on the most frequent class detected among the sub-images of each crop. The results obtained at farm level were slightly lower than at sub-image level, reaching the highest accuracy (0.826) with an intermediate sub-image size (H = 80, W = 80). Thus, the proposed convolutional neural networks made it possible to automate the classification and to discriminate accurately among traditional, intensive, and super-intensive planting systems.

1. Introduction

Olea europaea L. (olive tree) can be considered one of the most important crops in the Mediterranean Basin, providing both edible fruit and storable oil. Worldwide, the Mediterranean Basin represents 93% of the olive tree area harvested [1] and 99% of the production [2]; Spain is the largest olive producer in the world with 30.4% of the production, followed by Italy and Morocco with 11.2% and 9.8%, respectively [3].
Over time, traditional olive groves have been an important economic, environmental, and social component of the Mediterranean landscape. Nevertheless, the current economic market has meant that many of the traditional olive groves are not considered sustainable, and more intensive systems are emerging in the valleys, such as drip irrigation and the intensification of patterns [4]. The management of this crop is changing from traditional low-density rainfed olive groves to medium- or high-density groves, mostly associated with irrigation, that are promoting the substitution of extensive crops such as wheat, barley, sunflowers, or cotton in some regions. This strong modification of olive growing systems can be clearly observed in Andalusia, one of the main olive-growing regions in the world with 46.7% of the olive-growing area in Spain [5]. From 2015 to 2019, high- and super high-density olive groves (more than 400 olive trees ha−1) steadily increased in area by 48.5%, from 54,140 ha in 2015 to 80,386 ha in 2019 [5,6].
Since olive groves produce an important range of ecosystem services such as healthy food, carbon sequestration, biodiversity, employment in rural areas, etc., the changes in the management of the olive groves are causing variations at the economic, environmental, and social levels. Government institutions are greatly concerned about the new challenges and their future impacts, and they promulgate legislation on different scales to enhance the sustainability of farming systems. For example, the European Union (EU) has shown great interest in the conservation of the traditional olive groves and their benefits through strategies such as the European Landscape Convention [7]. In the same direction, the European Commission and the Member States recently adopted a new common agricultural policy (CAP), 2023–2027, which seeks to ensure a sustainable future for European farmers, highlighting the importance of the environment and the conservation of the biodiversity of agricultural landscapes [8]. Of course, when necessary, government institutions also encourage policies that can promote more productive systems through farming intensification techniques, such as high tree densities or irrigation. Although higher olive tree densities can show important improvements [9], such as an increase in carbon sequestration, the new intensive management systems also show an important disadvantage, since they increase irrigation needs, which is a serious problem due to the low availability of water in arid and semi-arid areas [10]. In this sense, studies such as [11,12] stress the need for more data at the farm level to increase sustainable practices. Therefore, government institutions need techniques that allow them to accurately control the spatial distribution of crops and their management.
Since traditional methods based on sampling and ground visits to a small percentage of fields are considered imprecise, expensive, and time-consuming, it is necessary to develop new techniques that increase the controlled area and reduce costs. Image analysis with remote sensing (RS) techniques can significantly overcome the deficiencies of ground visits by producing accurate maps of large areas. A significant number of RS sensors are utilized, alone or in combination, to estimate and map different tree parameters in agroforestry environments to assist in making appropriate management decisions in a non-destructive manner [13]: RGB (Red-Green-Blue) [14,15], multispectral [16,17,18,19], hyperspectral [20,21,22,23], thermal [24,25,26,27], or even LiDAR (Light Detection and Ranging) and RADAR (Radio Detection and Ranging) sensors [28,29,30,31]. Specifically, in olive grove studies, different combinations of RS platforms and technologies have been widely used. Nevertheless, the main limitation of tree-level studies is the need to analyze images with very high spatial resolution.
Although images obtained from satellites have the advantage of covering wide areas quickly, delineation studies of olive tree crowns with these images alone are scarcer given the lower availability of satellite images with very high spatial resolution [32,33,34]. In some cases, it is necessary to implement complementary techniques, such as pan-sharpening fusion techniques, to allow obtaining images with higher spatial resolution than the originals [35,36]. On the other hand, images obtained from cameras on board Unmanned Aerial Vehicles (UAV) or manned flights can provide high-resolution data (cm resolution); this type of data is used more frequently in the analysis of olive tree canopies than satellite images [37,38,39]. Nevertheless, although this very high spatial resolution imagery overcomes the spatial limitation, this technology often requires high costs due to the equipment and field campaigns [40]. Moreover, since UAVs are mainly battery-powered, they have limited flight time and are not suitable for large-scale surveys [41]. The use of open data imagery with appropriate spatial resolution acquired from freely accessible platforms can provide a suitable alternative to reduce costs. Some countries have programs for acquiring and updating digital orthophotographs for cartography and general geographic knowledge of their territory, such as the digital aerial orthophotography of the National Plan for Aerial Orthophotography (PNOA) [42].
With the continuous increase of RS data (images and derived information), traditional classification methods based on spectral distances, angles, or probabilities are no longer the most appropriate, since they do not exploit all the available information efficiently [43]. The rapid development of machine learning (ML) and deep learning (DL) techniques in the field of RS is yielding more accurate classifications and target detections [44,45,46]. To facilitate the automation of classification processes, DL can be a good approach. DL techniques are artificial neural networks in which multiple layers of processing automatically learn informative representations of the input data and extract progressively higher-level features. Due to its great potential, the use of DL in image classification presents a new challenge. RS images are more complex than the scene images used in conventional DL developments. For example, the high spatial resolution of RS images may involve various types of objects in the same scene, and the high spectral resolution (especially with hyperspectral images, which contain hundreds of bands) can involve a large amount of data. These characteristics may demand a large number of neurons in a DL network [47]. Although studies using DL techniques in RS are increasingly common, they are still scarce, and how best to exploit this information still requires further research.
Despite the importance of new methodologies to automate the detection of planting systems, no studies directly addressing this need were found. The studies that were found estimate the density of plants [48,49,50,51], but they do not directly predict the plantation system of the study area. To achieve this, the analysis must take into account both the density of the plants and the spatial relationship among them (position and distance of each plant with respect to the others), or even the size of the plant canopy. In this regard, the main contribution of this study is to evaluate the potential of DL techniques to accurately distinguish among traditional, intensive, and super-intensive olive groves. For this purpose, this paper seeks to develop and validate a novel DL methodology based on sub-image (mini-crops in this paper) classification of the fraction canopy cover (FCC) of olive groves using convolutional neural networks. As a result, the proposed system can automatically discriminate between olive grove planting systems at farm level based on open data sources. Furthermore, the effect of mini-crop size variations to optimize the time-cost/accuracy ratio has been evaluated. By sequentially analyzing every sub-image from the orthophotography, the system results in a useful tool not only for planting system discrimination, but also as a segmentation analysis method for studying the distribution of the olive trees across a crop.
The article outline is as follows. Section 2 describes the materials and procedures used to generate the dataset of high-resolution images of olive groves, as well as the convolutional neural network architecture used for the classification problem addressed in this article. Section 3 contains the results obtained and their discussion. Finally, Section 4 presents the conclusions that can be drawn from this work.

2. Materials and Methods

2.1. Study Area and Image Acquisition

The analysis was conducted on 1187 olive groves distributed throughout Andalusia, a representative agricultural region of the typical continental Mediterranean climate (Figure 1). This climate is characterized by short mild winters and long, dry, and hot summers. The olive plots were sampled in the three Andalusian provinces with the most hectares of olive trees: Jaen, Cordova, and Seville [5]. Jaen is the province with the largest olive-growing area in Spain, where the traditional olive groves are predominant. An important mixture of olive grove management systems can be observed in Cordova, the second largest olive-growing area in Spain. In recent years, Seville has increased its olive-growing area, especially with more intensive management systems.
Several olive grove areas were evaluated; olive groves with three different planting systems were identified and analyzed: traditional, intensive, and super-intensive olive groves (Figure 2). The traditional olive groves are the dominant system in Spain, with stocking densities of up to 400 trees ha−1. The average density usually ranges between 80–120 olive trees ha−1, with square plantation frames where 10–12 m is the most frequent separation between trees. Densities of up to 1000–1500 olive trees ha−1 are considered intensive management systems, although densities not higher than 600 are usually observed. The most usual plantation frames are 7 m × 7 m, 8 m × 4 m, and 7 m × 5 m, always leaving corridors with a minimum width of 6 m to favor the mechanization of olive groves. Finally, the super-intensive olive grove management shows densities of up to 2500–3000 olive trees ha−1, with an intra-row spacing of about 1.2–1.5 m and an inter-row spacing of 3.2–4.0 m. On the other hand, as can be observed in Figure 2, the crown diameter also varies according to the planting system: approximately 8, 5.5, and 3.5 m for traditional, intensive, and super-intensive systems, respectively. All the olive plots analyzed were farmer-managed groves, in which farmers made decisions individually. Thus, different sizes and morphologies of olive crowns and types of soil tillage were found.
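The density ranges described above can be summarized as a simple rule-based sketch. This is illustrative only: the function name and the exact boundary handling are not part of the paper's method, which classifies directly from imagery rather than from a known tree density.

```python
def planting_system(trees_per_ha: float) -> str:
    """Label an olive grove by stocking density (trees per hectare),
    following the approximate ranges quoted in the text."""
    if trees_per_ha <= 400:
        return "traditional"      # typically 80-120 trees/ha
    elif trees_per_ha <= 1500:
        return "intensive"        # usually not higher than ~600 trees/ha
    else:
        return "super-intensive"  # up to 2500-3000 trees/ha
```
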
The set of olive groves analyzed came from very high-resolution aerial orthophotographs of the PNOA [42]. The studied area was covered by the most recent PNOA photogrammetric flights, dated between June and August 2019. These orthophotographs provided three multispectral bands (blue, B: 450–520 nm; green, G: 520–600 nm; and red, R: 630–690 nm) with a spatial resolution of 0.5 m. The radiometric resolution was 12 bits.
The identification and location of olive grove plots were obtained from the Agricultural Plot Geographic Information System (SIGPAC) [52], which enables the geographic identification of plots declared by farmers and livestock farmers under any subsidy regime relating to the area which is cultivated or used for livestock.
The automatic image download and FCC extraction were obtained through the modules Automatic Image Acquisition and Identification of Elements of Interest developed in the methodology for the automatic inventory of olive groves at plot and polygon level [49].

2.2. Procedure

The first step was the identification of the images with the different planting systems, which were then downloaded and processed to obtain the FCC to create the PNOA dataset. Subsequently, the PNOA dataset was divided into mini-crop images to train, test, and validate the classifier through a segmentation method (data augmentation). Finally, with the information generated, two classifiers based on convolutional neural networks were developed, the first oriented to sub-image (mini-crop) classification and the second to classification at farm level based on the most frequent class detected among the sub-images of each crop.

2.2.1. PNOA Dataset Generation

The process of downloading, identifying, and extracting the FCC of the different olive groves contained three steps. (i) The olive groves with different planting systems were identified through an observer in SIGPAC [52]. (ii) After being identified, they were automatically downloaded from the PNOA through the module “Automatic Image Acquisition” developed in the study [53] which made use of the Web Map Service (WMS) [54] provided by the IGN [55]. (iii) Finally, the FCC of the olive groves was extracted using the method “Identification of elements of interest” developed and validated in the study [53]. Figure 3 shows the images downloaded for a crop.
This module generated three images for each crop (Crop.tiff, Crop_FCC.tiff, and Crop_mask.tiff). Crop.tiff was an RGB image with the information of only the olive grove from PNOA and the area selected by the observer. Crop_FCC.tiff was a binary image with olive grove FCC, and Crop_mask.tiff was a binary image with the delimitations of the olive crop. All images were geo-referenced and covered about 2.5 ha of crop. As a result, the PNOA dataset of processed images for 1187 crops was obtained.

2.2.2. Mini-Crop Set Generation

The PNOA images were divided into a validation set of 236 crops and a training set of 951 crops before the sub-image segmentation. These training images were subjected to the sub-image extraction operation and also divided into a training, test, and validation set (see Figure 4).
The sub-image extraction process divided the crop images into mini-crops to standardize the size and increase the number of samples (data augmentation). To perform this process, segmentation techniques were employed which used the mini-crop size (Hm, Wm) and the stride distance (s) to divide the image into sub-images. A mini-crop was only considered suitable and, therefore, selected for classification (either for training or validation) if the mean value of its corresponding sub-mask image was over a threshold defined as CTH (see Figure 5). As previously discussed, the separation between validation and training sets was performed before sub-image segmentation; therefore, there was no risk of data duplication between the sets.
Every crop was divided into 15 random mini-crops, each assigned to the training, test, or validation set. The number of mini-crops per class was (4984, 4276, 5005) for the super-intensive, intensive, and traditional classes, respectively. The split of the data into training, test, and validation sets was (0.8, 0.1, 0.1). These values were chosen to maximize the training data for this particular subtask: since the significant validation values corresponded to the complete crop classification task, the more training data in this subtask, the better the model learned. The test set was used to compare the performance between trainings, and the validation set served as a figure of merit for the final performance of the sub-image classifier. Figure 6 shows the distribution of the mini-crop dataset for each class.
Four mini-crop sets were generated for a set of values of (Hm, Wm, s) (50, 50, 5; 80, 80, 8; 100, 100, 10; 120, 120, 12). Figure 7 shows examples of different mini-crops of 80 × 80 px. The differences between classes can be easily observed. While the super-intensive crops tend to be formed by lines of cultivation, the intensive and traditional crops have sparse olives. It was imposed that every mini-crop must contain at least a CTH fraction of cultivable area to avoid uninformative images; the value selected was 0.1. This value was set heuristically according to the resolution of the crops with scarce FCC zones. We observed that values lower than 0.1 flooded the network with too many void images. This thresholding also reduced training times, since the need for an additional 'void' label disappeared.
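The sliding-window extraction and CTH filtering described in this subsection can be sketched as follows. The function name and array conventions are assumptions for illustration, not code from the study.

```python
import numpy as np

def extract_mini_crops(crop_fcc, crop_mask, hm=80, wm=80, s=8, c_th=0.1):
    """Slide an (hm, wm) window with stride s over the crop image and
    keep only windows whose mean mask coverage exceeds the threshold c_th.

    crop_fcc  : 2-D binary array with the fraction canopy cover (FCC).
    crop_mask : 2-D binary array delimiting the cultivable area.
    """
    mini_crops = []
    h, w = crop_fcc.shape
    for y in range(0, h - hm + 1, s):
        for x in range(0, w - wm + 1, s):
            sub_mask = crop_mask[y:y + hm, x:x + wm]
            # Discard mostly-void windows (CTH = 0.1 in the paper).
            if sub_mask.mean() > c_th:
                mini_crops.append(crop_fcc[y:y + hm, x:x + wm])
    return mini_crops
```
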

2.2.3. Planting System Classifier Based on Convolutional Neural Network (CNN)

Two classifiers have been developed and evaluated, one at the sub-image level and the other at the olive grove farm level. In both cases, the CNN was composed of 4 consecutive convolutional operations with (32, 64, 128, 256) filters, respectively (see Figure 8). Every filter had a kernel size of (3 × 3). Once the input image was processed by this initial feature extractor, the last convolutional layer was flattened and processed by a dense linear network of 3 layers of (3, 512, 1024) neurons. Every layer performed a batch normalization operation for better generalization. The max pooling operation after every convolutional module reduced the spatial dimensions and alleviated the computational cost.
This architecture has been designed following the classical LeNet architecture [56], an efficient and compact network sufficient for this classification task. Larger architectures such as ResNet [57], despite their good results in multi-class classification tasks, are inefficient in terms of computational speed and resources for this 3-class problem. The aim of this simple design was to allow fast inference without GPUs for researchers and authorities, as well as retraining with other data.
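A PyTorch sketch of an architecture matching the description above (four convolutional blocks with (32, 64, 128, 256) 3 × 3 filters, batch normalization, and max pooling, followed by a dense head ending in 3 class logits) is shown below. The exact layer ordering and the pooling of the dense head are assumptions, not taken from the paper.

```python
import torch
import torch.nn as nn

class PlantingSystemCNN(nn.Module):
    """Illustrative CNN for 3-class planting system classification."""

    def __init__(self, in_channels=3, n_classes=3):
        super().__init__()
        blocks, c_in = [], in_channels
        for c_out in (32, 64, 128, 256):
            blocks += [
                nn.Conv2d(c_in, c_out, kernel_size=3, padding=1),
                nn.BatchNorm2d(c_out),
                nn.ReLU(),
                nn.MaxPool2d(2),  # halve spatial dims after each block
            ]
            c_in = c_out
        self.features = nn.Sequential(*blocks)
        self.head = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),  # assumption: accepts any mini-crop size
            nn.Flatten(),
            nn.Linear(256, 1024), nn.ReLU(),
            nn.Linear(1024, 512), nn.ReLU(),
            nn.Linear(512, n_classes),  # one logit per planting system
        )

    def forward(self, x):
        return self.head(self.features(x))
```
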
Table 1 shows the hyperparameters for the training phase of the model. The learning rate was chosen dynamically using a 1Cycle Learning Rate Scheduler [58]. This led to a robust convergence with higher speeds.
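The 1Cycle schedule mentioned above is available in PyTorch as `OneCycleLR`. The sketch below shows its typical use; the model, `max_lr`, epoch count, and steps per epoch are placeholders, not the hyperparameters of Table 1.

```python
import torch

model = torch.nn.Linear(10, 3)  # stand-in model for illustration
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
scheduler = torch.optim.lr_scheduler.OneCycleLR(
    optimizer, max_lr=0.1, epochs=10, steps_per_epoch=100)

lrs = []
for _ in range(999):       # one scheduler step per training batch
    optimizer.step()
    scheduler.step()
    lrs.append(scheduler.get_last_lr()[0])
# The learning rate warms up toward max_lr, then anneals back down,
# which is what gives the robust, fast convergence described above.
```
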
The classifier at sub-image level only made use of the proposed CNN, which predicted the planting system from the mini-crops validated by the segmentation method. The architecture of the proposed farm-level classifier, in turn, is shown in Figure 9 and required additional modules: (i) the input image was processed by a sliding-window operation that extracted the mini-crops; (ii) valid mini-crops were selected depending on the percentage of orchard area in the image; (iii) the valid mini-crops were processed by the convolutional classifier; and (iv) the most frequent class was selected. The classifier also output the segmented image for a better explanation of the classification result.
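The farm-level decision rule in step (iv) reduces to a majority vote over the mini-crop predictions, which can be sketched as:

```python
from collections import Counter

def classify_farm(mini_crop_predictions):
    """Return the most frequent class among the valid mini-crop
    predictions of a farm (the farm-level decision rule above)."""
    counts = Counter(mini_crop_predictions)
    return counts.most_common(1)[0][0]
```
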
The evaluation metrics to compute the performance of the methods were the following:
  • Accuracy: the fraction of correct predictions among all predictions made for the evaluation set.
  • Precision (macro): the ratio Tp/(Tp + Fp), where Tp is the number of true positives and Fp the number of false positives. For this multiclass classification problem, the precision was computed as the average precision over the individual labels (macroscopic precision).
  • ROC AUC 1 vs. 1 (macro): the area under the Receiver Operating Characteristic curve (ROC AUC) indicates the area under the trade-off curve between the true positive rate and the false positive rate. In the 1 vs. 1 case, the AUC was computed for all possible pairwise combinations of classes and averaged (macro).
  • ROC AUC 1 vs. Others (macro): similar to the 1 vs. 1 AUC, but computing the area of each class against the rest.
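The four metrics above map directly onto scikit-learn functions for a 3-class problem. The labels and class probabilities below are toy values for illustration, not data from the study.

```python
import numpy as np
from sklearn.metrics import accuracy_score, precision_score, roc_auc_score

y_true = np.array([0, 1, 2, 2, 1, 0])     # e.g. SI / I / T labels
y_score = np.array([[0.8, 0.1, 0.1],      # per-class probabilities
                    [0.2, 0.7, 0.1],
                    [0.1, 0.2, 0.7],
                    [0.2, 0.3, 0.5],
                    [0.4, 0.5, 0.1],
                    [0.6, 0.2, 0.2]])
y_pred = y_score.argmax(axis=1)

acc = accuracy_score(y_true, y_pred)
prec = precision_score(y_true, y_pred, average="macro")
auc_ovo = roc_auc_score(y_true, y_score, multi_class="ovo", average="macro")
auc_ovr = roc_auc_score(y_true, y_score, multi_class="ovr", average="macro")
```
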

3. Results and Discussion

The main objective of this study was to develop and validate a DL methodology to discriminate between different planting systems in olive groves. In this sense, a model based on CNNs has been proposed to identify super-intensive, intensive, or traditional olive groves in sub-images of the crop and in the farm. The results for the validation set at the sub-image and farm levels are shown in Table 2 and Table 3, respectively. The best result at sub-image level (accuracy of 0.957) was achieved with the largest sub-images (H = 120, W = 120), while the classification at farm level achieved its best result (accuracy of 0.826) with an intermediate sub-image size, specifically (H = 80, W = 80).
The accuracy at farm level was slightly lower than the results at sub-image level since some farms have areas with several planting systems. Figure 10 shows an example of a common misclassification for a grove. The most typical misclassification scenario occurred when a labeled intensive grove had characteristics from both intensive and traditional agriculture styles. Despite producing a single class label, the algorithm also provided the segmentation matrix for a better explanation.
In this context, sequential analysis at the mini-crop level has proven useful as a segmentation tool, allowing for improved system training through data augmentation. It also provides the possibility of identifying mixed plantation systems. This is proposed as a future line of research.
The study achieved an accuracy of 0.957 for sub-images and 0.826 at farm level. The data obtained provide accurate and valuable information to current cartographic and agronomic information systems such as SIGPAC [52], making it possible to create periodic mosaics of the agricultural olive landscape and to incorporate additional information at the farm level, a need detected in previous studies [11,12].
This automatic and periodic characterization of the olive landscape allows the study of the evolution of planting systems, which are tending towards intensification as shown in some studies [12,59,60]. This trend implies changes in soil management, irrigation, and ground cover vegetation [12], which has agronomic, economic, and environmental consequences [10]. Systematizing the monitoring of olive tree densities with detailed and updated information makes it possible to identify factors that influence the productive yield and sustainability of the olive grove.
One factor to be considered in soil management that changes based on intensification is the water requirement, which will differ according to the geographical location of the crop. The proposed system in this study makes it possible to calculate water demand based on the changes that occur in olive planting systems, with the possibility of improving decision-making.
Another point to highlight from this work is the ability to detect the super-intensive plantings. This has been the easiest class to identify, as can be observed in Figure 11, which shows the confusion matrix for every resolution under validation. As can be seen, the most difficult task is to distinguish between the intensive class (I) and the traditional one (T). The super-intensive class (SI) is easily classified for most of the sub-image sizes. This may be because the super-intensive plantings are structured in rows to allow fully mechanized harvesting, improved pruning, and improved pest control treatments, which are more complex or impossible in intensive and traditional plantation systems [9].
Although the main objective of this study was to detect planting systems for olive groves, identifying the super-intensive plantations with greater certainty is a step forward because, currently, the most complete published dataset shows a continuous increase in oil yield per hectare up to 14 years after planting [61]. Monitoring these planting systems will allow a more detailed study of their evolution.
Regarding the use of DL techniques for high-resolution image classification, they have shown promising results, and this study has contributed a step forward in the utilization of DL techniques in RS [47].
Therefore, the methodology proposed and validated in this study enables the monitoring of the impact of olive grove intensification, on whose sustainability opinions are divided. Some research and techniques promote these cultivation models through deficit irrigation with the aim of increasing mechanization, improving profitability, and increasing carbon sequestration [9], while others defend traditional systems as typical agroforestry systems, cultural heritage, and contributors to soil and resource conservation [59,60].
Our proposal has achieved satisfactory results because it is based on very high-resolution, freely available aerial orthophotographs from the PNOA [62]. On the other hand, this data source is limited in temporal resolution, since it is updated only every three years. To overcome this limitation, future studies should extrapolate the methodology to satellite images with lower spatial resolution (10 m × 10 m) but higher temporal resolution (5 days).

4. Conclusions

This study has made it possible to evaluate the potential of modern data analysis techniques, more specifically deep learning, to discriminate between traditional, intensive, and super-intensive olive groves. As previously mentioned, Olea europaea L. (olive tree) can be considered one of the most important crops in the Mediterranean Basin and Andalusia, which is one of the main olive-growing regions in the world with 46.7% of the olive-growing area of Spain. The use of traditional methods based on samples and ground visits to a small percentage of fields is imprecise, expensive, and time-consuming, and such methods should be replaced by modern techniques such as the one presented in this paper.
The automation of this process through surface characterization algorithms based on the interpretation of aerial or satellite images is a technique widely used in other fields, and it has been validated here for the agri-food sector, offering a decision-support system that makes the process more efficient and sustainable. Furthermore, although images obtained from satellites have the advantage of covering wide areas quickly, they have the disadvantage of the lower availability of satellite images with very high spatial resolution. An added value of this work has been the acquisition of high-resolution digital orthophotographs from freely accessible platforms, which allows the optimization of results and the reduction of costs.
For this purpose, a DL methodology based on mini-crop classification of the fraction canopy cover (FCC) of olive groves using convolutional neural networks has been developed and validated to discriminate between olive grove planting systems at farm level through automatic analysis of open data sources. The effect of mini-crop size variations on the time-cost/accuracy ratio was also evaluated. The quality of the results obtained, with an accuracy of over 82% at farm level, shows that the system is a useful tool not only for planting system discrimination, but also as a segmentation analysis method for studying the distribution of the olive trees across a crop. Future work should develop new algorithms that allow the classification to be performed with other types of satellite images in order to improve the temporal resolution of the methodology.

Author Contributions

Conceptualization, C.M.-R. and I.L.C.-G.; methodology, C.M.-R., I.L.C.-G., S.Y.-L. and D.G.-R.; software, C.M.-R., I.L.C.-G., S.Y.-L., D.G.-R. and R.L.-B.; validation, C.M.-R., R.L.-B. and I.L.C.-G.; formal analysis, C.M.-R., J.M.D.-C., D.G.-R., S.Y.-L. and I.L.C.-G.; investigation, C.M.-R., J.M.D.-C., R.L.-B. and I.L.C.-G.; resources, C.M.-R., R.L-B., S.Y.-L. and D.G.-R.; data curation, S.Y.-L., D.G.-R. and C.M.-R.; writing—original draft preparation, C.M.-R., J.M.D.-C., R.L-B., I.L.C.-G., S.Y.-L. and D.G.-R.; writing—review and editing, C.M.-R., J.M.D.-C., R.L-B., I.L.C.-G., S.Y.-L. and D.G.-R.; visualization, S.Y.-L., D.G.-R. and C.M.-R.; supervision, C.M.-R., I.L.C.-G., J.M.D.-C. and D.G.-R.; project administration, C.M.-R., I.L.C.-G., D.G.-R. and J.M.D.-C.; funding acquisition, J.M.D.-C. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Data Availability Statement

Data are available on request from the first author.

Conflicts of Interest

The authors declare no conflict of interest.

Figure 1. Location of the study area in Andalusia, southern Spain.
Figure 2. Detail of the olive grove management systems: (a) Super-intensive; (b) Intensive, and (c) Traditional.
Figure 3. Example of the images downloaded for a crop: (a) Image downloaded from the PNOA for a traditional olive crop (Grove); (b) Image processed with the FCC (Grove FCC); (c) Image with the delimitation of the olive crop (Grove mask).
Figure 4. Description of the dataset division for the training and validation sets.
Figure 5. Operation diagram for the mini-crop segmentation method where CTH defines the threshold for validating mini-crops.
Figure 6. The distribution of the dataset of mini-crops.
Figure 7. Examples of different mini-crops of 80 × 80 pixels from an arbitrary orchard.
Figure 8. The Convolutional Neural Network proposed for the mini-crop classification task.
Figure 9. Olive grove farm-level classifier architecture.
Figure 10. Example of a common misclassification for a grove. (a) Intensive olive groves where areas of traditional olive groves can be identified. (b) Traditional olive groves where areas of intensive olive groves can be identified.
Figure 11. Confusion matrix for every resolution under validation.
Table 1. Hyperparameters for the training phase of the model.
Optimizer: Adam
Adam weight decay: 1 × 10⁻³
Training epochs: 50
Batch size: 64
Learning rate: 1-cycle LR schedule with LRmax = 1 × 10⁻²
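The 1-cycle learning-rate schedule listed in Table 1 (Smith and Topin's super-convergence method) ramps the rate up to LRmax and then anneals it back down over training. A plain-Python approximation is sketched below; the warm-up fraction and start-rate divisor are illustrative defaults, not values taken from the paper:

```python
import math

def one_cycle_lr(step, total_steps, lr_max=1e-2, div_factor=25.0, pct_warmup=0.3):
    """Approximate 1-cycle schedule: cosine warm-up from lr_max/div_factor
    up to lr_max over the first `pct_warmup` of training, then cosine
    annealing from lr_max down to zero for the remaining steps."""
    lr_start = lr_max / div_factor
    warmup_steps = int(total_steps * pct_warmup)
    if step < warmup_steps:
        t = step / max(1, warmup_steps)
        return lr_start + (lr_max - lr_start) * (1 - math.cos(math.pi * t)) / 2
    t = (step - warmup_steps) / max(1, total_steps - warmup_steps)
    return lr_max * (1 + math.cos(math.pi * t)) / 2
```

The schedule starts at lr_max/div_factor, peaks at lr_max (1 × 10⁻² in Table 1) at the end of the warm-up, and decays to zero by the final step.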
Table 2. Sub-image grove classification results for the validation set.
Sub-Image Size (H, W) | Stride Size (s) | Accuracy | Precision (Macro) | AOC 1 vs. 1 | AOC 1 vs. R
(50, 50)   | 5  | 0.887 | 0.890 | 0.974 | 0.974
(80, 80)   | 8  | 0.945 | 0.944 | 0.990 | 0.990
(100, 100) | 10 | 0.930 | 0.931 | 0.986 | 0.986
(120, 120) | 12 | 0.957 | 0.957 | 0.994 | 0.994
Table 3. Classification metrics for the complete grove validation set.
Sub-Image Size (H, W) | Stride Size (s) | Accuracy | Precision (Macro) | AOC 1 vs. 1 | AOC 1 vs. R
(50, 50)   | 5  | 0.800 | 0.807 | 0.848 | 0.845
(80, 80)   | 8  | 0.826 | 0.832 | 0.874 | 0.876
(100, 100) | 10 | 0.819 | 0.813 | 0.868 | 0.970
(120, 120) | 12 | 0.809 | 0.813 | 0.867 | 0.867
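The accuracy and macro-averaged precision reported in Tables 2 and 3 follow their standard definitions over a confusion matrix; a minimal sketch, using a made-up 3-class confusion matrix rather than the paper's, is:

```python
import numpy as np

def accuracy_and_macro_precision(cm):
    """Overall accuracy and macro-averaged precision from a confusion
    matrix `cm` with rows = true classes and columns = predictions."""
    cm = np.asarray(cm, dtype=float)
    accuracy = np.trace(cm) / cm.sum()
    col_sums = cm.sum(axis=0)  # total predictions per class
    precision_per_class = np.divide(np.diag(cm), col_sums,
                                    out=np.zeros_like(col_sums),
                                    where=col_sums > 0)
    return accuracy, precision_per_class.mean()

# Toy matrix for (traditional, intensive, super-intensive) — not the paper's data.
cm = [[8, 2, 0],
      [1, 9, 0],
      [0, 0, 10]]
acc, prec = accuracy_and_macro_precision(cm)
```

Macro precision averages the per-class precisions with equal weight, so a class that is rarely predicted correctly lowers the score even if it is small, which matches the paper's observation that confusions between traditional and intensive groves penalize the farm-level metrics.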
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.

