Article

Impact of Tile Size and Tile Overlap on the Prediction Performance of Convolutional Neural Networks Trained for Road Classification

by Calimanut-Ionut Cira 1,*, Miguel-Ángel Manso-Callejo 1, Naoto Yokoya 2,3, Tudor Sălăgean 4,5 and Ana-Cornelia Badea 5,6
1 Departamento de Ingeniería Topográfica y Cartografía, E.T.S.I. en Topografía, Geodesia y Cartografía, Universidad Politécnica de Madrid, C/Mercator 2, 28031 Madrid, Spain
2 Department of Complexity Science and Engineering, Graduate School of Frontier Sciences, University of Tokyo, 5-1-5 Kashiwanoha, Kashiwa-shi, Chiba 277-8561, Japan
3 Geoinformatics Team, RIKEN Center for Advanced Intelligence Project (AIP), Mitsui Building 15th Floor, 1-4-1 Nihonbashi, Chūō-ku, Tokyo 103-0027, Japan
4 Department of Land Measurements and Exact Sciences, Faculty of Forestry and Cadastre, University of Agricultural Sciences and Veterinary Medicine Cluj-Napoca, 3-5 Mănăștur Street, 400372 Cluj-Napoca, Romania
5 Doctoral School, Technical University of Civil Engineering Bucharest, 122-124 Lacul Tei Blvd., Sector 2, 020396 Bucharest, Romania
6 Department of Topography and Cadastre, Faculty of Geodesy, Technical University of Civil Engineering Bucharest, 122-124 Lacul Tei Blvd., Sector 2, 020396 Bucharest, Romania
* Author to whom correspondence should be addressed.
Remote Sens. 2024, 16(15), 2818; https://doi.org/10.3390/rs16152818
Submission received: 30 May 2024 / Revised: 20 July 2024 / Accepted: 30 July 2024 / Published: 31 July 2024
(This article belongs to the Special Issue Geospatial Big Data and AI/Deep Learning for the Sustainable Planet)

Abstract: Popular geo-computer vision works make use of aerial imagery divided into tiles with sizes ranging from 64 × 64 to 1024 × 1024 pixels, typically without any overlap, although the learning process of deep learning models can be affected by the reduced semantic context or the lack of information near the image boundaries. In this work, the impact of three tile sizes (256 × 256, 512 × 512, and 1024 × 1024 pixels) and two overlap levels (no overlap and 12.5% overlap) on the performance of road classification models was statistically evaluated. For this, two convolutional neural networks used in various tasks of geospatial object extraction were trained (using the same hyperparameters) on a large dataset (containing aerial image data covering 8650 km² of the Spanish territory, labelled with binary road information) under twelve different scenarios, with each scenario featuring a different combination of tile size and overlap. To assess their generalisation capacity, the performance of all the resulting models was evaluated on data from novel areas covering approximately 825 km². The performance metrics obtained were analysed using appropriate descriptive and inferential statistical techniques to evaluate the impact of the distinct levels of the fixed factors (tile size, tile overlap, and neural network architecture). Statistical tests were applied to study the main and interaction effects of the fixed factors on the performance. A significance level of 0.05 was applied to all the null hypothesis tests. The results were highly significant for the main effects (p-values lower than 0.001), while the two-way and three-way interaction effects among them had different levels of significance. The results indicate that training road classification models on images with a larger tile size (more semantic context) and a higher amount of tile overlap (additional border context and continuity) significantly impacts their performance. The best model was trained on a dataset featuring tiles with a size of 1024 × 1024 pixels and a 12.5% overlap, and achieved a loss value of 0.0984, an F1 score of 0.8728, and an ROC-AUC score of 0.9766, together with an error rate of 3.5% on the test set.

1. Introduction

Data-intensive artificial intelligence models have proven their potential in research and professional workflows related to computer vision for the detection and extraction of geospatial features. Here, aerial image data play a fundamental role but, due to computational requirements, researchers divide the available imagery into smaller image tiles. Some of the most popular works in the field use tile data with sizes of 64 × 64, 128 × 128, 256 × 256, 512 × 512, or 1024 × 1024 pixels. The tile size (also referred to as “image size/resolution” in some parts of this manuscript to represent the width × height dimensions of an image) defines the pixel count of an image; larger tiles contain more scene information and provide more semantic context. Another key component is the tile overlap, which represents the amount (expressed as a percentage) by which an image tile includes the area of an adjacent tile.
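For illustration, the stride between consecutive tiles follows directly from these two definitions. A minimal sketch in Python (the helper name is ours):

```python
def tile_stride(tile_size: int, overlap: float) -> int:
    """Stride (in pixels) between consecutive tiles for a given overlap fraction."""
    return int(tile_size * (1.0 - overlap))

# With the settings studied in this work:
assert tile_stride(1024, 0.125) == 896  # adjacent 1024 px tiles share 128 px
assert tile_stride(256, 0.0) == 256     # no overlap: tiles are laid edge to edge
```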
It can be considered that larger tile sizes and higher overlap levels could enhance the learning process of deep learning (DL) models. However, this aspect has not been properly explored, although additional scene information and continuity have the potential to increase the performance of the trained models. The overlap can be considered a natural data augmentation technique, as it exposes the model to more aspects of the orthoimage tiles, while the additional scene information from larger tile sizes could impact the generalisation capacity of DL implementations by providing more learning context. This could be especially beneficial for models classifying roads (continuous geospatial elements that are complex in nature), as the models can learn from slightly different perspectives of the same area, potentially improving their ability to generalise.
Therefore, the objective of this work is to study the effects of the tile size and overlap level on the prediction performance of road classification models on novel test data and to identify the optimal combination of size and overlap that enables a higher generalisation performance. The authors believe that this study could provide relevant insights applicable to the experimental designs of subsequent geospatial studies, as the identification of the optimal tile size and tile overlap level can contribute to achieving a higher DL performance with a lower number of experiments, leading to a decrease in the energy consumption required for training. The starting premise of this study is, “For the classification of continuous geospatial elements in aerial imagery with DL techniques, models trained on data with a higher tile size and overlap achieve a higher generalisation capacity”. This study involves the binary classification of aerial orthophotos divided into image tiles labelled ‘Road’ or ‘No Road (Background)’ using deep learning implementations.
The road classification task involves classifying aerial imagery tiles into the ‘Road’ or ‘No Road’ classes (a supervised, binary classification task). In binary approaches based on supervised learning, $n$ independent samples $(X_1, Y_1), \ldots, (X_n, Y_n)$ of $(X, Y) \in \mathcal{X} \times \{0, 1\}$ are observed. The feature $X$ lives in an abstract space $\mathcal{X}$, while the labels $Y \in \{0, 1\}$ represent the “Road”/“No_Road” classes. A rule (called a classifier) built to predict $Y$ given $X$ is a function $h: \mathcal{X} \to \{0, 1\}$, and a classifier with a low classification error, $R(h) = P(h(X) \neq Y)$, is desired. Since $Y \in \{0, 1\}$, $Y$ follows a Bernoulli distribution, but assumptions about the conditional distribution of $Y$ given $X$ cannot be made.
However, the regression of $Y$ onto $X$ can be written as $Y \mid X \sim \mathrm{Ber}(\eta(X))$, where $\eta(X) = P(Y = 1 \mid X) = E[Y \mid X]$, so that $Y = \eta(X) + \varepsilon$, where $\varepsilon$ is the noise responsible for the fact that $X$ may not contain enough information to predict $Y$ perfectly. The presence of this noise means that the classification error $R(h)$ cannot be driven to zero, regardless of what classifier $h$ is used. However, if $\eta(X) = 1/2$, it can be considered that $X$ contains no information about $Y$, while, if $\eta(X) > 1/2$, “1” is more likely to be the correct label. A Bayes classifier, $h^*$, can be used as a function defined by the rule in Equation (1).
$h^*(x) = \begin{cases} 1 & \text{if } \eta(x) \geq 1/2 \\ 0 & \text{if } \eta(x) < 1/2 \end{cases}$ (1)
This rule, although optimal, cannot be computed because the regression function $\eta$ is not known. Instead, the algorithm has access to the input data $(X_1, Y_1), \ldots, (X_n, Y_n)$, which contain information about $\eta$ and, thus, about $h^*$. The discriminative approach described in [1] states that assumptions on which image predictors are likely to perform correctly cannot be made; this allows for the elimination of image classifiers that do not generalise well. The measure of performance for any classifier $h$ is its classification error, and it is expected that, with enough observations, the excess risk, $\mathcal{E}(h) = R(h) - R(h^*)$, of a classifier $h$ will approach zero (by getting as close as possible to $h^*$). In other words, the classification error can be driven towards zero as the size of the training dataset increases ($n \to \infty$); if $n$ is too small, it is unlikely that a classifier with a performance close to that of the Bayes classifier $h^*$ will be found. In this way, we expect to find a classifier that performs well in predicting the classes, even though only a finite number of observations is available (and, thus, partial knowledge of the distribution $P_{X,Y}$) (the mathematical description of the task was adapted from [2]).
Supervised learning tasks also enable the application of transfer learning techniques to reuse the weights resulting from the training of neural networks on large datasets (such as the ImageNet Large Scale Visual Recognition Challenge, or ILSVRC [3]). Transfer learning allows a model to start from pre-learned weights (instead of a random weight initialisation) and to make use of the learned feature maps (in computer vision applications, earlier layers extract generic features such as edges, colours, and textures, while later layers contain more abstract features) [4].
A large dataset with high variability is fundamental for obtaining DL models with a high generalisation capacity. The use of a high-quality dataset is also important for the statistical analysis of the models’ performance. For this reason, the SROADEX data [5] (containing binary road information covering approximately 8650 km² of the Spanish territory) were used to generate new datasets featuring tiles with sizes of 256 × 256, 512 × 512, or 1024 × 1024 pixels and 0% or 12.5% overlap. The DL models were trained under twelve scenarios based on the combinations of tile size, overlap level, and convolutional neural network (CNN) architecture. Except for these factors, the training of the road classification models was carried out under the same conditions (same hyperparameters, data augmentation, and transfer learning parameters), so that differences in performance metrics are mainly caused by the considered factors. The experiments were repeated three times to reduce the randomness of convergence associated with DL models and to enable the statistical analysis, as ANOVA is valid with as few as three samples per group (a higher number of repetitions would have resulted in unrealistic training times).
To evaluate the generalisation capability of the models, a test set containing new tiles from a single orthoimage of a north-western region of Spain (Galicia, unseen during training and validation) was generated. The test area covers approximately 825 km² and can be considered highly representative of the Spanish geography. Afterwards, multiple descriptive and inferential statistical tests, including main and interaction effect analyses, were applied to statistically analyse the performance and assess the impact of the tile size, tile overlap, and CNN architecture on the computed metrics. The results show that a larger tile size and a higher overlap enable the development of models that achieve improved road classification performances on the unseen data. The findings could guide future work in optimising the mentioned aspects for a better model performance.
The main contributions are summarised as follows:
  • The impact of the tile size and overlap levels on the binary classification of roads was studied on a very large-scale dataset containing aerial imagery covering approximately 8650 km² of the Spanish territory. Two popular CNN models were trained on datasets with different combinations of tile sizes (256 × 256, 512 × 512, or 1024 × 1024 pixels) and tile overlaps (0% and 12.5%) to isolate their effect on performance. The evaluation was later carried out on a new orthoimage of approximately 825 km² containing novel data;
  • An in-depth descriptive and inferential statistical analysis and evaluation was performed next. The main effects of tile size, tile overlap, and CNN architecture on the performance metrics obtained on testing data were found to be highly significant (with computed p-values lower than 0.001). Their joint two-way and three-way interaction effects on the performance had different levels of significance and varied from highly significant to non-significant;
  • Additional perspectives on the impact of these factors on the performance are provided through an extensive discussion, where additional insights and limitations are described and recommendations for similar geo-studies are proposed.
The rest of the manuscript is organised as follows: Section 2 presents similar studies that are found in the specialised literature. Section 3 describes the data used for training and evaluating the DL models. Section 4 details the training method applied. In Section 5, the performance metrics on the unseen data are reported and statistically analysed. The results are extensively discussed in Section 6. Finally, Section 7 presents the conclusions of this study.

2. Related Works

Given the expected rise of autonomous vehicles and their need for higher-definition road cartography and better road decision support systems, road classification is becoming one of the more important geo-computer vision applications for public agencies. Nonetheless, roads, as continuous geospatial elements, present several challenges related to their different spectral signatures caused by the varied materials used for pavement, the high variance of road types (highways, secondary roads, urban roads, etc.), the absence of clear markings, and differences in widths, all of which make their classification in aerial imagery difficult. Furthermore, the analysis of remotely sensed images presents associated challenges such as the presence of occlusions or shadows in the scenes. Therefore, the task of road classification can be considered complex.
Recent work on this topic takes the deep learning approach to model the input–output relations of the data and obtain a more complex classification function capable of describing road-specific features and achieving a higher generalisation capacity (indicated by a high performance on testing data that was not modelled during training).
In the specialised literature, authors such as Reina et al. [6] and Lee et al. [7], among others, identify the need to tile large scenes from medical or remote sensing images due to the memory limitations of GPUs (mainly for semantic segmentation tasks). It was observed that the tiling procedure introduces artefacts in the feature map learning of the models, making the analysis of optimal tile sizes necessary.
After evaluating ten tile sizes ranging from 296 × 296 to 10,000 × 10,000 pixels, Lee et al. [7] conclude that the best tile sizes for lung cancer detection are between 500 × 500 and 1000 × 1000 pixels. It is important to note that the number of images used in medical imaging rarely surpasses a few dozen, and training models like VGGNet [8] (featuring tens of millions of parameters) on such small datasets can be considered a strong indicator of overfitting (where DL models “memorise” the noise in the training data to achieve a higher performance). A higher occurrence of prediction errors near the borders of the tiles was also identified in relevant geo-studies, such as [9] or [10]. For these reasons, and considering the size of the dataset, we considered it appropriate to evaluate three popular tile sizes found in relevant geo-studies (namely, 256 × 256, 512 × 512, and 1024 × 1024 pixels).
Ünel et al. [11] recognised the benefit of image tiling in surveillance applications and proposed a PeleeNet model for the real-time detection of pedestrians and vehicles from high-resolution imagery. Similarly, Akyon et al. [12] proposed the Slicing Aided Hyper Inference (SAHI) framework for surveillance applications to detect small objects and objects that are far away in the scene.
In addition, relevant studies in the medical field [13] have also noted the convenience of having an overlap between the tiles in the training dataset. Some authors consider an overlap percentage of 50% [14] to be optimal when applied as a data augmentation technique to improve the performance of the models. Nonetheless, we only selected the 12.5% level of overlap for this study because it ensures that information near tile edges can be correctly processed during training, while avoiding the lower data variability that a higher overlap would introduce (possibly leading to a biased model, as it would be exposed to many similar data points). In addition, a smaller level of overlap also avoids processing the excessive amount of information resulting from higher overlap levels.
Recently, Abrahams et al. [15] proposed “Flip-n-slide”, a preprocessing technique for tiling image data with a sliding window that ensures that each pixel location is represented in eight tiles by applying overlap levels ranging from 0% to 75%. This tiling strategy is combined with transformations of each tile (such as rotations or reflections within 0° to 270°) to capture features near tile edges more efficiently, and can help the learning procedure by providing multiple perspectives of the data with a reduced redundancy. These studies support the relevance of this work in the current geo-computer vision landscape.
Since our DL task is to identify parts of large high-resolution aerial images that contain road elements (at country level, for a subsequent semantic segmentation of the tiles that contain roads), the purpose of our research is to study the optimal tile size for division (tiling) and tile overlap strategies. We consider this aspect to be a topic of great interest for current geo-studies and projects.
As no additional references relevant to our study were identified, articles related to the use of CNNs on image data for road applications, published after 2018 in peer-reviewed scientific journals, will be described and commented on next. In this regard, one of the most discussed areas in the literature is the detection of road defects for safety and maintenance purposes. Chun and Ryu [16] proposed the use of CNNs and autoencoders to classify oblique images acquired by a circulating vehicle and identify asphalt defects that can cause accidents. Semi-supervised methods were applied to create a novel dataset, and data augmentation techniques were used to train the models, which demonstrated their effectiveness on 450 test datasets. Maeda et al. [17] identified the lack of datasets of road deficiencies that would allow road managers to be aware of the defects and evaluate their state for use or repair without compromising safety. The authors generated a dataset of approximately 9000 images and labelled approximately 14,000 instances with eight types of defects. Object detection models were trained afterwards to locate the defects in images, and additional tests were performed in various scenarios in Japan. Liang et al. [18] proposed a lightweight attentional convolutional neural network to detect road damage in real time on vehicles.
Rajendran et al. [19] proposed the use of a CNN to identify potholes and road cracks in images taken from a camera connected to a Universal Serial Bus (USB) to create an IoT system that informs the authorities responsible for such defects (so further actions or repair planning can be taken). Zhang et al. [20] benchmarked different CNN models based on AlexNet, ResNet, SqueezeNet, or ConvNet to detect faults and compare their performance. Fu et al. [21] proposed a CNN architecture (StairNet) and compared different trained network models based on EfficientNet, GoogLeNet, VGG, ResNet, and MobileNet to identify defects in the concrete pavement. For validation, a platform to run the algorithms was created and a proof of concept on the campus of Nanjing University was developed. Finally, Guzmán-Torres et al. [22] proposed an improvement in the VGGNet architecture to classify defects in road asphalt. The training was carried out on the dataset containing 1198 image samples that they had generated (HWTBench2023). Transfer learning was used during training to achieve accuracy and F1 score metrics of over 89%.
The works of He et al. [23], Fakhri and Shah-Hosseini [24], Zhu et al. [25], and Jiang [26] use CNNs for the detection of roads in satellite or very high resolution images. In the first article, the authors seek to optimise the hyperparameters of the models. In the second work, in addition to the RGB (Red, Green, and Blue) images, prediction data obtained from a previous binary road classification with Random Forest are incorporated as the input for the CNN models to achieve F1 score metrics of 92% on the Massachusetts dataset. In the third paper, qualitative improvements in the results are obtained by replacing the ReLU function in the fully connected network (FCN) with a Maximum Feature Mapping (MFM) function, so that the suppression of a neuron is not decided by a threshold, but by a competitive relation. In the fourth case, the authors propose a post-processing of the results of the trained CNN based on a wavelet filter to eliminate the noise of the areas without roads, obtaining a binary “Road”/“No road” classification as a result.
There are also studies aimed at identifying road intersections. Higuchi and Fujimoto [27] implemented a system that acquires information with a two-dimensional laser range finder (2D LRF), allowing the determination of the movement direction of the autonomous navigating robot to detect road intersections. Eltaher et al. [28] generated a novel dataset by labelling approximately 7550 road intersections in satellite images, and trained the EfficientDet object detection model to obtain the centre of the intersections with average accuracy and recall levels of 82.8% and 76.5%, respectively.
Many existing studies focus on differentiating the types of road surfaces. Dewangan and Sahu [29] used computer vision techniques to classify the road surface into five classes (curvy, dry, ice, rough, and wet) and obtained accuracies of over 99.9% on the Oxford RobotCar dataset. Lee et al. [30] proposed a model based on signal processing using a continuous wavelet transform, acoustic sensor information, and a CNN to differentiate thirteen distinct types of pavements in real time. The model was trained on a novel dataset containing seven types of samples (with around 4000 images per category) and delivered an accuracy above 95%.
Another important task is the processing of aerial imagery with road information to assign a binary label [31] or a continuous value [32] to the tile. Cira et al. [33] proposed two frameworks based on CNNs to classify image tiles of size 256 × 256 pixels that facilitate the discrimination of image regions where no roads are present to avoid applying semantic segmentation to image tiles when roads are not expected. de la Fuente Castillo et al. [34] proposed the use of grammar-guided genetic programming to obtain new CNN networks for the binary classification of image tiles that achieve performance metrics similar to those achieved by other state-of-the-art models. In [32], CNNs were trained to process aerial tiles with road information to predict the orientation of straight arrows on marked road pavement.
In the literature review, it was noted that most existing works focus on processing reduced datasets that cover smaller areas and generally feature ideal scenes (where road elements are grouped into clearly defined regions [35]). Nonetheless, the use of a reduced dataset may not be suitable if models capable of large-scale classification are pursued (as also discussed in [36]). For this reason, data from the SROADEX dataset [5] (containing orthoimagery covering approximately 8650 km² of the Spanish territory that was labelled with road information) were used in this study. This adds real-world complexity to the road classification task to avoid focusing on ideal study scenes and to achieve DL models with a high generalisation capacity.

3. Data

The data used for this study are RGB aerial orthoimages from the Spanish regions covered by the SROADEX dataset [5]; the data are binary labelled into the “Road” and “No road” classes. More details regarding the procedure applied for labelling the data and tile samples can be found in the SROADEX data paper [5]. As mentioned above, the orthoimages forming the SROADEX dataset cover approximately 8650 km² of the Spanish territory.
The digital images within SROADEX have a spatial resolution of 0.5 m and are produced and openly provided by the National Geographical Institute of Spain through the National Plan of Aerial Orthophotography product (Spanish: “Plan Nacional de Ortofotografía Aérea”, or PNOA [37]). They are produced by Spanish public agencies that acquired the imagery in photogrammetric flights performed under optimal meteorological conditions. The resulting imagery was orthorectified to remove geometric distortions, radiometrically corrected to balance the histograms, and topographically corrected using terrestrial coordinates of representative ground points, following the same standardised procedure defined by their producers.
Taking advantage of this labelled information, the full orthoimages were divided into datasets featuring tiles with (1) a size of 256 × 256 pixels and 0% overlap, (2) a size of 256 × 256 pixels and 12.5% overlap, (3) a size of 512 × 512 pixels and 0% overlap, (4) a size of 512 × 512 pixels and 12.5% overlap, (5) a size of 1024 × 1024 pixels and 0% overlap, and (6) a size of 1024 × 1024 pixels and 12.5% overlap. The tiling strategy applied involved a sequential division of the full orthoimage with each of the selected combinations of tile size and tile overlap. To ensure correct training, tiles with road elements shorter than 25 m were deleted (in the case of tiles of 512 × 512 pixels, this means that the sets only contain tiles where roads occupy at least 50 pixels, while in the case of 1024 × 1024 pixels, they only contain tiles where roads occupy more than 21 pixels). Afterwards, each combination of tile size and tile overlap was split into training and validation sets by applying a 95:5 division criterion. In this way, six training and six validation sets, corresponding to the combinations of tile size and tile overlap, were generated.
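The sequential division described above can be sketched as follows (a simplified illustration; the function and variable names are ours, and the road-length filter and the 95:5 split are only indicated in comments):

```python
import numpy as np

def tile_orthoimage(image: np.ndarray, tile_size: int, overlap: float) -> list:
    """Sequentially divide an orthoimage of shape (H, W, 3) into square tiles."""
    stride = int(tile_size * (1.0 - overlap))  # e.g. 896 px for 1024 px tiles at 12.5% overlap
    tiles = []
    for y in range(0, image.shape[0] - tile_size + 1, stride):
        for x in range(0, image.shape[1] - tile_size + 1, stride):
            tiles.append(image[y:y + tile_size, x:x + tile_size])
    return tiles

# After tiling: discard tiles whose labelled road elements are shorter than 25 m,
# then split each size/overlap combination 95:5 into training and validation sets.
```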
The test set was formed by approximately 825 km² of binary road data from four novel regions that were divided into image tiles with 0% overlap at the three tile sizes considered. The test areas were selected because they contain diverse types of representative Spanish scenery and enable the statistical validity of the tests applied to objectively evaluate the generalisation capacity of the models. Figure 1 shows the territorial distribution of the training and validation sets (SROADEX data, signalled with blue rectangles) and the test area (signalled with orange rectangles), while Table 1 shows the number of images and pixels used for the training, validation, and testing sets across the different tile sizes and overlaps considered.
In Table 1, it can be observed that the percentage of tiles containing road elements increases as the tile size increases, to the detriment of tiles that do not contain road elements. For instance, at a tile size of 256 × 256 pixels, the dataset is balanced in terms of the two binary classes (approximately 47.5% of the data are labelled with the positive class and 52.5% with the negative class), whereas, at a tile size of 1024 × 1024 pixels, the data labelled with the “Road” class represent approximately 90% of the samples. This is to be expected given that, as the scene area increases, the probability of a tile not containing a road decreases. As a result, the class imbalance between the “Road” and “No Road” classes increases with the tile size. This implies that the training procedure must incorporate balancing techniques to ensure correct training and prevent models that are biased towards the positive class.
Regarding the normality of the data, given the size of the sample data (approximately 16 billion pixels × 3 RGB channels, organised in approximately 527,000 images in SROADEX), and following the Central Limit Theorem [38], which states that the distribution of a large sample of independent observations approximates a normal distribution as the sample size grows, regardless of the actual distribution shape of the population, it was assumed that the training and testing data follow a normal distribution. Therefore, given the large dataset size, instead of conducting an empirical test of normality, which would be computationally and practically challenging, we proceeded with the analysis based on the assumption of normality explained above, which is a common practice in such scenarios.

4. Training Method

To carry out a comparative study that allows us to understand how different neural network architectures trained for the same task affect performance, two classification models, VGG-v1 and VGG-v2 (proposed in Table 1 of [39]), which have demonstrated their appropriateness in relevant works related to geospatial object classification [10,40], were selected for training. Briefly, the models are based on the convolutional base of VGG16 [8], followed by a global average pooling layer, two dense layers (with [512, 512] units for VGG-v1 and [3072, 3072] units for VGG-v2) with ReLU [41] activations, a dropout layer with a ratio of 0.5 for regularisation, and a final dense layer with one unit and sigmoid activation that enables the binary classification.
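A sketch of the two architectures as described above, written here against tf.keras (the authors' original code used standalone Keras 2.2.4 and is available in the Zenodo repository [45]; exact layer details may differ):

```python
from tensorflow.keras import layers, models
from tensorflow.keras.applications import VGG16

def build_classifier(units: int, input_shape=(256, 256, 3)) -> models.Model:
    """VGG-v1 (units=512) or VGG-v2 (units=3072), following the description in the text."""
    base = VGG16(weights="imagenet", include_top=False, input_shape=input_shape)
    return models.Sequential([
        base,                                   # VGG16 convolutional base
        layers.GlobalAveragePooling2D(),
        layers.Dense(units, activation="relu"),
        layers.Dense(units, activation="relu"),
        layers.Dropout(0.5),                    # regularisation
        layers.Dense(1, activation="sigmoid"),  # binary 'Road'/'No road' output
    ])
```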
Table 2 shows that the road classification models were trained under twelve different scenarios, each with a different combination of CNN architecture (VGG-v1 and VGG-v2), size (256 × 256 pixels, 512 × 512 pixels, and 1024 × 1024 pixels), and overlap (0% and 12.5%). This approach enables a detailed understanding of how these factors interact and impact the performance of the trained models and which combinations deliver the best results.
To reduce the sources of uncertainty, the standard procedure for training DL models for classification was applied. In this regard, the pixel values of the orthoimage tiles from the training and validation sets were normalised (rescaled from the range [0, 255] to the range [0, 1]) to avoid calculations on large numbers. Afterwards, in-memory data augmentation techniques with small parameter values of up to 5–10% were applied to the training images in the form of random rotations, height and width shifts, or zooming inside tiles (if empty pixels resulted from these operations, they were assigned the pixel values of the nearest boundary pixel). Furthermore, random vertical and horizontal flips were applied to expose the convolutional models to more aspects of the data and to control the overfitting behaviour specific to models with a large number of trainable parameters. Given the structured approach to data collection, instead of a random weight initialisation approach, the weights were initialised by applying transfer learning from ILSVRC [3]. This enables the re-use of features learned on large-scale data for the road classification task.
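This augmentation pipeline maps naturally onto Keras' ImageDataGenerator; a hedged illustration (the exact parameter values are not given beyond the 5–10% range, so the numbers below are indicative only):

```python
from tensorflow.keras.preprocessing.image import ImageDataGenerator

train_datagen = ImageDataGenerator(
    rescale=1.0 / 255,         # normalise pixel values from [0, 255] to [0, 1]
    rotation_range=10,         # small random rotations
    width_shift_range=0.05,    # small horizontal shifts
    height_shift_range=0.05,   # small vertical shifts
    zoom_range=0.1,            # zooming inside tiles
    horizontal_flip=True,      # random horizontal flips
    vertical_flip=True,        # random vertical flips
    fill_mode="nearest",       # empty pixels take the nearest boundary pixel value
)
```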
In Section 3, it was discussed that the probability of a tile not containing a road decreases as the tile size increases (there is an inherently higher probability that a larger area contains at least one road), which resulted in a higher class imbalance in favour of the “Road” class. To tackle the class imbalance observed in Table 1 and the associated overfitting behaviour, a weight matrix was applied to penalise the road classification model when wrongly predicting the over-represented class. The weight matrix contains class weights that were computed with Equation (2).
$w_j = n / (k \times n_j)$ (2)
In Equation (2), $w_j$ represents the weight for class $j$, $n$ represents the total number of samples in the training set, $k$ is the number of classes (in this case, $k = 2$), while the $n_j$ term represents the number of samples in class $j$. The formula ensures that the under-represented “No_Road” class will have a higher influence on the training evolution to balance the over-representation of the “Road” class at higher tile sizes.
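Equation (2) can be computed directly from the label vector; a short sketch (the helper name is ours), with a toy example matching the roughly 90:10 imbalance observed at the 1024 × 1024 tile size:

```python
import numpy as np

def class_weights(labels) -> dict:
    """Compute w_j = n / (k * n_j) from Equation (2) for binary 0/1 labels."""
    labels = np.asarray(labels)
    n, k = len(labels), 2
    return {j: n / (k * np.sum(labels == j)) for j in (0, 1)}

w = class_weights([1] * 90 + [0] * 10)
# -> {0: 5.0, 1: ~0.56}: the under-represented 'No_Road' class weighs ~9x more
```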
The loss function is the binary cross-entropy and is defined in Equation (3).
$L(y, \hat{y}) = -\frac{1}{N} \sum_{i=1}^{N} \left[ y_i \cdot \log(\hat{y}_i) + (1 - y_i) \cdot \log(1 - \hat{y}_i) \right]$ (3)
In the context of binary road classification, the loss $L(y, \hat{y})$ from Equation (3) measures the “closeness” between the expected “Road” and “No Road” labels and the predictions delivered by the road classification model. $N$ represents the number of available samples in the training scenario, $y_i$ is the true label of the $i$-th sample ($y_i$ is either 0 or 1), and $\hat{y}_i$ represents the predicted probability of the $i$-th sample belonging to the positive class (a value between 0 and 1 that represents the model's confidence that the label of the $i$-th sample is “Road”; a decision limit of 0.5 is applied to infer the positive or negative class label). The dot symbol “$\cdot$” indicates an element-wise multiplication between the corresponding terms.
The resulting weighted loss function is minimised with the Adam optimiser [42] (with a learning rate of 0.001) following the stochastic gradient descent approach (the selected loss function is differentiable), with the loss for each sample being scaled by the class weight defined earlier. Intuitively, a model that predicts the expected labels will achieve a low loss value.
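Wiring these pieces together, the training step might look as follows (a sketch reusing the hypothetical helpers above; train_gen and val_gen stand for data generators built from the tiled datasets and are not shown):

```python
model = build_classifier(units=3072)  # VGG-v2; use units=512 for VGG-v1
model.compile(
    optimizer="adam",               # Adam with its default learning rate of 0.001
    loss="binary_crossentropy",     # Equation (3), scaled per sample by class_weight
    metrics=["accuracy"],
)
model.fit(train_gen, validation_data=val_gen, epochs=30, class_weight=w)
```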
In the experimental design, it was established that the DL model configurations associated with each of the training scenarios from Table 2 would be trained in three different iterations for thirty epochs over the entire dataset. It is important to note that, although larger tile sizes provide more scene information, they also require more computational resources and, consequently, a smaller batch size. The batch size selected for each training scenario was the maximum allowed by the available graphics card. All training experiments were carried out on a Linux server running the Ubuntu 22.04 operating system and featuring a dedicated NVIDIA V100-SXM2 graphics card with 16 gigabytes of video random access memory (VRAM). As for the software, the training and evaluation scripts were built with Keras version 2.2.4 [43] and TensorFlow version 1.14.0 [44], together with their required library dependencies. The code featuring the training and evaluation of the DL implementations, the test data, and the resulting road classification models are available in the Zenodo repository [45] and are distributed under a CC-BY 4.0 license.

5. Results

The performance metrics of the road classification models, trained under the scenarios described in Table 2, are reported in Appendix A. The performance is expressed in terms of loss, accuracy, and ROC-AUC score, as well as precision, recall, and F1 score for the training, validation, and test sets, for each of the three training iterations carried out. The decision threshold for the probability predicted by the model was 0.5 (as discussed in the Introduction): a predicted probability higher than or equal to 0.5 is considered a positive prediction, while tiles with a predicted value lower than the threshold are assigned to the negative class.
The loss is calculated with Equation (3). Accuracy is computed from the confusion matrix of the model (expressed in terms of true positive (TP), true negative (TN), false positive (FP), and false negative (FN) predictions) and measures the proportion of correctly predicted samples over the total set size (Equation (4)). The precision (proportion of TPs among the positive predictions of the model, Equation (5)), recall (proportion of TPs among the actual positive samples, Equation (6)), and F1 score (harmonic mean of precision and recall, Equation (7)) metrics offer a more comprehensive perspective of the misclassified cases compared to the accuracy metric, as they consider both FP and FN predictions. The ROC-AUC score is an indicator of the capacity of a model to distinguish between the positive and negative classes; it measures the area under the receiver operating characteristic curve (a plot of the TP rate against the FP rate at various thresholds from (0, 0) to (1, 1)) using the prediction scores and the true labels.
$accuracy = (TP + TN) / (TP + TN + FP + FN)$ (4)
$precision = TP / (TP + FP)$ (5)
$recall = TP / (TP + FN)$ (6)
$F1\ score = 2 \times TP / (2 \times TP + FP + FN)$ (7)
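Equations (3)–(7) map directly onto standard scikit-learn helpers; a self-contained sketch with placeholder labels and probabilities (not the actual experiment outputs):

```python
import numpy as np
from sklearn.metrics import (accuracy_score, f1_score, log_loss,
                             precision_score, recall_score, roc_auc_score)

y_true = np.array([1, 0, 1, 1, 0, 1])              # placeholder ground-truth labels
y_prob = np.array([0.9, 0.2, 0.7, 0.4, 0.6, 0.8])  # placeholder predicted probabilities
y_pred = (y_prob >= 0.5).astype(int)               # 0.5 decision threshold, as in the text

metrics = {
    "loss": log_loss(y_true, y_prob),              # Equation (3)
    "accuracy": accuracy_score(y_true, y_pred),    # Equation (4)
    "precision": precision_score(y_true, y_pred),  # Equation (5)
    "recall": recall_score(y_true, y_pred),        # Equation (6)
    "f1": f1_score(y_true, y_pred),                # Equation (7)
    "roc_auc": roc_auc_score(y_true, y_prob),      # uses scores, not hard labels
}
```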
The metrics on the training set range from 0.0719 to 0.2270 in the case of the loss value, from 0.9072 to 0.9715 in terms of accuracy, from 0.8738 to 0.9190, 0.9073 to 0.9650, and 0.8223 to 0.9137 for the F1 score, precision, and recall metrics, respectively, and from 0.9703 to 0.9909 in the case of the ROC-AUC score. The metrics on the validation set range from 0.0878 to 0.2574 in terms of loss, from 0.8959 to 0.9673 for accuracy, from 0.8475 to 0.9030, 0.8930 to 0.9650, and 0.7939 to 0.9021 for the F1 score, precision, and recall, respectively, and from 0.9628 to 0.9853 in terms of the ROC-AUC score. The metrics on the test set range from 0.0947 to 0.4951 in terms of loss, from 0.8222 to 0.9763 in terms of accuracy, from 0.7924 to 0.8862, 0.8051 to 0.9764, and 0.7662 to 0.8360 in the case of the F1 score, precision, and recall, respectively, with the ROC-AUC score ranging from 0.8939 to 0.9840.
The values of these metrics also vary across different training scenarios and their experiment iterations. For instance, the validation loss in training scenario 3 ranges from 0.1978 to 0.2152, while for scenario 6 it ranges from 0.0941 to 0.1035. Other examples are the training F1 scores from scenario 8 (ranging from 0.9104 to 0.9133) and scenario 9 (with values ranging from 0.9051 to 0.9134), or the test ROC-AUC scores that range from 0.9700 to 0.9706 in training scenario 6 and from 0.9356 to 0.9517 in the case of training scenario 5.
Therefore, the values obtained present important differences across the training scenarios considered and across the experiment iterations, and suggest that the trained road classification models were learning at different rates, probably due to the differences in the model architectures and the tile overlap and tile size levels considered (given that the variability of the training and test sets is similar and that the test set covers the same area). This indicates that a more in-depth analysis is needed to better understand the differences in the performance metrics. This study is centred on statistically analysing the performance metrics from Appendix A and uses the metrics obtained by the models on unseen test data to identify the factors that have the greatest effect on the generalisation capacity of the road classification models. The statistical analysis was performed with the SPSS software version 29.0.2.0 [46].

5.1. Mean Performance on Testing Data Grouped by Training Scenarios

First, to explore the relationship between the performance metrics and the training IDs, detailed descriptive statistics were obtained (including means and their standard deviations) and an analysis of variance (ANOVA) was applied to analyse the differences between group means. The dependent variables are the performance metrics on the test set (loss, accuracy, F1 score, precision, recall, and ROC-AUC score), while the training scenarios act as fixed factors. The objective was to verify whether statistical differences are present between the metrics grouped by training scenario ID (N = 3 samples within each training scenario).
The results are presented in Table 3 in terms of mean performance metrics and their standard deviations, as well as the ANOVA F-statistics and their p-values. An F-statistic is the result of the ANOVA test applied to verify whether the means of two or more populations are significantly different, and it represents the ratio of the variance of the means (between groups) over the mean of the variances (within groups). The associated p-value indicates the probability that the observed variance between group means arose by chance, with a p-value lower than 0.05 being considered statistically significant. The Eta (η) and Eta squared (η²) measures of association are also provided. Eta (η) is a correlation ratio that measures the degree of association between a categorical independent variable and a continuous dependent variable, ranging from 0 (no association) to 1 (perfect association). Eta squared (η²) is an ANOVA measure of effect size that represents the proportion of the total variance in the dependent variable that is associated with the groups defined by the independent variable.
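The same one-way ANOVA and the η² effect size can be reproduced with SciPy and NumPy; a sketch on hypothetical F1 scores (N = 3 repetitions per training scenario, as in the study):

```python
import numpy as np
from scipy.stats import f_oneway

groups = [                           # hypothetical test F1 scores per training ID
    np.array([0.82, 0.83, 0.82]),
    np.array([0.85, 0.86, 0.85]),
    np.array([0.87, 0.88, 0.87]),
]
f_stat, p_value = f_oneway(*groups)  # between-group vs within-group variance

obs = np.concatenate(groups)
ss_between = sum(len(g) * (g.mean() - obs.mean()) ** 2 for g in groups)
ss_total = ((obs - obs.mean()) ** 2).sum()
eta_squared = ss_between / ss_total  # proportion of variance explained by the groups
eta = np.sqrt(eta_squared)           # correlation ratio
```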
In Table 3, it can be observed that all the p-values corresponding to the F-statistics are smaller than 0.001 and indicate statistically significant differences in the loss, accuracy, F1 score, precision, recall, and ROC-AUC scores obtained by the trained models across the different training scenario IDs. These values imply that, for all the studied performance metrics, the variation between groups (different training IDs) is much larger than the variation within groups (same training ID) and suggest that the training ID has a significant effect on the performance of the road classification model across all the metrics considered.
Regarding the η and η² measures of association, the values are close to 1 and indicate that the training ID had a significant effect on all the metrics considered and that a considerable proportion of the variance in each metric can be explained by the training ID. The values from Table 3 indicate an extremely strong positive association between the accuracy and the training ID of the road classification model, a very strong positive association for the loss, the F1 score, the precision, and the ROC-AUC score, and a strong positive association between the recall and the training ID.
The F-statistics and their p-values do not reveal which training IDs differ from the others when a significant difference exists. To reduce the length of this study, the analysis of the boxplots of the performance metrics grouped by training ID was carried out on the loss value, F1 score, and ROC-AUC score. These metrics are considered appropriate for evaluating the performance on the test set with imbalanced data (in Table 1, it can be observed that the test set features a very different number of positive and negative images at larger tile sizes), as the F1 score represents the harmonic mean of precision and recall and ensures that a model is robust in terms of false positives and false negatives. The ROC-AUC score is based on the predicted probabilities, indicates the capability of a model to distinguish between the classes, and is widely used for imbalanced datasets. As an additional comment, the accuracy is more suitable for evaluating balanced datasets, as it can be a misleading measure of the actual performance in class imbalance scenarios. A comparison of the training scenarios in terms of performance metrics can be found in Figure 2.
In Figure 2, the boxplots of the training scenarios with IDs 1 to 6 present the performance metrics obtained by VGG-v1 trained for road classification on datasets featuring tiles ranging from a size of 256 × 256 pixels and 0% overlap (scenario 1) to 1024 × 1024 pixels and 12.5% overlap (scenario 6), while scenarios 7 to 12 contain the same information for the VGG-v2 model. The performance of the configurations follows a similar pattern. The loss progressively decreases for scenarios 1 to 6 and 7 to 12, while the F1 and ROC-AUC scores increase in the same way (indicating an increase in performance as larger tiles are used). Also, from the overlap perspective, by comparing pairs of consecutive scenarios, the F1 and ROC-AUC scores seem to be higher in scenarios with an even training ID (featuring tiles with an overlap of 12.5%), while the loss values seem to be smaller (possibly indicating a higher performance in scenarios featuring a 12.5% overlap). The highest median F1 and ROC-AUC scores and the lowest median loss values belong to scenario 12, closely followed by scenarios 11 and 6. Training ID 12 features a high variability in the F1 score but a low loss value computed on the unseen test data.
Next, given that the test sample sizes vary greatly (the test sets of larger tile sizes feature a lower number of images), the Scheffe test was applied to compare the performances in terms of the F1 and ROC-AUC scores and loss values and to identify the best-performing configurations. Scheffe's method is a statistical test used for post hoc analysis after ANOVA, where a comparison is made between each pair of training ID means using a t-test adjusted for the overall variability of the data, while maintaining the level of significance at 5% (it is more conservative in controlling the Type I error rate for all possible comparisons). It is used to make all possible contrasts between group means and arranges the groups into the homogeneous subsets that can be obtained. All groups in this analysis feature a sample size of N = 3. In Table 4, the post hoc test results are presented in terms of homogeneous subsets of configurations for the F1 score, ROC-AUC score, and loss metrics grouped by the training scenario ID after applying Scheffe's method, as described above.
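SPSS performs this test directly; for reference, the pairwise Scheffe comparison reduces to the following computation (a sketch with our own function name; SPSS additionally arranges the comparisons into homogeneous subsets):

```python
import numpy as np
from scipy.stats import f as f_dist

def scheffe_pairwise_p(groups: list) -> dict:
    """Scheffe post hoc p-values for all group pairs after a one-way ANOVA."""
    k, n_total = len(groups), sum(len(g) for g in groups)
    # Pooled within-group mean square (MSE) from the one-way ANOVA
    mse = sum(((g - g.mean()) ** 2).sum() for g in groups) / (n_total - k)
    p_values = {}
    for i in range(k):
        for j in range(i + 1, k):
            gi, gj = groups[i], groups[j]
            f_s = (gi.mean() - gj.mean()) ** 2 / (
                mse * (1 / len(gi) + 1 / len(gj)) * (k - 1))
            p_values[(i, j)] = f_dist.sf(f_s, k - 1, n_total - k)
    return p_values
```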
The homogeneous subsets reported in Table 4 contain the proposed configurations whose performances are not significantly different from each other at a level of significance of 5%. For example, configurations 6, 11, and 12 do not have significantly different F1 and ROC-AUC scores (highest values) or loss values (lowest values). These configurations do not appear in the other homogeneous subsets obtained, implying a significantly different performance compared to the rest of the configurations, with the models obtained from training scenario 12 being the best performers. These post hoc test results support the observations drawn from the boxplots in Figure 2.

5.2. Performance of the Best Model

The performance achieved by each of the trained models is presented in Appendix A. In Table 3, the computed metrics were grouped by scenario ID and their overall descriptive statistics on the test set were presented. It can be found that all road classification models achieved a high generalisation capacity, as their performance on the unseen data reaches mean levels of 0.3111, 0.9043, 0.8286, 0.8700, 0.8019, and 0.9288 in terms of loss, accuracy, F1 score, precision, recall, and ROC-AUC scores, respectively, with associated standard deviations of 0.1396, 0.0599, 0.0284, 0.0666, 0.0177, and 0.0292, respectively.
In the statistical analysis carried out in Section 5.1, a significant variability in the metrics was observed across the different training iterations, subsets of the data, and models. The best training scenario was scenario 12, as it obtained the highest mean performance on the unseen data. The three models trained in this scenario achieved mean values of loss, accuracy, F1 score, precision, recall, and ROC-AUC score of 0.1018, 0.9749, 0.8751, 0.9659, 0.8195, and 0.9786, respectively. By cross-referencing this information with the data from Appendix A, it can be observed that the best training iteration from this scenario was the third one, where loss values of 0.0948, 0.0948, and 0.0984, F1 score values of 0.8871, 0.8871, and 0.8728, and ROC-AUC scores of 0.9808, 0.9808, and 0.9766 were achieved on the train, validation, and test set, respectively. In Figure 3, the confusion matrices of the best CNN model (VGG-v2, trained with images of 1024 × 1024 pixels with an overlap of 12.5%) computed on the train, validation, and test sets can be found.
By analysing the error rates from the confusion matrices in Figure 3, it can be found that the resulting model correctly classified 38,585 training samples (35,768 as the positive class and 2817 as the negative class), while incorrectly predicting 266 negative samples as belonging to the “Road” class and 1015 “Road” samples as belonging to the negative class, with the error rate (ratio between incorrect predictions and total samples) on the training set being 3.2% (Figure 3a). On the validation set, the model correctly predicted 1886 positive and 140 negative samples, while it incorrectly predicted 11 negative samples as positives and 62 positive samples as negatives (as observed in Figure 3b); the corresponding error rate is 3.5%. On the test set, the best-performing model correctly predicted 2917 positive and 126 negative samples, while it incorrectly predicted 6 negative samples as positive and 74 positive samples as belonging to the negative class (Figure 3c). The associated error rate is 2.6% and proves the high generalisation capability of CNN models, as the error rate is slightly lower than those of the training and validation sets.
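As a quick check of the reported rates, the error rate follows directly from the confusion-matrix counts; for example, for the test set in Figure 3c:

```python
# Error rate = (FP + FN) / (TP + TN + FP + FN), using the Figure 3c counts
tp, tn, fp, fn = 2917, 126, 6, 74
error_rate = (fp + fn) / (tp + tn + fp + fn)
print(f"{error_rate:.1%}")  # 2.6%, matching the value reported above
```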
In Figure 2a, it can be observed that the best scenario (ID = 12) shows a higher variance in the F1 score. One cause could be the inherent randomness associated with the training process of DL models (where the weights have a random component in their initialisation, or mini-batches are selected randomly). This randomness can introduce variability in convergence and performance (even if the models are trained on the same representative, large-scale data) and result in F1 scores that are not entirely consistent across runs. In some experiments, the model might be making more accurate positive class (“Road”) predictions, while in others, it might be better at predicting the negative (“No road”) class, a behaviour conditioned by the precision–recall tradeoff. It is important to mention that performance metrics should be evaluated globally, and not at the level of a single metric. Scenario 12 is not one of the scenarios where a higher variability is present in the loss metrics or the ROC-AUC score.

5.3. Mean Performance on Unseen Test Data Grouped by Tile Size, Overlap, and Neural Network Architecture

In this section, the selected fixed factors are the tile size, the tile overlap, and the DL architecture trained. The loss, accuracy, F1 score, precision, recall, and ROC-AUC scores on the test set (performance metrics) act again as the dependent variables. ANOVA was applied to obtain the mean and standard deviation of the dependent variables and the inferential statistics (F-statistic and its p-value, together with η and η²). The results are grouped by tile size (N = 12 samples for each of the three tile sizes considered), tile overlap (0% and 12.5%, N = 18 samples for each group), and trained CNN architecture (VGG-v1 and VGG-v2, N = 18 samples for each group). The results are presented in Table 5.
In Table 5, it can be observed that the mean loss values decrease as the tile size increases (down to 0.32 for the 512 × 512 tile size and 0.1415 for the 1024 × 1024 tile size). The standard deviations of the loss values remain small across each considered size (from 0.0222 to 0.0371). This behaviour is repeated in the case of the accuracy, precision, and ROC-AUC score. The F1 score and its recall component do not display this constant increase pattern. One of the more plausible explanations for this situation could be the significant class imbalance in the data. Given the class weights applied, it seems that the models trained on tiles with a size of 512 × 512 pixels favoured a higher precision (a good identification of the minority class) at the expense of the recall metric (where the majority class is frequently misclassified), resulting in a lower F1 score.
Nonetheless, the p-values in the ANOVA table (corresponding to the F-statistic for the tile size as a fixed factor) are smaller than 0.001 for all the dependent variables and indicate that the differences in the metrics across the tile sizes are statistically significant (the observed trends are unlikely to be random). The values of the measures of effect size (η and η²) suggest that the tile size has a large effect on these metrics and a very strong positive association with the performance metrics (the values for η and η² are above 0.90, and even approach 0.99 in the cases of loss and accuracy), except for the recall, where the correlation ratio η of approximately 0.8 indicates a strong positive association and the η² of approximately 0.63 indicates a substantial effect size (implying that 63% of the variation in recall is attributable to the variation in tile size).
In relation to the overlap as a fixed factor, the mean performance metrics present a slight increase at the “12.5% overlap” level when compared to the “No overlap” group for all the metrics except the loss (where the average value decreases, which signals a better performance). This indicates a slight increase in the performance of the models trained on tiles featuring an overlap. The standard deviations of the “12.5% overlap” group indicate a slightly more variable performance when there is an overlap between adjacent images. As for the inferential statistics, the computed p-values are higher than 0.05 and indicate that the differences in performance metrics between the two levels of tile overlap are not statistically significant. The η values indicate a weak relationship between the tile overlap and each of the performance metrics, while the η² values show that only a very small proportion of the variance in each performance metric can be explained by the tile overlap (for example, the η² value of 0.015 for the F1 score implies that only 1.5% of the variance in the F1 score can be attributed to the level of tile overlap).
When considering the CNN architecture as a fixed factor, similarly to the overlap as an independent variable, the means of the performance metrics increase slightly for every metric except the loss values (where a slight decrease can be observed), indicating a marginally higher performance for the VGG-v2 model trained for road classification. These results suggest that, on average, the two models perform similarly. The standard deviations of the performance metrics of VGG-v2 are slightly higher than those obtained by the VGG-v1 group and indicate a slightly higher variability in the performance of VGG-v2 (for example, the standard deviation of the accuracy is 0.0602 for VGG-v1 and 0.0613 for VGG-v2). All the p-values are higher than 0.05 and indicate that the differences in the mean metrics between the two models are not statistically significant. The η values are low and indicate a weak relationship between the CNN architectures and the dependent variables (performance metrics). The η² values are even lower and indicate that an insignificant proportion of the variance in the metrics can be attributed to the model (in the case of accuracy, it approaches zero).
However, the p-values from the ANOVA table do not reveal which groups of the fixed factors differ from the others when there is a significant difference. For this reason, the analysis of the boxplots of the performance metrics grouped by the tile size, tile overlap, and CNN architecture was carried out next for the loss, F1 score, and ROC-AUC score values (as illustrated in Figure 4).
When grouping the metrics by the tile size, an increase in the median performance metrics can be observed at larger tile sizes (together with a decrease in the loss, which also indicates a better performance), with the exception of the F1 score for the tile size of 512 × 512 pixels, which, as discussed before, could be caused by the class imbalance present in the data or by a generally more pronounced sensitivity of the CNN models to predicting the positive class at this particular size (a higher mean precision was observed in Table 5). The results are aligned with those obtained in similar works [10].
When grouping the performance metrics by the tile overlap, it can be observed that the median F1 and ROC-AUC scores increase (and the loss values correspondingly decrease) for the models trained on data with a 12.5% overlap when compared to the models trained on data with no overlap.
Finally, in the case of the boxplots grouped by the trained CNN model in Figure 4, it can be observed that, although the median F1 and ROC-AUC scores are slightly higher for VGG-v1 (and its median loss value is smaller), the variability of VGG-v2 is higher: the upper whiskers of its F1 and ROC-AUC scores reach considerably higher values (and its loss whisker reaches a considerably smaller value) when compared to VGG-v1. These values were all computed on unseen testing data; the boxplot results support the observations from Table 5. Post hoc tests were not performed in this section because of the reduced number of groups within the fixed factors.
Next, to quantify the impact of the independent variables on the performance, a factorial ANOVA was applied to analyse the main and interaction effects of the fixed factors on the metrics.

5.4. Main and Interaction Effects with Factorial ANOVA

In this section, a factorial ANOVA is applied to examine whether the means of the F1 score, ROC-AUC score, and loss value as dependent variables differ significantly across the groups defined by the CNN architecture (model), tile size, and tile overlap as fixed factors, and whether there are significant interactions between two or more independent variables on the dependent variables. This type of analysis is applied to understand the influence of different categorical independent variables (fixed factors) on a dependent variable.
A factorial ANOVA studies the main effect of each factor (ignoring the effects of the other factors) and the interaction effects (the combined effect of two or more factors, which can differ from the sum of their main effects) on each dependent variable. For the interaction effect, the null hypothesis states that “the effect of one independent variable on the dependent variable does not differ depending on the level of another independent variable”. A rejected null hypothesis (p-value < 0.05) indicates that significant differences exist between the means of two or more independent groups.
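For illustration, this type of analysis can be reproduced with the statsmodels library. The sketch below builds synthetic data that mirrors the 2 (model) × 3 (size) × 2 (overlap) design with three repetitions per cell; the response values are placeholders, not the study’s measurements:

import numpy as np
import pandas as pd
import statsmodels.api as sm
from statsmodels.formula.api import ols

rng = np.random.default_rng(0)
rows = []
for model in ("VGG-v1", "VGG-v2"):
    for size in (256, 512, 1024):
        for overlap in (0.0, 12.5):
            for _ in range(3):  # three repetitions per design cell
                f1 = (0.80 + 0.00005 * size + 0.002 * (overlap > 0)
                      + rng.normal(0.0, 0.005))  # placeholder response
                rows.append({"model": model, "size": size,
                             "overlap": overlap, "f1": f1})
results = pd.DataFrame(rows)

# 'C(..., Sum)' codes each factor categorically with sum-to-zero contrasts
# (appropriate for Type III sums of squares); '*' expands to all main and
# interaction effects.
fit = ols("f1 ~ C(model, Sum) * C(size, Sum) * C(overlap, Sum)",
          data=results).fit()
print(sm.stats.anova_lm(fit, typ=3))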
The results of the factorial ANOVA test are presented in Table 6. Table 6 is divided into the various sources of variation; each source of variation is tested against the three dependent variables (the performance metrics considered) at a significance level of 0.05. The assumptions of a factorial ANOVA have been met in this study, as the observations are independent, the residuals follow a normal distribution, and the variance of the observations is homogeneous.
“Corrected Model” and “Intercept” are statistical terms used in the context of regression analysis in “Between-Subjects” factorial ANOVA tables, and they provide details related to the relationship between the studied variables. In Table 6, “Corrected Model” (source ID = 1) refers to the sums of squares that can be attributed to all the effects in the model (fixed and random factors, covariates, and their interactions), excluding the intercept. The F-test for the corrected model indicates whether the model explains any variance in the dependent variable (whether the variation in the performance metrics can be explained by the independent variables). The p-values are lower than 0.001; therefore, the model is highly statistically significant. The “Intercept” (source ID = 2) represents the mean value of the dependent variable when all independent variables are zero; the associated p-values for the three dependent variables (F1 score, ROC-AUC score, and loss value) are lower than 0.001, showing that the model intercepts are significantly different from zero.
The main effect null hypothesis concerns the marginal effect of a factor when all other factors are kept at a fixed level, and it states that this effect on the dependent variables is not significant. As can be observed in Table 6, the effect of the fixed factors “Size” (source ID = 4) and “Overlap” (source ID = 5) on the performance metrics is statistically significant (p-values lower than 0.05 in all cases). This indicates that the tile size and tile overlap significantly explain the variation in the dependent variables (F1 and ROC-AUC scores and loss values): there is a highly significant difference in performance due to different tile sizes (p-values lower than 0.001 for each performance metric) and significant differences caused by the tile overlap levels (p-values of 0.0038, <0.001, and 0.0055 for the F1 score, ROC-AUC score, and loss value, respectively). As for the main effect of the CNN model (source ID = 3) on the dependent variables, the p-values for the F1 score and loss value are greater than 0.05, indicating that the effect of the CNN architecture (“Model”) on these variables is not statistically significant. However, the effect of the CNN architecture on the ROC-AUC score is significant (p-value < 0.05).
As for the interaction effect between the tile size and tile overlap (Size * Overlap, source ID = 7) on the performance metrics, the p-values for the F1 score and loss value are greater than 0.05, indicating that the interaction effect is not significant for these variables. However, the interaction effect is significant for the ROC-AUC score (p-value < 0.05). A similar behaviour (a non-significant interaction effect for the F1 score and loss metrics, but a significant one for the ROC-AUC score) is displayed by the interaction effect between the CNN model and the overlap (Model * Overlap, source ID = 8). The interaction effect between the CNN architecture and the tile size (Model * Size, source ID = 6) is statistically significant for the ROC-AUC score and the loss value, but not for the F1 score (the computed p-value is higher than 0.05). Nonetheless, the p-value of approximately 0.06 for the F1 score is only slightly above the 0.05 threshold and can be considered to suggest a trend in the data.
In the case of the interaction effect between the three fixed factors (CNN architecture, tile size, and tile overlap; source ID = 9), the difference in performance is not significant (p-values of 0.8685, 0.0601, and 0.9805 for the F1 score, ROC-AUC score, and loss value, respectively). Again, the p-value of approximately 0.06 for the ROC-AUC score is only slightly above the 0.05 threshold and can suggest a trend in the data. In Table 6, “Error” (source ID = 10) represents the unexplained variation in the dependent variables. Finally, “Total” (source ID = 11) represents the total variation in the dependent variables, while “Corrected Total” (source ID = 12) represents the total variation in the dependent variables after removing the variation due to the model.
As a post hoc analysis following the factorial ANOVA, the Estimated Marginal Means (EMMs), or predicted marginal means, were computed to help interpret the results from Table 6. EMMs represent the means of the dependent variables across the distinct levels of each factor, averaged over the other factors (to control for their effects), and are useful for understanding the interaction effects of multiple fixed factors on the performance metrics. In this case, the EMMs provide the mean performance metric (F1 score, ROC-AUC score, and loss value) at each level of the considered factors. For the two-way interaction between tile size and overlap (Size * Overlap, source ID = 7 in Table 6), the metrics are reported for each combination of tile size (256 × 256, 512 × 512, and 1024 × 1024 pixels) and tile overlap (0% and 12.5%), averaged over the two CNN architectures (VGG-v1 and VGG-v2). For the three-way interaction (Model * Size * Overlap, source ID = 9 in Table 6), the metrics are reported for each combination of the levels of the three considered factors. The plot of the EMMs from Figure 5 illustrates the means for the interaction effects of the two and three fixed factors mentioned on the dependent variables. Appendix B presents the numerical values of the EMMs for the two-way interaction between the tile size and overlap (Size * Overlap) on the F1 and ROC-AUC scores and loss values. Appendix C presents the EMM values of the three-way interaction effect between the CNN architecture, tile size, and overlap (Model * Size * Overlap) on the same performance metrics.
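As an illustration of how the EMMs relate to the raw results in a balanced design such as this one, the following short sketch (reusing the synthetic `results` DataFrame from the factorial ANOVA sketch above) averages the cell means over the CNN architecture levels to obtain the Size * Overlap EMMs:

# In a balanced design, the EMMs for the Size * Overlap interaction reduce
# to the cell means averaged over the remaining factor (the CNN architecture).
cell_means = results.groupby(["size", "overlap", "model"])["f1"].mean()
emm_size_overlap = cell_means.groupby(["size", "overlap"]).mean()
print(emm_size_overlap)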
Subplots (a), (d), and (g) of Figure 5 present the EMMs of the two-way interaction between the tile size and tile overlap. The graphics suggest that the mean F1 and ROC-AUC scores increase as the tile size and tile overlap increase, while the mean loss values decrease as the size increases (an indicator of a better performance). When accounting for the three-way interaction (the rest of the subplots of Figure 5), it can be observed that the VGG-v2 model displays a generally better performance than VGG-v1 across the dependent variables, particularly at the largest tile size. For both CNN architectures, the F1 score generally increases as the tile size increases from 256 × 256 to 1024 × 1024 pixels, and it slightly improves when the tiles present a 12.5% overlap. This behaviour is also displayed by the ROC-AUC score metric; the value for the models trained on tiles of 1024 × 1024 pixels with a 12.5% overlap is considerably higher. Additionally, VGG-v2 achieved a lower loss than VGG-v1 at the 1024 × 1024 tile size (a lower loss value is an indicator of a better performance). As found in Figure 5, the loss decreases as the tile size increases from 256 × 256 to 1024 × 1024 pixels for both models and is slightly lower for models trained on tiles featuring a 12.5% overlap.

6. Discussion

This work was focused on statistically studying the generalisation capacity of road classification models using unseen testing data, and it was centred on assessing the impact of different tile size and tile overlap levels on the performance metrics. The indicators of performance considered were the loss value, F1 score, and ROC-AUC score, as these metrics ensure robustness and class distinction (accuracy can be misleading in the scenarios of imbalanced data present at higher tile sizes).
In this study, a significance level of 0.05 was applied for testing the null hypotheses. If the p-value is lower than 0.05, it can be concluded that the corresponding result is statistically significant. For p-values lower than 0.001, the result is considered highly significant (the observed data have a less than 0.1% chance of occurring under a correct null hypothesis). Conversely, a p-value higher than 0.05 (not significant) is interpreted as there being no robust evidence to reject the null hypothesis. Finally, a p-value slightly above 0.05 can be considered indicative of a trend in the data.

6.1. On the Homogeneity of the Performance and Differences between Training, Validation, and Testing Results

The data distribution of the training, validation, and test sets can be found in Table 1, with the performance metrics being presented in Appendix A. The metrics from the “Train” columns of Appendix A indicate the performance of the models on the data they were trained on. As expected, the values are higher on this set, as these data were directly used to model the classification function of the road classification model. At the end of each training iteration, the model had access to its corresponding validation set to compute the validation loss and guide the training process; the performance on the validation data is generally lower than that on the training data but is expected to be close to it if the model generalises well. The testing data were not processed during training or validation and provide an unbiased evaluation of the resulting model that reflects its real-world performance. A considerably better performance on the training data compared to the validation and test data would indicate overfitting, while low scores across all sets would indicate underfitting.
The results from Appendix A and Table 1 show a high degree of homogeneity in the metrics obtained under the same scenario settings. The models do not present marked overfitting or underfitting behaviour; the performance is good and consistent across all sets and indicates well-fitted models. The metrics are highest on the training set, slightly lower on the validation set, and lowest on the test set.
The loss value is higher for the test set than for the train and validation sets across all experiments; this is expected, as the model had no access to the test data during training. Nonetheless, the relatively low loss values suggest that the models can make reasonably accurate predictions on the test data. The F1 score and ROC-AUC score, which measure the overall performance and the discriminative ability of a model, respectively, show trends similar to the loss values. The ROC-AUC scores are high for all sets across all experiments, indicating that the models have a high capacity to discriminate between classes. A higher performance on the training set is a common pattern (a model is best tuned to the data it was trained on), but the performance on the validation and test sets is also high, suggesting that the models are not overfitting.

6.2. On the Training Scenarios and the Best Model

The descriptive statistics from Table 3 show that the models present a reduced standard deviation within the same training scenario: the highest within-scenario standard deviation is ±0.0322 for the loss value (training ID 1), ±0.0101 for the F1 score (training ID 12), and ±0.0082 for the ROC-AUC score (training ID 5). However, noticeable variations in performance were observed across different training scenarios. For example, training ID 12 yields the lowest mean loss value (0.1018), while training ID 8 yields the highest mean loss (0.4783; smaller values indicate better performance). For the F1 metric, the minimum mean value corresponds to training ID 3 (0.804), while the maximum mean value corresponds to training ID 12 (0.8751). As for the ROC-AUC metric, training ID 1 obtained the minimum mean value (0.8976), while the maximum mean value (0.9786) was obtained by training ID 12. Therefore, the training scenario significantly impacted the performance of a model, with the variation between groups (different training IDs) being much larger than the variation within groups (same training ID). This can also be observed in Figure 2. All the corresponding p-values are smaller than 0.001 and indicate highly significant statistical differences; the training scenario significantly impacted the ability of the models to generalise to the unseen data.
The η and η² measures of association from Table 3 have values close to 1 and indicate a very strong positive association between the training ID and each of the loss value, F1 score, and ROC-AUC score. The values imply that the training scenario had a significant effect on the performance and that a considerable proportion of the variance in each metric can be explained by the training ID.
The results of the post hoc Scheffé test (Table 4) revealed that training IDs 5, 6, 11, and 12 consistently performed better across the three metrics considered for the homogeneous sets, suggesting that these scenarios are likely the best-performing ones. Alternatively, the models from training IDs 1 and 2 appear to display a worse performance across the considered metrics. Figure 2 shows that the highest median F1 and ROC-AUC scores and the lowest median loss values are associated with scenario 12. The best-performing model was the VGG-v2 model trained in scenario 12 (on tiles of 1024 × 1024 pixels with a 12.5% overlap), which achieved a loss value of 0.0984, an F1 score of 0.8728, and an ROC-AUC score of 0.9766, together with an error rate of 3.5% on the test set (as described in Section 5.3).
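A pairwise post hoc comparison of the training scenarios can be sketched as follows; note that Tukey’s HSD from statsmodels is used here as a commonly available alternative to the Scheffé test applied in the study, and the values are synthetic placeholders approximating the reported scenario means:

import numpy as np
from statsmodels.stats.multicomp import pairwise_tukeyhsd

rng = np.random.default_rng(1)
# Synthetic F1 scores for three of the twelve training scenarios,
# three repetitions each (placeholder values only).
scenario_means = {1: 0.810, 3: 0.804, 12: 0.875}
f1, scenario = [], []
for scenario_id, mean in scenario_means.items():
    f1.extend(rng.normal(mean, 0.004, size=3))
    scenario.extend([scenario_id] * 3)

# Pairwise comparison of the scenario means at alpha = 0.05.
print(pairwise_tukeyhsd(endog=np.array(f1), groups=np.array(scenario), alpha=0.05))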
The loss values show some degree of homogeneity across different experiments and iterations, especially within the same training scenario. The F1 scores vary more significantly across different training scenarios and indicate that the precision and recall components were influenced by the training scenario. The ROC-AUC scores (indicating the ability of a model to distinguish between classes) proved to be relatively consistent regardless of the specific experiment or iteration.

6.3. On the Tile Size and Tile Overlap

The increasing class imbalance between the positive and negative classes at higher tile sizes (from approximately 47.5:52.5% to 90:10% for the 256 × 256 and 1024 × 1024 tile sizes, respectively, as presented in Table 1) was tackled by applying a class weight matrix during training (as described in Section 4) to prevent the models from becoming biased towards the over-represented class.
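A minimal sketch of such a weighting scheme, assuming a PyTorch implementation (the tile counts are hypothetical and only reproduce the approximate 90:10 imbalance, and the sketch assumes the minority class is the positive one), is shown below:

import torch
import torch.nn as nn

# Hypothetical tile counts reproducing an approximately 90:10 class imbalance.
n_majority, n_minority = 9000, 1000
# Weighting by the inverse frequency ratio makes errors on the
# under-represented class contribute proportionally more to the loss.
pos_weight = torch.tensor([n_majority / n_minority])
criterion = nn.BCEWithLogitsLoss(pos_weight=pos_weight)

logits = torch.randn(8, 1)                      # raw outputs for a batch of tiles
targets = torch.randint(0, 2, (8, 1)).float()   # binary road / no-road labels
print(criterion(logits, targets).item())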
Table 5 shows that the mean loss across the tile size levels ranges from 0.4717 (256 × 256 pixels) to 0.1415 (1024 × 1024 pixels). For the F1 score metric, the mean values range between 0.8056 (512 × 512 pixels) and 0.8667 (1024 × 1024 pixels), while for the ROC-AUC metric, the mean values lie between 0.9002 (256 × 256 pixels) and 0.9660 (1024 × 1024 pixels). It can also be highlighted that the maximum standard deviation for the loss metric (0.037) is observed for the 1024 × 1024 size. For the F1 and ROC-AUC scores, the standard deviations are generally lower than 0.01 across each size level and indicate a similar performance of the models across the training scenarios. The 1024 × 1024 pixel size obtained the highest mean performance on the unseen data for every considered metric, and the dependent variables indicate a higher performance of the trained models at higher sizes. The results suggest that more semantic information from a scene helps the models make more accurate predictions (a considerably higher mean precision), but might also make the correct identification of all actual positive cases more difficult (a slightly lower mean recall).
In relation to the tile overlap levels, the analysis of the results shows that the mean values of the loss metric range from 0.3001 (12.5% overlap) to 0.3221 (no overlap). The mean values increase from 0.8252 (no overlap) to 0.8320 (12.5% overlap) for the F1 score metric and from 0.9250 (no overlap) to 0.9326 (12.5% overlap) for the ROC-AUC score metric. The standard deviations indicate a slightly higher variability at the higher overlap level. The differences in median performance between the two overlap levels can also be identified in Figure 4. It can be considered that a tile overlap of 12.5% results in a better performance than a 0% overlap across all metrics, suggesting that overlapping tiles could help the models make more accurate predictions due to the additional context and continuity provided.
When accounting for the results grouped by the CNN architecture levels (Table 5 and Figure 4), it can be observed that the VGG-v1 model performed slightly worse than VGG-v2 (mean loss values of 0.3159 and 0.3062, mean F1 scores of 0.8272 and 0.8299, and mean ROC-AUC scores of 0.9261 and 0.9315, respectively). The variability in the performance metrics is slightly higher for the better-performing CNN architecture. Both VGG-v1 and VGG-v2 perform better at higher tile sizes (1024 × 1024 pixels) across all performance metrics.
The η and η² measures of association indicate the strength of the relationship between the independent variables and the performance metrics (values closer to one suggest a stronger relationship). For tile size, the η values (between 0.796 and 0.999) and the η² values (between 0.634 and 0.997) are very high and suggest that the tile size strongly affects the performance metrics, with the η² values indicating that a large proportion of the variance in each metric can be explained by this factor. For tile overlap and CNN architecture, the η and η² values are relatively low, suggesting a weaker relationship. This indicates that, while the tile overlap and the choice of model do have some effect on the performance metrics, their impact is considerably smaller than that of the tile size.
The results from Section 5.2 indicate that the use of larger tiles leads to a better average road classification performance, with a highly significant p-value of less than 0.001 (the models trained on tiles of 1024 × 1024 pixels delivered the best results). The tile overlap of 12.5% slightly outperforms the 0% overlap, and VGG-v2 slightly outperforms VGG-v1. The p-values might suggest that the differences in the mean performance metrics between the two levels of tile overlap (0% and 12.5%) and the two CNN architectures (VGG-v1 and VGG-v2) do not have a substantial impact on the performance and could be caused by randomness. However, statistical significance does not always equate to practical significance. In this case, given the reduced number of training repetitions at the scenario level (due to the high computational cost required), the results might imply that the significance cannot be identified by analysing the mean values alone. This aspect is also indicated by the median results from Figure 4, with better performances being achieved by the model with a higher number of trainable parameters at the higher overlap level. For this reason, additional statistical analyses were carried out to study the main and interaction effects on these metrics by applying factorial ANOVA tests.

6.4. On the Main and Interaction Effects of Tile Size, Tile Overlap, and Neural Network Architecture

The null hypothesis can be rejected if the p-value is lower than 0.05 (implying that a significant effect on the performance metrics can be observed when all other factors are kept at a fixed level). As found in Table 6, the main effect of the tile size is highly significant (p-values lower than 0.001). The main effect of the tile overlap is also highly significant for the ROC-AUC score (p-value lower than 0.001) and significant for the F1 score and loss metrics (p-values of 0.0038 and 0.0055, respectively). The main effect of the CNN architecture as a fixed term proved to be non-significant for the F1 score and loss value as dependent variables (p-values higher than 0.05) and significant for the ROC-AUC score (p-value of 0.0014).
The effect of the interactions between the fixed factors on the performance metrics was also evaluated. These p-value tests verify whether the effect of one factor on the dependent variables changes at different levels of another factor. A significant p-value suggests that the effect of one independent factor on the dependent variables depends on the level of a second independent factor, and vice versa. A p-value higher than 0.05 means that the combined effect on the performance is not significantly different from what would be expected based on the individual effects, and that there is not enough evidence to reject the null interaction effect hypothesis.
The p-values for the interaction effect between the CNN model and tile size (Model * Size) are highly significant for the ROC-AUC score (p-value < 0.001), significant for the loss value (p-value of 0.0034), but not significant for the F1 score (p-value of 0.0649). These p-values test whether the effect of the model on the dependent variables changes at different tile sizes. The significant p-values suggest that the effect of the model on the loss and ROC-AUC metrics depends on the tile size, and vice versa.
The interaction effects of the tile size and tile overlap (Size * Overlap) and of the CNN architecture and tile overlap (Model * Overlap) on the metrics are not significant for the F1 score (p-value > 0.05), but are significant for the ROC-AUC score. This means that the effect of the tile size on the ROC-AUC score depends on the tile overlap level (in the case of the “Size * Overlap” interaction effect) and that the effect of the CNN architecture on the ROC-AUC metric changes across the overlap levels (in the case of the “Model * Overlap” interaction effect), and vice versa.
The p-values for the three-way interaction among the CNN architecture, tile size, and tile overlap (Model * Size * Overlap) are not statistically significant for any of the dependent variables (p > 0.05), although, in the case of the ROC-AUC score, the p-value of approximately 0.06 is close enough to the threshold to suggest a trend. Nonetheless, the values imply that the combined effect of model, size, and overlap on the performance is not significantly different from what would be expected based on their individual effects and their two-way interactions.
Therefore, while the main effects of tile size and overlap are significant, their interaction effects with the model are not consistently significant across all performance metrics. These statistical interpretations suggest, nonetheless, that higher tile sizes and a small amount of overlap can improve the performance of these models. The graphics from Figure 5 support these findings.

6.5. A Ranking of the Contributions of the Factors to Performance

Although the statistical tests applied in this study do not directly rank the contributions of the tile size, tile overlap, and CNN model to the DL model performance, the experimental and analysis designs that produced the results in Table 3, Table 4, Table 5 and Table 6 and Figure 2, Figure 3, Figure 4 and Figure 5 offer significant insight into the effects of these factors and enable a global, qualitative ranking of their importance. However, it is important to note that the ranking is based on the road classification results achieved in this study for the specific dataset, the tile size and tile overlap levels, and the CNN models selected. The relative contributions of these factors may vary for other tasks, datasets, or DL models, and further research is needed to generalise these findings (these aspects are discussed in more depth in Section 6.6).
In this work, the factor with the highest impact on the performance of the road classification models proved to be the tile size. In Table 5 and Figure 4a–c, where the performance metrics were grouped by tile size, it can be observed that larger tile sizes consistently result in better performance metrics (a lower loss value and higher F1 and ROC-AUC scores). The magnitude of the differences between the tile size levels indicates that this factor can be considered the most influential one in this analysis. The results show that models trained on larger tiles (1024 × 1024 pixels) performed better than those trained on smaller tiles, likely because of the increased semantic context provided by larger tiles. Furthermore, the main effect of the tile size on performance proved to be highly statistically significant, with p-values < 0.001 for all metrics, as shown in Table 6 (source ID = 4).
The second most influential factor on the model performance can be considered to be the tile overlap. In Table 5 and Figure 4d–f, it can be observed that, when the metrics were grouped by tile overlap, the models trained with a 12.5% overlap consistently outperform those trained without an overlap (although to a lesser degree than for tile size). The main effect of the tile overlap on performance (source ID = 5 in Table 6) proved to be highly statistically significant for the ROC-AUC score and statistically significant for the loss value and F1 score. Therefore, road recognition models trained with a 12.5% overlap outperformed those trained without any overlap, and the results indicate that the additional border context and continuity likely help the learning process.
The CNN model architecture can be considered the least influential factor for the performance achieved in this study. The data from subplots (g), (h), and (i) of Figure 4 and from Table 5 show that the performance differences between the CNN architectures are less pronounced than those observed for tile size and tile overlap. In addition, the main effect of the CNN model (source ID = 3 in Table 6) on the metrics varies from significant for the ROC-AUC score to non-significant for the loss value and F1 score. Therefore, although the choice of CNN architecture also affected the performance, the contribution of this factor can be considered less important than that of the tile size or tile overlap.
These insights are also supported by the EMM plots from Figure 5 and by the data from Appendix B and Appendix C. Nonetheless, this qualitative ranking refers to the current study and the specific impact of the tile size, tile overlap, and CNN model factors, and it is conditioned by the dataset and training settings specific to this study. Please note that the addition of other CNN models, tile overlap levels, or tile sizes could result in a significantly different impact on the performance.

6.6. On the Uncertainty of the Models, the Limitations of the Study, and Future Directions

To reduce overfitting and enable a high generalisation capacity of the road classification models, this study was conducted on data from the SROADEX dataset (where 8650 km2 from representative regions of Spain were labelled with binary road information). Given the scope of this study, six pairs of training and validation sets (one for each tile size and overlap combination) were generated by applying a 95:5% split criterion (as detailed in Section 3). A novel test set featuring data unseen during training was labelled from a single, representative orthoimage covering approximately 825 km2 to assess the real-world generalisation performance of the resulting models. As p-values are highly dependent on the sample size, the use of large training datasets with high data variability supports the statistical robustness of the results.
It is also important to mention that studying the normality of the data (using statistical tests such as Shapiro–Wilk or Kolmogorov–Smirnov) before applying a statistical analysis is important for images, as factors such as lighting conditions, the characteristics of the cameras used for data capture, or other specific features within the images can result in data distributions that are not normal (i.e., skewed). If the data do not follow a normal distribution (for example, if the images have many dark pixels, the distribution of pixel intensities could be skewed towards the lower end of the range), some form of transformation (such as a log transformation) must be applied to bring them closer to a normal distribution (otherwise, non-parametric statistical methods that do not assume normality have to be applied). Nonetheless, in this study, given the considerable sample size and following similar statistical studies, the assumption of data normality was adopted (as explained at the end of Section 3).
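The following SciPy sketch illustrates these normality checks on a synthetic, right-skewed sample, together with the effect of a log transformation (the data are placeholders standing in for per-tile pixel statistics):

import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
values = rng.lognormal(mean=0.0, sigma=0.6, size=500)  # skewed toy sample

print(stats.shapiro(values))    # a small p-value rejects normality
print(stats.kstest(values, "norm", args=(values.mean(), values.std())))

# A log transformation often brings right-skewed data closer to normality.
print(stats.shapiro(np.log(values)))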
The standard procedure for training DL classification models was applied; it included the normalisation of the pixel values to the interval [0, 1], in-memory data augmentation techniques with small parameters, and the application of transfer learning. A class weight matrix was applied during training to reduce the bias towards the over-represented class. The same hyperparameter values were applied in all training scenarios.
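An illustrative sketch of such a pipeline is given below, written with torchvision; the augmentation parameters are assumptions, and the standard VGG-16 backbone merely stands in for the custom VGG-v1/VGG-v2 architectures trained in this study:

import torch.nn as nn
from torchvision import models, transforms

# Augmentation "with small parameters" and normalisation of pixels to [0, 1].
train_transforms = transforms.Compose([
    transforms.RandomHorizontalFlip(),
    transforms.RandomRotation(degrees=5),
    transforms.ToTensor(),  # converts to float tensors scaled to [0, 1]
])

# Transfer learning: start from ImageNet weights and replace the classifier
# head with a single logit for the binary road / no-road decision.
backbone = models.vgg16(weights=models.VGG16_Weights.IMAGENET1K_V1)
backbone.classifier[-1] = nn.Linear(backbone.classifier[-1].in_features, 1)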
The experiments were repeated three times for each training scenario presented in Table 2 to reduce the randomness associated with DL model convergence; a higher number of training repetitions can be considered in a future study, as more significant insights can be achieved by applying statistical tests to a higher number of training iterations (resulting in a higher number of degrees of freedom). Nonetheless, by conducting the experiments on a large dataset, the effect of this drawback is expected to be reduced (training on large datasets helps the models converge and results in models with similar performances). Additionally, although there were only three repetitions at the training scenario level (so that the ANOVA analysis from Section 5.1 could be valid), this experimental design resulted in N = 12, N = 18, and N = 18 samples for the levels of the groups analysed in Section 5.3 (the performance achieved on the test set when grouped by tile size, tile overlap, and trained CNN architecture, respectively). A higher number of training repetitions would have resulted in unfeasible computation times, as the current experiments already required around six months on the available computational infrastructure.
Tile size and overlap levels can significantly impact the memory footprint and the computation times, as larger tiles involve the processing of more image data and generally lead to larger memory usage (especially at higher overlap levels, where the same pixels are loaded into memory multiple times). There is a trade-off between tile size, computational efficiency, and memory usage: although larger tiles can leverage the parallel processing capacities of modern GPUs, they require more memory to store the intermediate computation results, and it is important to consider this aspect when designing and training computer vision models. This can be a limitation when working with hardware that has limited processing capabilities. To find a balance between the performance and the computational resources required to achieve it (memory usage and computation times), a future work could explore strategies for optimising the tile size and overlap level by comprehensively monitoring and analysing the training time to understand how changes in the levels of the considered factors affect the use of the available computational resources. Another interesting line of research is the optimisation of RAM usage by exploring techniques for efficient data batching (managing the GPU memory when processing larger tiles to automatically maximise the usable batch size) or the use of sparse representations to reduce the memory footprint. These would broaden the applicability of the models to real-world scenarios.
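A back-of-the-envelope estimate of the memory occupied by the input tiles alone (ignoring activations and gradients, which add substantially more; the batch size is an assumption) already shows the quadratic growth with tile size:

def input_batch_megabytes(tile_px, channels=3, batch_size=16, bytes_per_value=4):
    """Memory occupied by one batch of float32 input tiles, in megabytes."""
    return tile_px * tile_px * channels * batch_size * bytes_per_value / 1024 ** 2

for size in (256, 512, 1024):
    print(f"{size} x {size}: {input_batch_megabytes(size):.0f} MB per batch")
# 256 -> 12 MB, 512 -> 48 MB, 1024 -> 192 MB: quadrupling with each size step.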
Another limitation of this study is the reduced number of CNN architectures trained and the reduced number of overlap levels and tile sizes considered. Nonetheless, these drawbacks were strongly conditioned by the available computational budget, as the introduction of new models or of additional tile size and overlap levels (for example, multiples of 12.5%) would greatly increase the number of training scenarios. Although it would be beneficial to further validate these findings with additional experiments, the computational cost required is significantly higher. To address this, future studies could select an experimental design that enables a higher number of experiments by working with a smaller dataset or by securing a sufficiently large computational infrastructure and budget.
Another important aspect is that the data from all tile sizes were processed with the same CNN models, and the relationship between the receptive field (RF) of the CNNs and the tile size level was not considered in the current study. The RF of a CNN is the region of the input space that influences a given learned feature after convolutional processing, and it relates to the context that might be lost during processing. The RF size (defined in Equation (8)) is determined by the CNN architecture, specifically by the number and size of the convolutional and pooling layers.
RF = 1 + 2 × (n − 1) × s        (8)
In Equation (8), RF is the receptive field size, n is the number of layers of the CNN (each convolutional layer with a 3 × 3 kernel increases the receptive field by two, while a convolutional layer with a 4 × 4 kernel increases it by three), and s represents the stride size. If the receptive field is too small, the network might lose context, as it cannot process enough neighbouring pixels for correct learning, while receptive fields that are too large might not effectively capture the spatial dependencies in the data. One of the simpler ways to adjust the receptive field of a CNN is to add or remove convolutional layers (adding layers results in deeper, more complex models that might require more computational resources for training), to add pooling layers, or to use convolutions with larger strides. Other ways to increase the receptive field are dilated convolutions or depth-wise convolutions. In our study, since the same CNNs were trained for all tile sizes, the receptive field size was constant while the tile size varied, and this could be a source of error for tile sizes that do not match the receptive field size of the models well. Deeper models, or models with different receptive field sizes, could be particularly interesting at higher tile sizes, and future research could investigate the optimal relationship between the receptive field and the tile size for different CNN architectures and tasks [47].
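For clarity, Equation (8) can be transcribed directly as a small helper function (the layer counts below are arbitrary examples):

def receptive_field(n_layers: int, stride: int = 1) -> int:
    """Receptive field size per Equation (8): RF = 1 + 2 * (n - 1) * s."""
    return 1 + 2 * (n_layers - 1) * stride

for n_layers in (2, 8, 13):
    print(f"{n_layers} layers (stride 1): RF = {receptive_field(n_layers)} pixels")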
The lower interpretability of the models, intrinsic to deep learning implementations that model classification functions with millions of parameters, can also be mentioned as a challenge, as interpretability is sacrificed for high levels of performance. While the statistical analysis provides valuable insights, it is also important to consider other factors such as the practical implications of the results and the potential impact of false positives and false negatives.
It must also be acknowledged that the actual impact of the tile size and overlap on model performance might depend on other factors not considered in this analysis. However, given the statistical significance levels computed, the findings and insights of this study could be valuable for improving the performance of DL models trained in workflows similar to the ones experimented with here. Nonetheless, this study is based on binary road data (a continuous geospatial element) and might not be applicable to all geospatial classification works.
Finally, it is also recommended to explore the impact on performance of additional tile division strategies, such as the “Flip-n-Slide” technique proposed by Abrahams et al. [15] and described in Section 2. In this study, we used a standard, sequential division of the full orthoimage with the combinations of tile overlap and tile size presented and analysed above. This standard approach allows for a systematic evaluation of the impact of tile size and overlap on model performance, but differs from the “Flip-n-Slide” method [15], which uses a sliding window to ensure that each pixel is represented in eight tiles. This novel approach could enhance the learning procedure by providing richer representations of the geospatial features located near tile edges.
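A minimal sketch of the sequential division scheme used here (the array input and dimensions are illustrative assumptions) is given below; note how a 12.5% overlap reduces the stride and increases the tile count:

import numpy as np

def tile_image(image, tile_size, overlap=0.0):
    """Yield square tiles cut sequentially with the given overlap fraction."""
    stride = int(tile_size * (1.0 - overlap))  # e.g., 1024 px with 12.5% -> 896 px
    height, width = image.shape[:2]
    for top in range(0, height - tile_size + 1, stride):
        for left in range(0, width - tile_size + 1, stride):
            yield image[top:top + tile_size, left:left + tile_size]

orthoimage = np.zeros((4608, 4608, 3), dtype=np.uint8)  # placeholder raster
print(sum(1 for _ in tile_image(orthoimage, 1024, overlap=0.0)))    # 16 tiles
print(sum(1 for _ in tile_image(orthoimage, 1024, overlap=0.125)))  # 25 tiles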
We hope that more studies will focus on exploring the optimal sizes and overlaps for additional models to provide guidelines that will improve the experimental decisions taken by researchers and professionals in the field. By following statistically proven findings and optimal combinations, the number of training scenarios considered in future geo-studies could be reduced, leading to a decrease in the energy resources required for achieving optimal, accurate models.

7. Conclusions

This study was focused on statistically analysing the impact of the tile size and tile overlap on the performance of CNN models trained for the road classification task. Real-world aerial orthoimagery data labelled with binary road information, covering a large part of the Spanish territory, were used to train and test the DL implementations. The aim was to objectively study the impact of the image size and overlap on the performance and to identify the optimal combination of size and overlap levels that enables a higher generalisation capacity of DL road classification models.
A comprehensive statistical evaluation of the performance metrics was applied. The performance of the models on the validation and test sets was close to the performance on the training set, suggesting that the models are robust (no underfitting or overfitting behaviour was detected). The results on the unseen data were statistically analysed. The performance was consistent across the different training scenarios and iterations, suggesting a high generalisation capacity of the trained DL models. The VGG-v2 model trained on data with a tile size of 1024 × 1024 pixels and a tile overlap of 12.5% yielded the best performance in terms of higher F1 and ROC-AUC scores and lower loss values.
The variation in the performance metrics across different training scenarios indicated the relative importance of the fixed factors (i.e., the levels of tile size led to a significant change in the performance metrics). The p-values of the main effects tests for size and overlap as fixed factors were significant for every metric (highly significant, with p-values lower than 0.001, in the case of the tile size) and demonstrated the important impact of these two independent variables on the road classification model performance. The “Model * Size” interaction effect was significant for the ROC-AUC score and loss value, while the “Size * Overlap” and “Model * Overlap” interaction effects were only significant for the ROC-AUC score. The remaining two-way effects and the three-way interaction effect were not significant (p-values higher than 0.05). The post hoc results support these findings.
These results suggest that the tile size and overlap, as well as their interaction, play a significant role in the performance and show that larger tile sizes (1024 × 1024 pixels) and a small amount of overlap (12.5%) between adjacent image tiles can improve the performance of models trained for road classification. These findings show the benefit of the additional scene information and of the additional continuity of the objects near the borders (both providing more learning context), and they can guide the selection of model settings for optimal performance in future geospatial classification studies. This combination of tile size and overlap resulted in a higher generalisation capacity of the trained DL models. Future studies could consider additional tile overlap and tile size levels.
Nonetheless, more research on models with a larger number of trainable parameters and on a higher number of training repetitions for each scenario could be carried out to assess and understand the impact on the performance in more detail (the additional computational budget requirements made this unfeasible for the present study). Future studies should also tackle the semantic segmentation of geospatial objects, given the importance of this DL operation for road cartography generation and the extraction errors found near the borders, which are often mentioned as a challenge in existing specialised works. Future studies could also approach the explainability and interpretability of CNN models by analysing the convolution kernels or the feature maps learned by the trained models, by exploring more tiling strategies, or by studying the optimal relationship between the receptive field of CNNs and the tile size.

Author Contributions

C.-I.C.: conceptualisation, data curation, formal analysis, investigation, methodology, software, validation, visualisation, writing–original draft, and writing–review and editing; M.-Á.M.-C.: data curation, funding acquisition, investigation, project administration, resources, validation, visualisation, writing–original draft, and writing–review and editing; N.Y.: formal analysis, validation, visualisation, and writing–review and editing; T.S.: validation and writing–review and editing; A.-C.B.: validation and writing–review and editing. All authors have read and agreed to the published version of the manuscript.

Funding

This research received funding from the “Deep learning applied to the recognition, semantic segmentation, post-processing, and extraction of the geometry of main roads, secondary roads and paths (SROADEX)” project (grant PID2020-116448GB-I00, funded by the AEI).

Data Availability Statement

The code featuring the training and evaluation of the implementations, the test data, and the resulting road classification models are available at the Zenodo repository (https://zenodo.org/records/10835684, accessed on 20 March 2024) and are distributed under a CC-BY 4.0 license. The training and validation sets are based on the SROADEX dataset (https://zenodo.org/records/6482346, accessed on 5 June 2022), which was re-split into tiles featuring the image sizes (256 × 256, 512 × 512, and 1024 × 1024 pixels) and image overlaps (0% and 12.5%) considered in this study. Due to their size on disk of approximately 546 gigabytes, these data are only available upon request from the corresponding author.

Acknowledgments

The authors thank the anonymous reviewers for their suggestions that improved the analyses and for recommending interesting future lines of work.

Conflicts of Interest

The authors declare no conflicts of interest. The funders had no role in the design of this study; in the collection, analyses, or interpretation of data; in the writing of the manuscript; or in the decision to publish the results.

Appendix A

Table A1. Performance metrics (loss, accuracy, F1 score, precision, recall, and ROC-AUC score) obtained by the road classification models trained in the twelve training scenarios presented in Table 2 (three repetitions per scenario) on the training, validation, and test sets. Each cell reports the Train/Validation/Test values.

Exp. | Scenario ID | Iter. | Loss (Train/Val/Test) | Accuracy (Train/Val/Test) | F1 Score (Train/Val/Test) | Precision (Train/Val/Test) | Recall (Train/Val/Test) | ROC-AUC (Train/Val/Test)
1 | 1 | 1 | 0.2217/0.2476/0.4365 | 0.9099/0.8971/0.8328 | 0.9097/0.8968/0.8157 | 0.9096/0.8969/0.8178 | 0.9099/0.8967/0.8137 | 0.9710/0.9628/0.9038
2 | 1 | 2 | 0.2148/0.2424/0.4891 | 0.9123/0.8998/0.8266 | 0.9121/0.8994/0.8110 | 0.9124/0.9000/0.8094 | 0.9118/0.8991/0.8127 | 0.9715/0.9637/0.8950
3 | 1 | 3 | 0.2219/0.2471/0.4951 | 0.9097/0.8996/0.8222 | 0.9096/0.8995/0.8022 | 0.9094/0.8992/0.8076 | 0.9103/0.9000/0.7978 | 0.9722/0.9645/0.8939
4 | 2 | 1 | 0.2270/0.2574/0.4824 | 0.9072/0.8958/0.8303 | 0.9071/0.8957/0.8143 | 0.9073/0.8960/0.8139 | 0.9081/0.8967/0.8147 | 0.9715/0.9638/0.8962
5 | 2 | 2 | 0.2161/0.2458/0.4684 | 0.9117/0.8997/0.8296 | 0.9114/0.8995/0.8157 | 0.9117/0.8997/0.8122 | 0.9112/0.8993/0.8201 | 0.9715/0.9634/0.8973
6 | 2 | 3 | 0.2225/0.2474/0.4253 | 0.9084/0.8985/0.8376 | 0.9083/0.8983/0.8246 | 0.9082/0.8982/0.8207 | 0.9083/0.8985/0.8295 | 0.9703/0.9632/0.9078
7 | 3 | 1 | 0.1624/0.1978/0.3110 | 0.9328/0.9170/0.9090 | 0.9139/0.8934/0.8043 | 0.9219/0.9025/0.8302 | 0.9067/0.8825/0.7839 | 0.9812/0.9718/0.9216
8 | 3 | 2 | 0.1798/0.2152/0.3533 | 0.9242/0.9128/0.9122 | 0.9000/0.8840/0.8049 | 0.9237/0.9108/0.8461 | 0.8821/0.8647/0.7758 | 0.9804/0.9729/0.9156
9 | 3 | 3 | 0.1690/0.1984/0.3228 | 0.9287/0.9158/0.9090 | 0.9085/0.8916/0.8040 | 0.9174/0.9016/0.8310 | 0.9006/0.8830/0.7829 | 0.9791/0.9714/0.9211
10 | 4 | 1 | 0.1771/0.2219/0.2998 | 0.9267/0.9098/0.9125 | 0.9044/0.8824/0.8155 | 0.9157/0.8930/0.8341 | 0.8947/0.8734/0.7999 | 0.9775/0.9653/0.9272
11 | 4 | 2 | 0.1664/0.2066/0.2939 | 0.9318/0.9140/0.9099 | 0.9118/0.8891/0.8035 | 0.9190/0.8950/0.8357 | 0.9053/0.8837/0.7794 | 0.9801/0.9685/0.9234
12 | 4 | 3 | 0.1728/0.2169/0.3141 | 0.9290/0.9114/0.9114 | 0.9055/0.8819/0.8048 | 0.9276/0.9040/0.8419 | 0.8885/0.8652/0.7778 | 0.9814/0.9696/0.9224
13 | 5 | 1 | 0.0890/0.1125/0.1934 | 0.9636/0.9568/0.9709 | 0.8855/0.8623/0.8470 | 0.9563/0.9400/0.9764 | 0.8372/0.8121/0.7772 | 0.9888/0.9819/0.9517
14 | 5 | 2 | 0.1027/0.1309/0.1945 | 0.9604/0.9531/0.9718 | 0.8738/0.8475/0.8551 | 0.9530/0.9358/0.9693 | 0.8223/0.7939/0.7893 | 0.9851/0.9759/0.9356
15 | 5 | 3 | 0.0799/0.1053/0.1916 | 0.9693/0.9655/0.9725 | 0.9091/0.8978/0.8655 | 0.9458/0.9348/0.9441 | 0.8793/0.8679/0.8129 | 0.9899/0.9851/0.9462
16 | 6 | 1 | 0.1021/0.1021/0.1269 | 0.9638/0.9638/0.9728 | 0.8816/0.8816/0.8644 | 0.9462/0.9462/0.9566 | 0.8362/0.8362/0.8061 | 0.9816/0.9816/0.9700
17 | 6 | 2 | 0.1035/0.1035/0.1572 | 0.9657/0.9657/0.9737 | 0.8895/0.8895/0.8673 | 0.9456/0.9456/0.9711 | 0.8483/0.8483/0.8043 | 0.9826/0.9826/0.9704
18 | 6 | 3 | 0.0941/0.0941/0.1313 | 0.9647/0.9647/0.9737 | 0.8859/0.8859/0.8703 | 0.9446/0.9446/0.9578 | 0.8434/0.8434/0.8136 | 0.9826/0.9826/0.9706
19 | 7 | 1 | 0.2218/0.2454/0.4606 | 0.9099/0.8979/0.8230 | 0.9097/0.8976/0.8083 | 0.9095/0.8976/0.8051 | 0.9100/0.8977/0.8123 | 0.9715/0.9643/0.8989
20 | 7 | 2 | 0.2136/0.2414/0.4769 | 0.9118/0.9016/0.8275 | 0.9115/0.9013/0.8112 | 0.9116/0.9015/0.8109 | 0.9115/0.9010/0.8116 | 0.9721/0.9642/0.9007
21 | 7 | 3 | 0.2180/0.2460/0.4913 | 0.9111/0.8983/0.8272 | 0.9110/0.8980/0.8102 | 0.9108/0.8980/0.8109 | 0.9112/0.8980/0.8096 | 0.9719/0.9634/0.9004
22 | 8 | 1 | 0.2217/0.2528/0.4640 | 0.9106/0.8997/0.8383 | 0.9105/0.8996/0.8216 | 0.9103/0.8994/0.8238 | 0.9108/0.9000/0.8197 | 0.9720/0.9643/0.9086
23 | 8 | 2 | 0.2117/0.2438/0.4784 | 0.9134/0.9018/0.8303 | 0.9133/0.9017/0.8145 | 0.9131/0.9015/0.8138 | 0.9137/0.9021/0.8153 | 0.9740/0.9661/0.9004
24 | 8 | 3 | 0.2169/0.2503/0.4925 | 0.9106/0.8990/0.8247 | 0.9104/0.8988/0.8123 | 0.9104/0.8988/0.8071 | 0.9104/0.8988/0.8203 | 0.9714/0.9633/0.8998
25 | 9 | 1 | 0.1619/0.2048/0.3390 | 0.9330/0.9169/0.9110 | 0.9134/0.8919/0.8060 | 0.9262/0.9072/0.8382 | 0.9025/0.8794/0.7818 | 0.9823/0.9726/0.9237
26 | 9 | 2 | 0.1777/0.2069/0.2935 | 0.9271/0.9126/0.9059 | 0.9051/0.8855/0.7924 | 0.9213/0.9042/0.8288 | 0.8920/0.8709/0.7662 | 0.9791/0.9694/0.9157
27 | 9 | 3 | 0.1705/0.2109/0.3509 | 0.9295/0.9155/0.9095 | 0.9088/0.8898/0.8102 | 0.9217/0.9062/0.8265 | 0.8980/0.8767/0.7962 | 0.9800/0.9705/0.9152
28 | 10 | 1 | 0.1866/0.2356/0.3188 | 0.9227/0.9099/0.9099 | 0.8974/0.8803/0.8036 | 0.9179/0.9006/0.8356 | 0.8814/0.8646/0.7796 | 0.9770/0.9643/0.9144
29 | 10 | 2 | 0.1777/0.2309/0.3475 | 0.9259/0.9097/0.9106 | 0.9008/0.8786/0.8040 | 0.9265/0.9050/0.8387 | 0.8817/0.8595/0.7785 | 0.9815/0.9683/0.9201
30 | 10 | 3 | 0.1691/0.2144/0.2955 | 0.9305/0.9133/0.9144 | 0.9089/0.8868/0.8140 | 0.9229/0.8983/0.8453 | 0.8971/0.8770/0.7902 | 0.9807/0.9674/0.9225
31 | 11 | 1 | 0.0914/0.1078/0.1631 | 0.9636/0.9593/0.9728 | 0.8868/0.8743/0.8636 | 0.9523/0.9332/0.9598 | 0.8410/0.8322/0.8038 | 0.9894/0.9852/0.9686
32 | 11 | 2 | 0.0719/0.0878/0.1132 | 0.9715/0.9673/0.9737 | 0.9190/0.9030/0.8732 | 0.9364/0.9421/0.9460 | 0.9032/0.8716/0.8229 | 0.9909/0.9853/0.9720
33 | 11 | 3 | 0.0799/0.0917/0.1218 | 0.9706/0.9636/0.9734 | 0.9154/0.8945/0.8684 | 0.9388/0.9205/0.9574 | 0.8949/0.8723/0.8111 | 0.9892/0.9850/0.9709
34 | 12 | 1 | 0.1011/0.1011/0.1123 | 0.9657/0.9657/0.9731 | 0.8854/0.8854/0.8664 | 0.9650/0.9650/0.9750 | 0.8328/0.8328/0.8086 | 0.9805/0.9805/0.9753
35 | 12 | 2 | 0.1133/0.1133/0.0947 | 0.9662/0.9662/0.9763 | 0.8950/0.8950/0.8862 | 0.9303/0.9303/0.9578 | 0.8663/0.8663/0.8360 | 0.9742/0.9742/0.9840
36 | 12 | 3 | 0.0948/0.0948/0.0984 | 0.9652/0.9652/0.9744 | 0.8871/0.8871/0.8728 | 0.9477/0.9477/0.9649 | 0.8436/0.8436/0.8140 | 0.9808/0.9808/0.9766
Note: Experiment 36 corresponds to the model with the best performance on the testing set.

Appendix B

Table A2. Estimated Marginal Means (EMMs) for the interaction between the tile size and tile overlap as fixed factors (Size * Overlap) on the performance metrics (F1 score, ROC-AUC score, and loss value) as dependent variables.

Dependent Variable | Tile Overlap (%) | Tile Size (Pixels × Pixels) | Mean | Std. Error | 95% CI Lower | 95% CI Upper
F1 score | 0 | 256 | 0.8098 | 0.0027 | 0.8042 | 0.8153
F1 score | 0 | 512 | 0.8036 | 0.0027 | 0.7981 | 0.8092
F1 score | 0 | 1024 | 0.8621 | 0.0027 | 0.8566 | 0.8677
F1 score | 12.5 | 256 | 0.8172 | 0.0027 | 0.8116 | 0.8227
F1 score | 12.5 | 512 | 0.8076 | 0.0027 | 0.8020 | 0.8131
F1 score | 12.5 | 1024 | 0.8712 | 0.0027 | 0.8657 | 0.8768
ROC-AUC score | 0 | 256 | 0.8988 | 0.0030 | 0.8926 | 0.9050
ROC-AUC score | 0 | 512 | 0.9188 | 0.0030 | 0.9126 | 0.9250
ROC-AUC score | 0 | 1024 | 0.9575 | 0.0030 | 0.9513 | 0.9637
ROC-AUC score | 12.5 | 256 | 0.9017 | 0.0030 | 0.8955 | 0.9079
ROC-AUC score | 12.5 | 512 | 0.9217 | 0.0030 | 0.9154 | 0.9279
ROC-AUC score | 12.5 | 1024 | 0.9745 | 0.0030 | 0.9683 | 0.9807
Loss | 0 | 256 | 0.4749 | 0.0105 | 0.4535 | 0.4963
Loss | 0 | 512 | 0.3284 | 0.0105 | 0.3070 | 0.3498
Loss | 0 | 1024 | 0.1629 | 0.0105 | 0.1415 | 0.1844
Loss | 12.5 | 256 | 0.4685 | 0.0105 | 0.4471 | 0.4899
Loss | 12.5 | 512 | 0.3116 | 0.0105 | 0.2902 | 0.3330
Loss | 12.5 | 1024 | 0.1201 | 0.0105 | 0.0987 | 0.1416

Appendix C

Table A3. Estimated Marginal Means (EMMs) for the interaction between the CNN architecture, tile size, and tile overlap as fixed factors (Model * Size * Overlap) on the performance metrics (F1 score, ROC-AUC score, and loss value) as dependent variables.

Dependent Variable | Model | Tile Size (Pixels × Pixels) | Tile Overlap (%) | Mean | Std. Error | 95% CI Lower | 95% CI Upper
F1 score | VGG-v1 | 256 | 0 | 0.8096 | 0.0037 | 0.8020 | 0.8172
F1 score | VGG-v1 | 256 | 12.5 | 0.8182 | 0.0037 | 0.8106 | 0.8258
F1 score | VGG-v1 | 512 | 0 | 0.8044 | 0.0037 | 0.7968 | 0.8120
F1 score | VGG-v1 | 512 | 12.5 | 0.8079 | 0.0037 | 0.8003 | 0.8155
F1 score | VGG-v1 | 1024 | 0 | 0.8559 | 0.0037 | 0.8483 | 0.8635
F1 score | VGG-v1 | 1024 | 12.5 | 0.8673 | 0.0037 | 0.8597 | 0.8749
F1 score | VGG-v2 | 256 | 0 | 0.8099 | 0.0037 | 0.8023 | 0.8175
F1 score | VGG-v2 | 256 | 12.5 | 0.8161 | 0.0037 | 0.8085 | 0.8237
F1 score | VGG-v2 | 512 | 0 | 0.8029 | 0.0037 | 0.7953 | 0.8105
F1 score | VGG-v2 | 512 | 12.5 | 0.8072 | 0.0037 | 0.7996 | 0.8148
F1 score | VGG-v2 | 1024 | 0 | 0.8684 | 0.0037 | 0.8608 | 0.8760
F1 score | VGG-v2 | 1024 | 12.5 | 0.8751 | 0.0037 | 0.8675 | 0.8827
ROC-AUC score | VGG-v1 | 256 | 0 | 0.8976 | 0.0026 | 0.8922 | 0.9030
ROC-AUC score | VGG-v1 | 256 | 12.5 | 0.9004 | 0.0026 | 0.8950 | 0.9058
ROC-AUC score | VGG-v1 | 512 | 0 | 0.9194 | 0.0026 | 0.9140 | 0.9248
ROC-AUC score | VGG-v1 | 512 | 12.5 | 0.9243 | 0.0026 | 0.9189 | 0.9297
ROC-AUC score | VGG-v1 | 1024 | 0 | 0.9445 | 0.0026 | 0.9391 | 0.9499
ROC-AUC score | VGG-v1 | 1024 | 12.5 | 0.9703 | 0.0026 | 0.9649 | 0.9757
ROC-AUC score | VGG-v2 | 256 | 0 | 0.9000 | 0.0026 | 0.8946 | 0.9054
ROC-AUC score | VGG-v2 | 256 | 12.5 | 0.9029 | 0.0026 | 0.8975 | 0.9083
ROC-AUC score | VGG-v2 | 512 | 0 | 0.9182 | 0.0026 | 0.9128 | 0.9236
ROC-AUC score | VGG-v2 | 512 | 12.5 | 0.9190 | 0.0026 | 0.9136 | 0.9244
ROC-AUC score | VGG-v2 | 1024 | 0 | 0.9705 | 0.0026 | 0.9651 | 0.9759
ROC-AUC score | VGG-v2 | 1024 | 12.5 | 0.9786 | 0.0026 | 0.9732 | 0.9840
Loss | VGG-v1 | 256 | 0 | 0.4736 | 0.0125 | 0.4478 | 0.4993
Loss | VGG-v1 | 256 | 12.5 | 0.4587 | 0.0125 | 0.4329 | 0.4845
Loss | VGG-v1 | 512 | 0 | 0.3290 | 0.0125 | 0.3033 | 0.3548
Loss | VGG-v1 | 512 | 12.5 | 0.3026 | 0.0125 | 0.2768 | 0.3284
Loss | VGG-v1 | 1024 | 0 | 0.1932 | 0.0125 | 0.1674 | 0.2189
Loss | VGG-v1 | 1024 | 12.5 | 0.1385 | 0.0125 | 0.1127 | 0.1642
Loss | VGG-v2 | 256 | 0 | 0.4763 | 0.0125 | 0.4505 | 0.5020
Loss | VGG-v2 | 256 | 12.5 | 0.4783 | 0.0125 | 0.4525 | 0.5041
Loss | VGG-v2 | 512 | 0 | 0.3278 | 0.0125 | 0.3020 | 0.3536
Loss | VGG-v2 | 512 | 12.5 | 0.3206 | 0.0125 | 0.2948 | 0.3464
Loss | VGG-v2 | 1024 | 0 | 0.1327 | 0.0125 | 0.1069 | 0.1585
Loss | VGG-v2 | 1024 | 12.5 | 0.1018 | 0.0125 | 0.0760 | 0.1276

Figure 1. A map with the distribution of the regions covered by the orthophotos used in this study. Note: Each number within the map represents the official zone nomenclature found in the 1:50,000 National Topographic Map that is produced by the National Geographical Institute of Spain.
Figure 2. Boxplots of the performance metrics obtained by the road classification models grouped by their training IDs in terms of (a) F1 scores, (b) ROC-AUC scores, and (c) loss values.
Figure 3. Confusion matrices obtained by the model that achieved the highest mean metrics in scenario ID 12 (VGG-v2 network trained with tiles featuring a size of 1024 × 1024 pixels and an overlap of 12.5%) on (a) the train set containing n = 39,866 tiles, (b) the validation set containing n = 2099 tiles, and (c) the unseen data (test set containing n = 3123 tiles). Note: The ratios of true and false predictions are similar between sets and indicate both a good performance (lack of underfitting) and a lack of overfitting behaviour.
Figure 4. Boxplots of the F1 scores, ROC-AUC scores, and loss values grouped by the levels of (a–c) tile size, (d–f) tile overlap, and (g–i) road classification model, respectively.
Figure 5. Estimated Marginal Means (EMMs) of the two-way interaction between the tile size and overlap (Size * Overlap) on the (a) F1 score, (d) ROC-AUC score, and (g) loss value, together with the EMMs of the three-way interaction effect between the CNN architecture, the tile size, and the tile overlap as fixed factors (Model * Size * Overlap) on the (b,c) F1 score, (e,f) ROC-AUC score, and (h,i) loss value.
Table 1. The distribution of the image tiles for the road classification task in the train, validation, and test sets.

Tile Size (Pixels) | Tile Overlap (%) | Set | Road (No. Images) | No Road (No. Images)
256 × 256 | 0 | Train | 237,919 | 262,879
256 × 256 | 0 | Validation | 12,523 | 13,826
256 × 256 | 0 | Percentage of data | 47.51% | 52.49%
256 × 256 | 12.5 | Train | 312,092 | 340,567
256 × 256 | 12.5 | Validation | 16,426 | 17,925
256 × 256 | 12.5 | Percentage of data | 47.82% | 52.18%
256 × 256 | – | Test set (novel area and no overlap) | 33,584 | 18,255
256 × 256 | – | Percentage of data | 64.79% | 35.21%
512 × 512 | 0 | Train | 90,475 | 34,085
512 × 512 | 0 | Validation | 4762 | 1794
512 × 512 | 0 | Percentage of data | 72.64% | 27.36%
512 × 512 | 12.5 | Train | 118,078 | 42,448
512 × 512 | 12.5 | Validation | 6215 | 2287
512 × 512 | 12.5 | Percentage of data | 73.53% | 26.47%
512 × 512 | – | Test set (novel area and no overlap) | 10,916 | 1871
512 × 512 | – | Percentage of data | 85.37% | 14.63%
1024 × 1024 | 0 | Train | 27,705 | 3124
1024 × 1024 | 0 | Validation | 1457 | 165
1024 × 1024 | 0 | Percentage of data | 89.86% | 10.14%
1024 × 1024 | 12.5 | Train | 36,034 | 3832
1024 × 1024 | 12.5 | Validation | 1897 | 202
1024 × 1024 | 12.5 | Percentage of data | 90.39% | 9.61%
1024 × 1024 | – | Test set (novel area and no overlap) | 2923 | 200
1024 × 1024 | – | Percentage of data | 93.60% | 6.40%
Notes: (1) The data are organised by tile size (256 × 256, 512 × 512, and 1024 × 1024 pixels) and by percentage of tile overlap (0% and 12.5%). (2) The training and validation sets were obtained by applying a 95:5 split to binary data labelled with road information from an area of approximately 8650 km2. The test set contains novel binary road data covering approximately 825 km2, divided into tiles of the studied sizes with no overlap. (3) The spatial resolution of the image tiles is 0.5 m. Therefore, a tile of 256 × 256 pixels covers a land area of approximately 0.016 km2, a tile of 512 × 512 pixels covers approximately 0.065 km2, while a tile of 1024 × 1024 pixels covers approximately 0.262 km2.
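The land-coverage figures in note (3) follow directly from the pixel count and the 0.5 m spatial resolution; a minimal Python sketch (the helper name is illustrative) reproduces them:

```python
# Land area covered by a square tile, given its side length in pixels
# and the ground sampling distance (spatial resolution) in metres/pixel.
def tile_area_km2(side_px: int, gsd_m: float = 0.5) -> float:
    side_m = side_px * gsd_m       # tile side length in metres
    return side_m ** 2 / 1e6       # m^2 -> km^2

for side in (256, 512, 1024):
    print(f"{side} x {side} px -> {tile_area_km2(side):.4f} km2")
# Prints 0.0164, 0.0655 and 0.2621 km2, matching the approximate values
# of 0.016, 0.065 and 0.262 km2 given in note (3).
```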
Table 2. Training scenarios considered for the road classification with convolutional neural networks.

Training Scenario ID | Deep Learning Model | Tile Size (Pixels) | Tile Overlap (%)
1 | VGG-v1 | 256 × 256 | 0
2 | VGG-v1 | 256 × 256 | 12.5
3 | VGG-v1 | 512 × 512 | 0
4 | VGG-v1 | 512 × 512 | 12.5
5 | VGG-v1 | 1024 × 1024 | 0
6 | VGG-v1 | 1024 × 1024 | 12.5
7 | VGG-v2 | 256 × 256 | 0
8 | VGG-v2 | 256 × 256 | 12.5
9 | VGG-v2 | 512 × 512 | 0
10 | VGG-v2 | 512 × 512 | 12.5
11 | VGG-v2 | 1024 × 1024 | 0
12 | VGG-v2 | 1024 × 1024 | 12.5
Note: The training of each scenario was repeated three times to enable the statistical analysis (ANOVA remains valid with as few as three samples per group) and to control for the random variation associated with the convergence of the DL models.
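The twelve scenarios are simply the Cartesian product of the two architectures, three tile sizes, and two overlap levels; a minimal sketch of how such a grid (with the three repetitions per scenario) could be enumerated, using illustrative constant names:

```python
from itertools import product

MODELS = ("VGG-v1", "VGG-v2")
TILE_SIZES = (256, 512, 1024)   # pixels per side
OVERLAPS = (0.0, 12.5)          # percent
REPETITIONS = 3                 # three runs per scenario for the ANOVA

# Scenario IDs 1-12 follow the ordering of Table 2: the model varies
# slowest, then the tile size, then the overlap level.
for scenario_id, (model, size, overlap) in enumerate(
        product(MODELS, TILE_SIZES, OVERLAPS), start=1):
    for run in range(1, REPETITIONS + 1):
        print(f"ID {scenario_id:2d} | {model} | {size}x{size} px | "
              f"{overlap:>4}% overlap | run {run}")
```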
Table 3. Means and their standard deviations, F-statistics and their p-values, together with the Eta (η) and Eta squared (η²) association measures (ANOVA results) of the loss, accuracy, F1 score, precision, recall, and ROC-AUC score as dependent variables and the training scenarios as fixed factors.

Training Scenario ID | Statistic | Loss | Accuracy | F1 Score | Precision | Recall | ROC-AUC Score
1 | Mean | 0.4736 | 0.8272 | 0.8096 | 0.8116 | 0.8081 | 0.8976
1 | Std. Deviation | 0.0322 | 0.0053 | 0.0069 | 0.0054 | 0.0089 | 0.0054
2 | Mean | 0.4587 | 0.8325 | 0.8182 | 0.8156 | 0.8214 | 0.9004
2 | Std. Deviation | 0.0298 | 0.0044 | 0.0056 | 0.0045 | 0.0075 | 0.0064
3 | Mean | 0.3290 | 0.9101 | 0.8044 | 0.8358 | 0.7809 | 0.9194
3 | Std. Deviation | 0.0218 | 0.0018 | 0.0005 | 0.0090 | 0.0044 | 0.0033
4 | Mean | 0.3026 | 0.9113 | 0.8079 | 0.8372 | 0.7857 | 0.9243
4 | Std. Deviation | 0.0104 | 0.0013 | 0.0066 | 0.0041 | 0.0123 | 0.0025
5 | Mean | 0.1932 | 0.9717 | 0.8559 | 0.9633 | 0.7931 | 0.9445
5 | Std. Deviation | 0.0015 | 0.0008 | 0.0093 | 0.0170 | 0.0182 | 0.0082
6 | Mean | 0.1385 | 0.9734 | 0.8673 | 0.9618 | 0.8080 | 0.9703
6 | Std. Deviation | 0.0164 | 0.0005 | 0.0030 | 0.0080 | 0.0049 | 0.0003
7 | Mean | 0.4763 | 0.8259 | 0.8099 | 0.8090 | 0.8112 | 0.9000
7 | Std. Deviation | 0.0154 | 0.0025 | 0.0015 | 0.0033 | 0.0014 | 0.0010
8 | Mean | 0.4783 | 0.8311 | 0.8161 | 0.8149 | 0.8184 | 0.9029
8 | Std. Deviation | 0.0143 | 0.0068 | 0.0049 | 0.0084 | 0.0027 | 0.0049
9 | Mean | 0.3278 | 0.9088 | 0.8029 | 0.8312 | 0.7814 | 0.9182
9 | Std. Deviation | 0.0303 | 0.0026 | 0.0093 | 0.0062 | 0.0150 | 0.0048
10 | Mean | 0.3206 | 0.9116 | 0.8072 | 0.8399 | 0.7828 | 0.9190
10 | Std. Deviation | 0.0260 | 0.0024 | 0.0059 | 0.0050 | 0.0065 | 0.0042
11 | Mean | 0.1327 | 0.9733 | 0.8684 | 0.9544 | 0.8126 | 0.9705
11 | Std. Deviation | 0.0267 | 0.0005 | 0.0048 | 0.0074 | 0.0096 | 0.0017
12 | Mean | 0.1018 | 0.9746 | 0.8751 | 0.9659 | 0.8195 | 0.9786
12 | Std. Deviation | 0.0093 | 0.0016 | 0.0101 | 0.0086 | 0.0145 | 0.0047
Inferential Statistics | F-statistic | 130.338 | 1115.404 | 60.938 | 216.721 | 7.412 | 130.648
Inferential Statistics | p-value | <0.001 | <0.001 | <0.001 | <0.001 | <0.001 | <0.001
Inferential Statistics | η | 0.992 | 0.999 | 0.983 | 0.995 | 0.879 | 0.992
Inferential Statistics | η² | 0.984 | 0.998 | 0.965 | 0.990 | 0.773 | 0.984
Total (Descriptive Statistics) | Mean | 0.3111 | 0.9043 | 0.8286 | 0.8700 | 0.8019 | 0.9288
Total (Descriptive Statistics) | Std. Deviation | 0.1396 | 0.0599 | 0.0284 | 0.0666 | 0.0177 | 0.0292
Note: (1) The F-statistics, their corresponding p-values, and the measures of association are obtained from a one-way ANOVA applied to the performance metrics to verify whether their means differ significantly across training scenario IDs (the fixed factor), at a significance level of 0.05. (2) The training scenario with the best performance and the statistically significant ANOVA results on the mean performance metrics are represented in bold.
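The analysis itself was run in SPSS; an equivalent one-way ANOVA with an eta-squared effect size can be sketched in Python as follows (the group values below are placeholders, not the measured metrics):

```python
import numpy as np
from scipy.stats import f_oneway

# Placeholder data: three repetitions per training scenario (12 groups).
rng = np.random.default_rng(seed=0)
groups = [rng.normal(loc=mu, scale=0.005, size=3)
          for mu in np.linspace(0.80, 0.88, 12)]

f_stat, p_value = f_oneway(*groups)        # one-way ANOVA across the groups
print(f"F = {f_stat:.3f}, p = {p_value:.4g}")

# Eta squared: the share of the total variation explained by the grouping.
values = np.concatenate(groups)
ss_total = ((values - values.mean()) ** 2).sum()
ss_within = sum(((g - g.mean()) ** 2).sum() for g in groups)
print(f"eta^2 = {(ss_total - ss_within) / ss_total:.3f}")
```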
Table 4. Homogeneous subsets obtained by applying Scheffé's post hoc test in terms of F1 and ROC-AUC scores and loss, grouped by training IDs at a significance level of 0.05.

F1 Score:
Training ID | Subset 1 | Subset 2
9 | 0.8029 |
3 | 0.8044 |
10 | 0.8072 |
4 | 0.8079 |
1 | 0.8096 |
7 | 0.8099 |
8 | 0.8161 |
2 | 0.8182 |
5 | | 0.8559
6 | | 0.8673
11 | | 0.8684
12 | | 0.8751
p-value | 0.650 | 0.314

ROC-AUC Score:
Training ID | Subset 1 | Subset 2 | Subset 3 | Subset 4 | Subset 5 | Subset 6
1 | 0.8976 | | | | |
7 | 0.9000 | 0.9000 | | | |
2 | 0.9004 | 0.9004 | | | |
8 | 0.9029 | 0.9029 | 0.9029 | | |
9 | | 0.9182 | 0.9182 | 0.9182 | |
10 | | | 0.9190 | 0.9190 | |
3 | | | 0.9194 | 0.9194 | |
4 | | | | 0.9243 | |
5 | | | | | 0.9445 |
6 | | | | | | 0.9703
11 | | | | | | 0.9705
12 | | | | | | 0.9786
p-value | 0.997 | 0.051 | 0.107 | 0.990 | 1.000 | 0.910

Loss:
Training ID | Subset 1 | Subset 2 | Subset 3 | Subset 4
12 | 0.1018 | | |
11 | 0.1327 | 0.1327 | |
6 | 0.1385 | 0.1385 | |
5 | | 0.1932 | |
4 | | | 0.3026 |
10 | | | 0.3206 |
9 | | | 0.3278 |
3 | | | 0.3290 |
2 | | | | 0.4587
1 | | | | 0.4736
7 | | | | 0.4763
8 | | | | 0.4783
p-value | 0.946 | 0.426 | 0.996 | 1.000
Notes: (1) Training IDs found in the same homogeneous subset display generalisation performances that are not significantly different from each other. (2) If two training IDs do not share a homogeneous subset, the difference in performance between them is statistically significant. (3) The mean squared error was 4.07 × 10⁻⁵ for the F1 score, 2.043 × 10⁻⁵ for the ROC-AUC score, and 0.000 for the loss, respectively. (4) The homogeneous subsets with the best mean performance are represented in bold.
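Scheffé's criterion declares two group means different when their gap exceeds a margin built from the ANOVA mean squared error and an F quantile. A minimal sketch under the assumptions of this study (k = 12 scenarios, n = 3 runs each, hence 24 error degrees of freedom; the function name is illustrative):

```python
from math import sqrt
from scipy.stats import f

def scheffe_differs(mean_i: float, mean_j: float, mse: float,
                    k: int = 12, n: int = 3, alpha: float = 0.05) -> bool:
    """True if two group means differ significantly under Scheffé's test."""
    df_between, df_error = k - 1, k * n - k
    f_crit = f.ppf(1 - alpha, df_between, df_error)   # upper F quantile
    margin = sqrt(df_between * f_crit) * sqrt(mse * (1 / n + 1 / n))
    return abs(mean_i - mean_j) > margin

# F1 means of training IDs 12 and 9 with the reported MSE of 4.07e-5:
print(scheffe_differs(0.8751, 0.8029, mse=4.07e-5))   # True: different subsets
```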
Table 5. An ANOVA analysis of the mean road classification metrics across the various levels of tile size, overlap, and CNN architecture.

Independent Variable | Category | Statistic | Loss | Accuracy | F1 Score | Precision | Recall | ROC-AUC Score
Tile Size (pixels) | 256 × 256 | Mean | 0.4717 | 0.8292 | 0.8135 | 0.8128 | 0.8148 | 0.9002
Tile Size (pixels) | 256 × 256 | Std. Deviation | 0.0222 | 0.0051 | 0.0059 | 0.0056 | 0.0076 | 0.0046
Tile Size (pixels) | 512 × 512 | Mean | 0.3200 | 0.9104 | 0.8056 | 0.8360 | 0.7827 | 0.9202
Tile Size (pixels) | 512 × 512 | Std. Deviation | 0.0228 | 0.0021 | 0.0059 | 0.0063 | 0.0091 | 0.0041
Tile Size (pixels) | 1024 × 1024 | Mean | 0.1415 | 0.9733 | 0.8667 | 0.9613 | 0.0883 → 0.8083 | 0.9660
Tile Size (pixels) | 1024 × 1024 | Std. Deviation | 0.0371 | 0.0013 | 0.0096 | 0.0104 | 0.0149 | 0.0140
Tile Size (pixels) | Inferential Statistics | F-statistic | 411.747 | 5730.323 | 246.451 | 1283.264 | 28.559 | 174.008
Tile Size (pixels) | Inferential Statistics | p-value | <0.001 | <0.001 | <0.001 | <0.001 | <0.001 | <0.001
Tile Size (pixels) | Inferential Statistics | η | 0.981 | 0.999 | 0.968 | 0.994 | 0.796 | 0.956
Tile Size (pixels) | Inferential Statistics | η² | 0.961 | 0.997 | 0.937 | 0.987 | 0.634 | 0.913
Tile Overlap (%) | 0 | Mean | 0.3221 | 0.9028 | 0.8252 | 0.8675 | 0.7979 | 0.9250
Tile Overlap (%) | 0 | Std. Deviation | 0.1339 | 0.0616 | 0.0278 | 0.0677 | 0.0167 | 0.0266
Tile Overlap (%) | 12.5 | Mean | 0.3001 | 0.9057 | 0.8320 | 0.8726 | 0.8060 | 0.9326
Tile Overlap (%) | 12.5 | Std. Deviation | 0.1481 | 0.0600 | 0.0294 | 0.0674 | 0.0181 | 0.0320
Tile Overlap (%) | Inferential Statistics | F-statistic | 0.219 | 0.021 | 0.510 | 0.050 | 1.948 | 0.599
Tile Overlap (%) | Inferential Statistics | p-value | 0.643 | 0.886 | 0.480 | 0.825 | 0.172 | 0.444
Tile Overlap (%) | Inferential Statistics | η | 0.080 | 0.025 | 0.122 | 0.038 | 0.233 | 0.132
Tile Overlap (%) | Inferential Statistics | η² | 0.006 | 0.001 | 0.015 | 0.001 | 0.054 | 0.017
Model (CNN architecture) | VGG-v1 | Mean | 0.3159 | 0.9044 | 0.8272 | 0.8709 | 0.7995 | 0.9261
Model (CNN architecture) | VGG-v1 | Std. Deviation | 0.1288 | 0.0602 | 0.0261 | 0.0678 | 0.0170 | 0.0263
Model (CNN architecture) | VGG-v2 | Mean | 0.3062 | 0.9042 | 0.8299 | 0.8692 | 0.8043 | 0.9315
Model (CNN architecture) | VGG-v2 | Std. Deviation | 0.1532 | 0.0613 | 0.0313 | 0.0673 | 0.0184 | 0.0324
Model (CNN architecture) | Inferential Statistics | F-statistic | 0.042 | 0.000 | 0.080 | 0.006 | 0.654 | 0.307
Model (CNN architecture) | Inferential Statistics | p-value | 0.839 | 0.995 | 0.779 | 0.941 | 0.424 | 0.583
Model (CNN architecture) | Inferential Statistics | η | 0.035 | 0.001 | 0.048 | 0.013 | 0.137 | 0.095
Model (CNN architecture) | Inferential Statistics | η² | 0.001 | 0.000 | 0.002 | 0.000 | 0.019 | 0.009
Notes: (1) The F-statistics, their corresponding p-values, and the measures of association are obtained from one-way ANOVA tests applied to the metric values to verify whether the mean performance metrics differ significantly across the levels of each fixed factor (tile size, tile overlap, and trained CNN architecture), at a significance level of 0.05. (2) The levels of the independent variables with the best performance and their statistically significant ANOVA results on the mean performance metrics are represented in bold.
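As a cross-check, the one-way F-statistic and η² in the table above are tied together by the degrees of freedom ($k$ factor levels, $N = 36$ observations); for the tile-size factor ($k = 3$) and the recall metric, for instance:

$$F = \frac{\eta^2/(k-1)}{(1-\eta^2)/(N-k)} = \frac{0.634/2}{0.366/33} \approx 28.6,$$

which agrees with the reported F = 28.559 up to the rounding of η².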
Table 6. An analysis of the main and interaction effects of the size, overlap, and model as fixed factors and the loss, F1, and ROC-AUC scores as dependent variables (by means of the "between-subjects table").

ID | Source | Dependent Variable | Type III Sum of Squares | df | Mean Square | F | p-Value
1 | Corrected Model | F1 score | 0.0273 a | 11 | 0.0025 | 60.94 | <0.001
1 | Corrected Model | ROC-AUC score | 0.0294 b | 11 | 0.0027 | 130.65 | <0.001
1 | Corrected Model | Loss value | 0.6706 c | 11 | 0.0610 | 130.34 | <0.001
2 | Intercept | F1 score | 24.7158 | 1 | 24.7158 | 606,928.45 | <0.001
2 | Intercept | ROC-AUC score | 31.0576 | 1 | 31.0576 | 1,519,926.45 | <0.001
2 | Intercept | Loss value | 3.4838 | 1 | 3.4838 | 7,448.61 | <0.001
3 | Model | F1 score | 6.615 × 10⁻⁵ | 1 | 6.615 × 10⁻⁵ | 1.62 | 0.2147
3 | Model | ROC-AUC score | 0.0003 | 1 | 0.0003 | 13.06 | 0.0014
3 | Model | Loss value | 0.0008 | 1 | 0.0008 | 1.80 | 0.1920
4 | Size | F1 score | 0.0265 | 2 | 0.0133 | 325.37 | <0.001
4 | Size | ROC-AUC score | 0.0273 | 2 | 0.0136 | 667.29 | <0.001
4 | Size | Loss value | 0.6555 | 2 | 0.3278 | 700.78 | <0.001
5 | Overlap | F1 score | 0.0004 | 1 | 0.0004 | 10.25 | 0.0038
5 | Overlap | ROC-AUC score | 0.0005 | 1 | 0.0005 | 25.29 | <0.001
5 | Overlap | Loss value | 0.0044 | 1 | 0.0044 | 9.32 | 0.0055
6 | Size * Overlap | F1 score | 4.1602 × 10⁻⁵ | 2 | 2.0801 × 10⁻⁵ | 0.51 | 0.6064
6 | Size * Overlap | ROC-AUC score | 0.0004 | 2 | 0.0002 | 9.74 | <0.001
6 | Size * Overlap | Loss value | 0.0021 | 2 | 0.0011 | 2.25 | 0.1269
7 | Model * Size | F1 score | 0.0003 | 2 | 0.0001 | 3.07 | 0.0649
7 | Model * Size | ROC-AUC score | 0.0007 | 2 | 0.0003 | 16.30 | <0.001
7 | Model * Size | Loss value | 0.0068 | 2 | 0.0034 | 7.29 | 0.0034
8 | Model * Overlap | F1 score | 9.8178 × 10⁻⁶ | 1 | 9.8178 × 10⁻⁶ | 0.24 | 0.6279
8 | Model * Overlap | ROC-AUC score | 0.0001 | 1 | 0.0001 | 5.78 | 0.0243
8 | Model * Overlap | Loss value | 0.0009 | 1 | 0.0009 | 1.92 | 0.1786
9 | Model * Size * Overlap | F1 score | 1.1549 × 10⁻⁵ | 2 | 5.7744 × 10⁻⁶ | 0.14 | 0.8685
9 | Model * Size * Overlap | ROC-AUC score | 0.0001 | 2 | 6.4747 × 10⁻⁵ | 3.17 | 0.0601
9 | Model * Size * Overlap | Loss value | 1.8477 × 10⁻⁵ | 2 | 9.2386 × 10⁻⁶ | 0.02 | 0.9805
10 | Error | F1 score | 0.0010 | 24 | 4.0723 × 10⁻⁵ | |
10 | Error | ROC-AUC score | 0.0005 | 24 | 2.0434 × 10⁻⁵ | |
10 | Error | Loss value | 0.0112 | 24 | 0.0005 | |
11 | Total | F1 score | 24.7441 | 36 | | |
11 | Total | ROC-AUC score | 31.0874 | 36 | | |
11 | Total | Loss value | 4.1656 | 36 | | |
12 | Corrected Total | F1 score | 0.0283 | 35 | | |
12 | Corrected Total | ROC-AUC score | 0.0299 | 35 | | |
12 | Corrected Total | Loss value | 0.6818 | 35 | | |
Notes: (1) The "df" column indicates the degrees of freedom. (2) "Corrected Model" shows the variation explained by the model for each dependent variable; its adjusted R2 values are 0.950, 0.976, and 0.976 for the F1 score, ROC-AUC score, and loss value, respectively (corresponding to the "a", "b", and "c" annotations in the table). (3) "Intercept" is the value of the dependent variable when all independent variables are zero. (4) "Model" represents the variation explained by the specific CNN architecture trained. (5) "Size", "Overlap", and "Model" are the main factors. "Size * Overlap", "Model * Size", and "Model * Overlap" represent their two-way interaction effects. "Model * Size * Overlap" represents their three-way interaction. (6) In terms of statistical significance, a p-value < 0.05 is considered significant, while a p-value > 0.05 indicates that there is not enough evidence of an influence on the dependent variables beyond the main effects of the individual factors. The fixed factors and the interactions with a statistically significant effect on the performance are represented in bold.
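The between-subjects table above was produced with SPSS; a roughly equivalent three-way factorial ANOVA could be sketched in Python with statsmodels as follows (the CSV file and its column names are hypothetical):

```python
import pandas as pd
import statsmodels.formula.api as smf
from statsmodels.stats.anova import anova_lm

# Hypothetical long-format results: one row per training run, with the
# fixed factors and the measured test-set metric.
df = pd.read_csv("run_metrics.csv")   # assumed columns: model, size, overlap, f1

# Full factorial model with sum-to-zero contrasts, so that the Type III
# sums of squares are well defined (as in a between-subjects effects table).
lm = smf.ols(
    "f1 ~ C(model, Sum) * C(size, Sum) * C(overlap, Sum)", data=df
).fit()
print(anova_lm(lm, typ=3))            # main effects + all interactions
```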