Article

Machine Learning-Based Temperature Forecasting for Sustainable Climate Change Adaptation and Mitigation

Vocational School of Technical Sciences, Mus Alparslan University, 49100 Muş, Turkey
Sustainability 2025, 17(5), 1812; https://doi.org/10.3390/su17051812
Submission received: 1 January 2025 / Revised: 23 January 2025 / Accepted: 6 February 2025 / Published: 21 February 2025

Abstract

In this study, temperature estimation was achieved by utilizing artificial neural network (ANN) and machine learning models (linear model, support vector machine, K-nearest neighbor, random forest) to assist with sustainable environmental planning and climate change adaptation solutions. The research analyzed monthly humidity, wind speed, precipitation, and temperature data for the Istanbul province from 1950 to 2023. Estimates with 96% accuracy were achieved with the ANN model, and amongst the machine learning models, the random forest (RF) model demonstrated the highest performance. The generalization capability of the models was enhanced by the k-fold cross-validation method. The analysis found the input variables (humidity, wind, precipitation) to be negatively associated with temperature. The results show that artificial intelligence and machine learning techniques are useful instruments for sustainable climate monitoring and temperature estimation. This study supports sustainability targets by providing reliable methodologies for climate change evaluation, sustainable energy design, and agricultural adaptation plans. The methodology is transferable to other regional climate analyses and has the potential to underpin evidence-based decision making for sustainable development and climate resilience.

1. Introduction

Climate change has become one of the most pressing issues of recent times. Temperature rise, sea level rise, excessive melting of glaciers, acidification of the oceans, loss of biodiversity, degradation of forests, drought, and extreme weather events are all among its consequences. Different methods have been developed to examine this problem, which is the subject of research in many sciences. Artificial intelligence methods now serve many disciplines and make many tasks much easier. Artificial neural networks and machine learning methods are used in many fields and are especially successful in the prediction of hydrological and meteorological variables. Moreover, meteorological forecasts that are close to reality form the basis of ecological and hydrological research [1]. Temperature, a meteorological variable, has also increased globally compared to the past. The rapid melting of glaciers not only changes the salt concentration of water but also threatens the diversity of life that lives there. Melting also raises the water level in the seas and oceans, posing a serious threat to people living in coastal areas. Each country, region, and basin is affected differently. Drought and desertification will be inevitable if the human-induced temperature increase is not addressed and adequate measures are not taken. Human beings, who have intervened in nature both gradually and rapidly, have caused our world to warm quickly by replacing forested areas with concrete settlements and increasing carbon dioxide emissions. Gasses such as CO2, CFCs, O3, N2O, and CH4, which accumulate in the atmospheric layer, keep our world at a certain temperature. While CO2, one of these gasses, was at 280 ppm (parts per million) before industrialization, this value approached 417 ppm in 2021 [2]. In addition, the excessive increase in these gasses due to industry and other factors has made the temperature increase on Earth unavoidable.
The industrial revolution, which started in the 1760s, spread to almost all countries in the 1800s and 1900s and significantly accelerated global warming. Temperature forecasts play an important role in many sectors, especially in agriculture, transportation, energy, and aviation. There is also a relationship between the temperature parameter and precipitation, humidity, and many other meteorological variables. Temperature is the determining factor affecting the agricultural productivity of a region [3]. In fact, temperature is a parameter that directly or indirectly affects almost every living thing and sector. In particular, it has a great impact on human life; indeed, people may face health problems at inappropriate air temperatures [4]. On the other hand, realistic forecasts of air temperatures are crucial for energy policy, human activities, and business development planning [5]. In addition, results close to reality are very important for the cultivation of plants. Since each plant grows within a certain temperature range, temperature forecasting in agricultural areas is important for crop cultivation and yield. Although instantaneous weather forecasts are easy, it is very difficult to make long-term forecasts. Therefore, modeling should be performed with minimum error while forecasting, and the success of the model is very important here. The best model should be preferred for the best result. Unfortunately, warming continues gradually, and the most important step is to minimize it. Numerous studies on this subject are published every year. Although each prediction method can produce different results, the chaotic structure of natural events makes the outcomes very difficult to use. Predicting random events in nature is very difficult with physics-based deterministic methods, because the internal limitations of these methods prevent the model from being successful.
Therefore, black-box methods are used to predict uncertain events.
In the literature, the use of artificial neural networks and machine learning methods for temperature prediction is becoming widespread, but studies comparing different methods are limited. Most studies focus on a specific region or time period, creating a need to present a general methodology and demonstrate its applicability to different regions. In particular, comprehensive temperature prediction studies that include meteorological variables such as humidity, wind, and precipitation for Istanbul are lacking in the literature. This study compares the temperature prediction performance of artificial neural networks (ANNs) and different machine learning models (linear model, support vector machine, K-nearest neighbor, random forest) using monthly meteorological data for the province of Istanbul between 1950 and 2023. The generalization ability of the models is evaluated with the k-fold cross-validation technique, and a comparative performance analysis of different algorithms is presented for the literature. The study aims to fill the gap in the literature by increasing the accuracy of predictions made with meteorological data in a period when the effects of climate change are intensifying. This study provides a comprehensive contribution to the literature by comparing the performance of both ANN and different machine learning methods on the same dataset and provides a reliable methodology that can be used in regional climate analyses.
The primary objective of this study is to develop a highly accurate model for temperature forecasting in Istanbul by leveraging artificial neural networks (ANNs) and a variety of machine learning models. This study stands out for its focus on integrating diverse meteorological variables—namely, humidity, wind, and precipitation—into temperature prediction models, which not only enhances the precision of regional analyses but also captures the intricate interdependencies among these factors. At a time when the impacts of climate change and rising global temperatures are becoming increasingly pronounced, this work provides a timely and significant contribution by addressing the challenges associated with accurate long-term meteorological forecasting.
What sets this study apart is its comprehensive comparative evaluation of multiple machine learning techniques alongside ANN, providing a robust assessment of their relative strengths and weaknesses in climate modeling. By utilizing an extensive dataset spanning several decades and employing advanced validation techniques, the study achieves reliable and generalizable insights. Furthermore, its novel methodological framework demonstrates the practical applicability of artificial intelligence in climate science, serving as a foundational reference for future research in similar contexts. Ultimately, this research not only enriches the growing body of knowledge on AI-driven climate forecasting but also underscores the critical role of innovative data-driven approaches in mitigating the adverse effects of climate variability.

2. Literature Review

ANN forecasting models can be used in many studies such as temperature forecasts, daily precipitation forecasts, wind speed forecasts, rainfall–runoff modeling, evaporation forecasts, erroneous time series forecasting, and river flow forecasts. In the literature review, estimation studies similar to our study are examined and given below.
Ashibek et al. (2012) tested the usability of a feedforward backpropagation neural network (FFNN) for prediction using Canada’s daily maximum weather data between 1999 and 2009. According to the study, the ANN with the tan-sigmoid transfer function is quite successful in predicting the weather [6]. Almazroui et al. (2013) examined the impact of altitude, population, and marine temperature on the air temperature trend using data from 24 cities in Saudi Arabia. They observed that the national temperature trend was 0.60 or 0.51 °C/decade. Additionally, they found no significant correlation between the increase in temperature and either population growth or altitude variation [7]. Bendre et al. (2017) analyzed the minimum and maximum temperature, precipitation, and humidity of Maharashtra at a 95% confidence interval. They stated that iterative linear regression and iterative polynomial regression were used in the prediction, and iterative polynomial regression gave more successful results than the other regression [8]. Musashi et al. (2018) analyzed spatial temperature data in the Malang region using natural neighborhood interpolation and inverse distance-weighted interpolation methods. According to the statistical analyses made between the two methods, they stated that the inverse distance-weighted interpolation method gave more successful results [9]. Ayeong et al. (2018) analyzed precipitation and instantaneous temperature data measured on 23 September, 24 December, 31 March, and 21 June in South Korea using IDW, Kriging, and Co-Kriging methods. Accordingly, they found that the IDW method was more successful in predicting precipitation, and Kriging methods were more successful in predicting temperature data [10]. Li et al. (2019) made a half-hour temperature forecast using the LSTM method.
In the same study, they compared the LSTM method with random forest (RF) and deep neural network (DNN) methods [11]. Using the Genetic Algorithm (GA) method, Tran et al. (2020) optimized the hyperparameters of multilayer LSTM, RNN, and ANN models. Hybrid models were used to predict the maximum air temperature at the Cheongju station in South Korea. Accordingly, they stated that the performance of GA-LSTM is higher than that of the other models in predicting air temperature in the long term [12]. Paul and Roy (2020) compared polynomial regression, linear regression, and support vector regression to determine the temperature of Bangladesh in 100 years and stated that the most appropriate model for the study was third-degree polynomial regression [13]. Fang et al. (2021) also used the LSTM method for multi-regional temperature forecasting. In the study, where different input variables were used, multiple-step-ahead forecasting was targeted, and they stated that the LSTM model gave successful results in short-term forecasts [14]. Şevgin and Ali (2024) calculated the monthly average temperature values and temperature increase rates of 28 provinces in the east, west, north, and south of Turkey between 1950 and 2022 and made a comparison between the provinces. Using SSA and LSPF methods in their study, the authors identified Erzincan as the province with the highest rate of temperature increase and Antakya as the province with the lowest rate [15].
Price et al. introduced a new machine learning-based weather forecasting model called GenCast in their study. As a model trained from atmospheric data, GenCast provides 15-day global weather forecasts faster and with higher accuracy, and they stated that it predicts extreme weather events and wind energy production better than traditional methods [16]. Wang et al. used a data-driven approach for the combustion optimization of coal-fired boilers under variable load conditions. The developed CS-CNN-based prediction model considered multiple objectives such as boiler efficiency, NOx emission, and wall temperature, while the decision-making agent created with the TD3 algorithm optimized the boiler performance using this model. The simulation results provided a 0.411% increase in thermal efficiency, a 17.701 mg/m3 decrease in NOx emissions, and the maintenance of safe wall temperature [17]. In their work, Yuan et al. developed a transformer-based model, TianXing, which is augmented with physical orientation, and presented an effective and efficient approach to global weather forecasting. TianXing consumes less GPU resources compared to previous models, makes only a small compromise in accuracy, and increases the forecasting ability thanks to the mechanisms developed with physical insights. The model surpasses the performance of previous data-driven models and operational systems in the fields such as Z500 and T850, and is especially notable for its success in forecasting extreme weather events [18]. Zhong et al. focused on the difficulties in predicting subseasonal variations in East Asian winter temperatures and developed the Subseasonal Predictability Mode Analysis (S-PMA) method, which combines an S-EOF analysis with the PMA approach. In the study, three basic modes representing East Asian winter temperatures were identified and significant forecasting success was achieved by constructing physically based empirical forecast models for these modes. 
This method offers the potential to increase seasonal forecasting skills in East Asia as well as the entire Asian continent [19]. In their study, Bochenek et al. examined 500 scientific articles on machine learning methods published since 2018 and analyzed research trends in the fields of climate and numerical weather prediction. As a result of the research, the most studied meteorological fields, methods, and countries were determined; it was predicted that machine learning will play an important role in weather prediction in the future [20]. Rakhee et al. developed a Genetic Algorithm-Optimized Artificial Neural Network (GA-ANN) model for the seasonal prediction of air temperature, which is an important factor in agriculture. In the analysis of weekly weather data from Hyderabad and New Delhi, the GA-ANN model achieved higher accuracy rates (R2 = 0.937, 0.910) compared to MLR and classical ANN approaches. The study shows that the GA-ANN model is a reliable and effective method for seasonal temperature prediction [21]. In their study, Pala et al. compared various R-based time series models for the prediction of long-term meteorological variables (e.g., atmospheric pressure, wind speed, and surface evaporation) in sensitive regions of Turkey, such as the Van Lake Basin. The results revealed that the AUTO.ARIMA model performed better than other models, contributing to the development of more reliable long-term forecasts that can be used in regional resource management and climate-related decision-making processes [22]. In their study, Pala et al. proposed a new multi-hybrid model method that combines statistical and deep learning models for the more accurate prediction of natural gas consumption. In the analysis performed on the US natural gas vehicle fuel (NG-VFC) and industrial consumption (NG-IC) datasets, the best MAPE values were obtained as 5.40% and 3.19%, respectively. 
The study reveals that the proposed multi-hybrid model outperforms most of the existing methods with high prediction accuracy [23]. This study differs from other studies in the literature and presents innovations in several important aspects. First, it provides a comprehensive evaluation in terms of methodology by comparatively examining both artificial neural networks (ANNs) and other machine learning methods (linear model, support vector machines, K-nearest neighbor, random forest) in temperature prediction for the Istanbul province. While most studies focus on only one method or region, this research adopts a broader approach to evaluate the performance of different models and analyze their effectiveness on Istanbul’s climate data.
In addition, the inclusion of meteorological variables such as humidity, wind, and precipitation in temperature prediction stands out as an innovation that increases the accuracy of the models. By examining the effect of these factors in detail, the generalization capabilities of the models are strengthened. The study not only aims to provide high accuracy in temperature prediction but also serves as a guide that highlights the potential applications of such artificial intelligence and machine learning techniques in areas such as climate change, energy planning, and agriculture.

3. Material and Methods

The analysis in this study is based on two different approaches: the first is the use of deep learning-based artificial neural networks, and the second is the use of machine learning models for forecasting.

3.1. Artificial Neural Network

In this study, the artificial neural network (ANN) method is used in the first stage for temperature prediction with humidity, wind, and precipitation data. ANNs are relatively new computational tools that have found widespread use in solving many complex real-world problems. The attractiveness of ANNs is mainly due to their remarkable information processing properties related to non-linearity, high parallelism, fault and noise tolerance, learning, and generalization capabilities [24]. ANN is a mathematical model that attempts to simulate the structure and functions of biological neural networks. The artificial neuron is the basic building block of every artificial neural network [25]. The similarities in design and functionality of biological and artificial neurons can be seen in Figure 1.
Here, the left side of the figure represents a biological neuron with its soma, dendrites, and axon, while the right side represents an artificial neuron with its inputs, weights, bias, transfer function, and outputs. An artificial neuron is a simple mathematical model (function) governed by three simple rule sets: multiplication, addition, and activation. At the input of the artificial neuron, the inputs are weighted, meaning each input value is multiplied by a separate weight. In the middle part of the artificial neuron, an aggregation function sums all weighted inputs and the bias. At the output of the artificial neuron, this sum passes through the activation function, also called the transfer function. The mathematical equation of the artificial neuron model is given in Equation (1).
y(k) = F\left( \sum_{i=0}^{m} w_i(k) \cdot x_i(k) + b \right)    (1)
Variables given in the equation:
  • x_i(k): input value at time sequence k, where i goes from 0 to m;
  • w_i(k): weight value at time sequence k, where i goes from 0 to m;
  • b: bias;
  • F: transfer function;
  • y(k): output value at time sequence k.
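As a concrete illustration, Equation (1) can be evaluated for a single neuron in a few lines of Python. This is a minimal sketch: the weights, inputs, and the choice of a tanh transfer function are illustrative assumptions, not values taken from the study.

```python
import math

def artificial_neuron(inputs, weights, bias, transfer=math.tanh):
    """Evaluate y = F(sum_i w_i * x_i + b) for one artificial neuron."""
    # Multiplication and addition rule sets: weighted sum of inputs plus bias
    weighted_sum = sum(w * x for w, x in zip(weights, inputs)) + bias
    # Activation rule set: pass the sum through the transfer function F
    return transfer(weighted_sum)

# Illustrative values only: three inputs (e.g., humidity, wind, precipitation)
y = artificial_neuron([0.7, 0.2, 0.1], [0.5, -0.3, 0.8], bias=0.1)
```

The transfer function is passed as a parameter precisely because, as noted above, it is the main design choice of the neuron model.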
As seen in the artificial neuron model of Equation (1), the main unknown variable of the model is the transfer function. The transfer function defines the characteristics of the artificial neuron and can be any mathematical function. It is chosen based on the problem that the artificial neuron (artificial neural network) needs to solve. Artificial neural networks are composed of multiple artificial neurons. A single artificial neuron is usually insufficient to solve real-life problems, but artificial neural networks can solve complex problems by operating in a non-linear, distributed, parallel, and localized structure. The working principle of these networks is defined by topology, which determines how neurons are connected. Topology refers to the connection pattern of the neural network and is divided into two main classes: feedforward topology and recurrent topology. Feedforward topology is an acyclic structure in which information flows only from input to output. This type of network is generally used in simple classification and regression problems. Recurrent topology is a structure with a feedback mechanism in which information flows both from input to output and back. This topology is especially suitable for problems based on historical data, such as time series and language modeling [25]. In this study, a feedforward topology model is used. Figure 2 shows the feedforward and recurrent topology models.
Artificial neural networks (ANNs) have some disadvantages. First, these models usually require large amounts of data and may have difficulty generalizing on small datasets. In addition, the selection of hyperparameters and the configuration of the network are quite complex, which can directly affect the performance of the model. The computational costs are high and require powerful hardware, especially for training large and deep networks. ANNs are often considered “black boxes”, which makes it difficult to understand and explain the decision-making process of the model. Finally, due to the risk of overfitting, the model is likely to overfit the training data, which can lead to poor performance on new data.
In this study, ANN was simulated in the MATLAB R2012a environment. The ANN model used in the Matlab environment is given in Figure 3.
In the ANN model used in the Matlab environment, a feedforward network with 3 inputs, a single hidden layer of 20 neurons, and 1 output layer was used. The Levenberg–Marquardt algorithm (trainlm) was used in the model. This algorithm is a frequently preferred optimization method in ANN training and is known for its fast convergence, especially when working with small- and medium-sized datasets. It combines the strengths of the Gauss–Newton method with gradient descent to minimize the error function and reduces the computational cost by approximating the second-derivative (Hessian) matrix in the process. An adaptive approach to parameter updates ensures that the algorithm converges both quickly and stably. In addition, the ANN model utilizes the mean squared error (MSE) and R-squared metrics to evaluate the performance of the network. MSE is defined as the average of the squared deviations between the predicted values of a model and the actual target values. The goal is to optimize the prediction performance of the model by minimizing this value. During training, optimization methods such as the Levenberg–Marquardt algorithm aim to minimize the MSE value [26].
In this study, the dataset of monthly humidity, wind, precipitation, and temperature data for the Istanbul province between 1950 and 2023 is given in Figure 4. The dataset was supplied by the Muş Province Meteorology Directorate and was collected over time with devices calibrated by an official state institution, making it robust and reliable. Nevertheless, before use, the dataset was checked for missing values, and records containing such values were excluded from the study.

3.2. Machine Learning Models

In the second phase of this study, machine learning models were used [27]. Let us first briefly explain these models:
The linear model (LM) used in R is a modeling technique based on the linear regression method. This model is used to explain and predict the linear relationship between a dependent variable (response/output) and independent variables (predictors/inputs). The lm() function in R is the main function used to construct and analyze this linear model. LM is based on estimating the coefficients by Ordinary Least Squares (OLS). The objective is to minimize the sum of squares of the differences (residuals) between the predicted values and the actual values. This model can be used to estimate a continuous dependent variable, to examine the effects of factor variables on the dependent variable, and to assess the statistical significance of relationships. It is computationally fast and simple, can give effective results even on small datasets, and its results can be easily interpreted. However, its performance decreases if there is no linear relationship between the independent variables and the dependent variable, and multicollinearity in the dataset may reduce the reliability of the model.
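To make the OLS idea concrete, the following is a minimal pure-Python sketch of simple (one-predictor) least squares. The toy data are illustrative, not from the study, and the actual analysis was performed with R's lm() rather than this code.

```python
def ols_simple(xs, ys):
    """Ordinary least squares for y = a + b*x: minimizes the sum of squared residuals."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    # Slope: covariance(x, y) divided by variance(x)
    b = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) / \
        sum((x - mean_x) ** 2 for x in xs)
    a = mean_y - b * mean_x  # intercept passes through the means
    return a, b

# Perfectly linear toy data following y = 2 + 3x
a, b = ols_simple([1, 2, 3, 4], [5, 8, 11, 14])
```

In R, the corresponding multi-predictor fit would be written along the lines of lm(temperature ~ humidity + wind + precipitation, data = df), where the column names are hypothetical.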
The support vector machine (SVM) in R is a powerful and flexible machine learning algorithm for classification and regression problems. SVM aims to classify or predict data using a hyperplane. One of the most widely used packages for SVM applications in R is e1071. As a flexible and powerful tool, SVM in R offers a wide range of applications in classification and regression problems. It can provide high performance, especially with good data preprocessing and parameter optimization [28,29].
The K-nearest neighbor (KNN) algorithm in R is used in both classification and regression problems as a supervised machine learning method. KNN does not build any statistical model to learn the data; instead, it stores the training data and makes decisions based on it when making predictions. This is why it is also called a lazy learner. The KNN algorithm can be used especially when there is not much information about the structure of the dataset. However, it should be applied with caution, as performance problems can occur with large datasets [30].
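The lazy-learner behavior described above can be sketched in a few lines of Python (an illustrative toy example, not the R implementation used in the study): the training data are merely stored, and a prediction averages the targets of the k nearest stored points.

```python
def knn_regress(train_X, train_y, query, k=3):
    """Predict by averaging the targets of the k nearest training points (Euclidean distance)."""
    def dist(p, q):
        return sum((a - b) ** 2 for a, b in zip(p, q)) ** 0.5
    # Lazy learning: no model is fit; stored training points are ranked by distance at query time
    neighbors = sorted(zip(train_X, train_y), key=lambda pair: dist(pair[0], query))[:k]
    return sum(y for _, y in neighbors) / k

# Toy one-feature data (illustrative values only)
pred = knn_regress([[1], [2], [3], [10]], [1.0, 2.0, 3.0, 10.0], [2.5], k=2)
```

The sort over the full training set at every query is exactly why, as noted above, KNN can run into performance problems on large datasets.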
Random forest (RF) is a powerful and flexible machine learning algorithm widely used in the R environment. It is used for classification and regression problems. Random forest consists of an ensemble of decision trees and applies the ensemble (learning together) method. RF has a low risk of overfitting. It works well with large datasets and high-dimensional data. It can work with incomplete data. However, it requires more computational power and memory. Also, the results are often less interpretable (e.g., not as intuitive as decision trees) [31].
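As a rough sketch of the ensemble (bagging) idea behind RF, the following pure-Python toy fits one-split "stump" trees on bootstrap resamples and averages their predictions. This is an illustrative simplification under stated assumptions, not the randomForest implementation used in the study (full RF uses deep trees and random feature subsets).

```python
import random

def fit_stump(X, y):
    """Fit a one-split regression tree: choose the threshold minimizing squared error."""
    best = None
    for f in range(len(X[0])):
        vals = sorted({row[f] for row in X})
        for lo, hi in zip(vals, vals[1:]):  # candidate splits at midpoints
            t = (lo + hi) / 2
            left = [yi for row, yi in zip(X, y) if row[f] <= t]
            right = [yi for row, yi in zip(X, y) if row[f] > t]
            ml, mr = sum(left) / len(left), sum(right) / len(right)
            sse = sum((v - ml) ** 2 for v in left) + sum((v - mr) ** 2 for v in right)
            if best is None or sse < best[0]:
                best = (sse, f, t, ml, mr)
    if best is None:  # all feature values identical: fall back to the mean
        m = sum(y) / len(y)
        return lambda row: m
    _, f, t, ml, mr = best
    return lambda row: ml if row[f] <= t else mr

def fit_forest(X, y, n_trees=25, seed=0):
    """Bagging: each stump sees a bootstrap resample; predictions are averaged."""
    rng = random.Random(seed)
    trees = []
    for _ in range(n_trees):
        idx = [rng.randrange(len(X)) for _ in range(len(X))]
        trees.append(fit_stump([X[i] for i in idx], [y[i] for i in idx]))
    return lambda row: sum(tree(row) for tree in trees) / len(trees)

# Toy one-feature data with two clusters (illustrative, not from the study)
forest = fit_forest([[1], [2], [8], [9]], [1.0, 1.0, 9.0, 9.0])
```

Averaging many resampled trees is what gives RF its low overfitting risk relative to a single decision tree.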

3.3. Evaluation Metrics

Mean Squared Error (MSE): MSE measures the mean squared error of the predictions and is used to understand how close the model’s predicted values are to the true values. It penalizes large errors more heavily because the errors are squared [32].
\mathrm{MSE} = \frac{1}{N} \sum_{t=1}^{N} \left( y_t - \hat{y}_t \right)^2
where y_t is the actual value, \hat{y}_t is the predicted value, and N is the number of samples.
Mean Absolute Error (MAE): MAE averages the absolute values of the differences between predicted and actual values. It is a simpler and more interpretable error metric, but does not penalize large errors as severely as MSE [33].
\mathrm{MAE} = \frac{1}{N} \sum_{t=1}^{N} \left| y_t - \hat{y}_t \right|
Root Mean Squared Error (RMSE): RMSE is the square root of MSE and expresses the prediction error of the model in original units. It is sensitive to high values as it penalizes errors according to their magnitude [34].
\mathrm{RMSE} = \sqrt{ \frac{1}{N} \sum_{t=1}^{N} \left( y_t - \hat{y}_t \right)^2 }
R-squared: R-squared indicates how much the model explains the variance of the dependent variable. Values close to 1 indicate a good fit and values close to 0 indicate a poor fit [29].
R^2 = 1 - \frac{ \sum_{t=1}^{N} \left( y_t - \hat{y}_t \right)^2 }{ \sum_{t=1}^{N} \left( y_t - \bar{y} \right)^2 }
Here, \bar{y} represents the average of the actual values.
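All four metrics can be computed directly from their definitions. The following Python sketch (toy values, not results from the study) mirrors the equations above:

```python
def regression_metrics(actual, predicted):
    """Compute MSE, MAE, RMSE, and R-squared from actual and predicted values."""
    n = len(actual)
    residuals = [a - p for a, p in zip(actual, predicted)]
    mse = sum(r ** 2 for r in residuals) / n          # mean of squared errors
    mae = sum(abs(r) for r in residuals) / n          # mean of absolute errors
    rmse = mse ** 0.5                                 # back in the original units
    mean_actual = sum(actual) / n
    ss_tot = sum((a - mean_actual) ** 2 for a in actual)
    r2 = 1 - sum(r ** 2 for r in residuals) / ss_tot  # explained variance fraction
    return {"MSE": mse, "MAE": mae, "RMSE": rmse, "R2": r2}

# Toy example: three actual vs. predicted values (illustrative only)
m = regression_metrics([3.0, 5.0, 7.0], [2.0, 5.0, 8.0])
```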

4. Findings

4.1. Prediction with ANN

In this study, a feedforward artificial neural network (ANN) is used to predict temperature based on humidity, wind speed, and precipitation inputs. In the ANN model, a network with a single hidden layer of 20 neurons was selected. The data were normalized with the mapminmax function and scaled to the range [0, 1]. The artificial neural network was created with feedforwardnet, and the Levenberg–Marquardt algorithm (trainlm) was used for training. During the training of the model, the training process was optimized by adjusting parameters such as the number of epochs, learning rate, and target error. These parameters are as follows: number of epochs, 1000; learning rate, 0.001; and target error, 1 × 10−6. After training, the actual and predicted temperature values were visualized and the accuracy was evaluated by a regression analysis. Figure 5 shows the regression plot.
In the ANN regression graph, the relationship between the target and predicted values of the neural network was calculated as R2 = 0.9625, meaning the model predicted the target values with high accuracy. The predictions were concentrated around the ideal Y = T line, exhibiting a low error rate and strong generalization ability. However, some points deviate from the line, indicating that the model has partial difficulty adapting to certain outliers. Nevertheless, the close-to-96% linear relationship between predicted and target values shows that the model accurately captures the general trend of the data during the training process and achieves high performance without overfitting. These results show that the model has a successful prediction capacity and can be further optimized with minor improvements. The ANN training performance result graph is given in Figure 6.
In the ANN training performance graph, the performance of the neural network on the training, validation, and test datasets is evaluated through the mean squared error (MSE). While the error value is high in the first epoch, it decreases rapidly in the following epochs and reaches a stable level at approximately the sixth epoch. The model weights were saved at the point where the validation error was lowest, thus optimizing the generalization success. The fact that the training, validation, and test error values are close to each other shows that the model does not overfit and successfully reaches the target error level. These results show that the model undergoes a fast and effective learning process and that its generalization ability is satisfactory. The temperature prediction result graph of the ANN training is given in Figure 7.
The graph evaluates the performance of the model by comparing the neural network’s predicted and actual temperature values. The predictions are generally in fairly close agreement with the actual values, indicating that the model accurately captures the underlying trends and has a strong generalization capability. However, at some data points, the forecasts diverge from the true values, suggesting that the model may err in certain instances. Despite the intense fluctuations in the data, the general parallelism between predictions and actual values shows that the model offers successful prediction performance. Table 1 shows the actual and predicted temperature values based on humidity, wind, and precipitation values.
As the table shows, the temperatures were predicted with high accuracy: the differences between actual and predicted values are small throughout, indicating that the model performs very well on these samples.
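The ANN described above was built in MATLAB; the same workflow can be sketched in Python with scikit-learn's MLPRegressor. This is an illustrative sketch only: the synthetic data below merely stands in for the monthly Istanbul series, and the network size is an assumption, not the paper's architecture.

```python
# Illustrative sketch of the ANN regression workflow (not the paper's exact
# MATLAB model). Synthetic data stands in for the monthly humidity, wind,
# precipitation, and temperature records.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.metrics import r2_score

rng = np.random.default_rng(0)
n = 650  # roughly the number of monthly records (1950-2023)
humidity = rng.uniform(55, 80, n)
wind = rng.uniform(2, 6, n)
precip = rng.uniform(0, 180, n)
# Synthetic target with the negative input-output associations reported above
temp = 40 - 0.3 * humidity - 1.0 * wind - 0.02 * precip + rng.normal(0, 1, n)

X = np.column_stack([humidity, wind, precip])
X_train, X_test, y_train, y_test = train_test_split(X, temp, random_state=0)

# early_stopping mirrors "saving weights at the lowest validation error"
model = make_pipeline(
    StandardScaler(),
    MLPRegressor(hidden_layer_sizes=(10,), early_stopping=True,
                 max_iter=2000, random_state=0),
)
model.fit(X_train, y_train)
print(f"test R^2 = {r2_score(y_test, model.predict(X_test)):.3f}")
```

On data of this kind, the held-out R2 lands near the high values reported for the ANN, though the exact figure depends on the noise level assumed.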

4.2. Prediction with Machine Learning Models

In the second stage of the analysis, four machine learning models were used in the R-Studio environment: the linear model (LM), support vector machine (SVM), K-nearest neighbor (KNN), and random forest (RF). Before the estimation step, the effect of each input variable on the output was examined in order to understand the relationship between the inputs and the model output. These relationships are shown in Figure 8.
Humidity and temperature relationship: the humidity panel shows the effect of the humidity variable on temperature. A negative relationship is clearly visible; as humidity increases, temperature decreases. The scatter points confirm this trend and mostly follow a linear pattern.
Wind and temperature relationship: in the wind panel, which examines the relationship between wind speed and temperature, a negative trend is again apparent; as wind speed increases, temperature decreases. Although the points are spread over a wider area, the inverse relationship still holds overall.
Rainfall and temperature relationship: the rainfall panel examines the relationship between rainfall amount and temperature. Here, too, a negative correlation is seen; as rainfall increases, temperature decreases. The points exhibit a more complex, non-linear structure.
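The sign of each input-output relationship described above can be checked numerically with Pearson correlation coefficients. A minimal sketch, again using synthetic data with the same negative associations in place of the actual Istanbul series:

```python
# Checking the sign of each input-output relationship via Pearson correlation.
# The synthetic data is an assumption mimicking the trends shown in Figure 8.
import numpy as np

rng = np.random.default_rng(1)
n = 650
humidity = rng.uniform(55, 80, n)
wind = rng.uniform(2, 6, n)
precip = rng.uniform(0, 180, n)
temp = 40 - 0.3 * humidity - 1.0 * wind - 0.02 * precip + rng.normal(0, 1, n)

for name, x in [("humidity", humidity), ("wind", wind), ("precipitation", precip)]:
    r = np.corrcoef(x, temp)[0, 1]  # Pearson r between input and temperature
    print(f"corr({name}, temperature) = {r:+.2f}")
```

All three coefficients come out negative here by construction, matching the inverse relationships visible in the scatter plots.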
When applying the machine learning models, cross-validation was used to make the prediction results more accurate and realistic. For the dataset of approximately 650 records, k-fold values of 130, 65, 50, 25, 10, and 5 were used. As Table 2 shows, the prediction performance of the four machine learning models (LM, SVM, KNN, and RF) was evaluated for each k-fold value, with MAE, RMSE, and R-squared as the evaluation metrics.
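The three evaluation metrics can be computed directly from actual and forecast values. A minimal sketch of the standard formulas, applied to the first five actual/forecast temperature pairs of Table 1 (the paper computed its metrics in R; this is just the textbook definitions in Python):

```python
# MAE, RMSE, and R-squared computed from the first five actual/forecast
# temperature pairs in Table 1.
import numpy as np

actual = np.array([7.60, 4.50, 7.90, 10.90, 16.40])
forecast = np.array([7.61, 4.52, 7.92, 10.90, 16.39])

mae = np.mean(np.abs(actual - forecast))          # mean absolute error
rmse = np.sqrt(np.mean((actual - forecast) ** 2))  # root mean squared error
ss_res = np.sum((actual - forecast) ** 2)          # residual sum of squares
ss_tot = np.sum((actual - actual.mean()) ** 2)     # total sum of squares
r2 = 1 - ss_res / ss_tot                           # coefficient of determination

print(f"MAE = {mae:.4f}, RMSE = {rmse:.4f}, R2 = {r2:.6f}")
```

For these five near-perfect predictions the errors are tiny (MAE = 0.012, RMSE about 0.014) and R2 is essentially 1.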
Effect of the number of k-folds: as the number of folds increases (e.g., 130 folds), the models exhibit higher R-squared values, indicating that the training and test data are better generalized with more folds. As the number of folds decreases (e.g., 5 folds), the prediction performance is observed to decrease, because with fewer folds the model overfits the training data and its overall performance drops.
LM exhibits poor performance at low k-fold values; at high k-fold values (130 folds), however, it achieves a very good R-squared (0.998) and low MAE/RMSE. SVM shows fairly stable, good performance at low and medium k-fold values; in particular, its MAE and RMSE are lower than LM's. KNN's performance varies with the number of folds: it gives quite good results at high fold counts, while its performance decreases slightly at low fold counts.
The RF model generally exhibits the most balanced performance. It achieved high accuracy at high fold counts (R-squared: 0.997) and performed well even at low fold counts. R-squared values generally increase with the number of folds, showing that the models explain the data better. The RF and LM models provide the highest R-squared values, while at low fold counts SVM and RF deliver a more stable R-squared than the other models.
The RF model has the lowest MAE and RMSE values, meaning it makes the smallest errors in temperature prediction. The LM and SVM models have slightly higher errors at low fold counts. KNN generally performed better at lower fold counts but is not as effective as RF. Overall, RF is the best choice where high accuracy is required (e.g., critical forecasting applications); for a faster and simpler solution, LM or SVM with a high fold count can also be preferred. RF outperformed the other models with both low error (MAE/RMSE) and high explanatory power (R-squared). However, model performance is quite sensitive to the number of folds: generalization success increases with more folds and decreases with fewer. The graphs of the metric values as a function of the k-fold parameter are given in Figure 9.
Figure 9 shows that as the number of folds increases (for example, 130 folds), the performance metrics of the models improve, most notably the R-squared values. Particularly for the RF and LM models, very good results were obtained at high fold counts (130 and 65). The RF and KNN models remain more stable as the fold count changes. As the fold count grows, so does the generalization success of the models, indicating better generalization rather than overfitting. The RF model's MAE and RMSE generally remain lower than those of the other models across fold counts. Overall, Figure 9 confirms that RF is the most successful model for temperature prediction, offering both low error rates (MAE and RMSE) and high explanatory power (R-squared), although performance remains sensitive to the fold count used.
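The cross-validation comparison above can be sketched end to end. The paper used R-Studio (with the caret-style workflow); the sketch below uses scikit-learn instead, synthetic data stands in for the Istanbul series, and only two of the six fold counts are shown for brevity. All names and data here are illustrative assumptions.

```python
# Sketch of the k-fold comparison of LM, SVM, KNN, and RF models.
# Synthetic data; the paper also reports k = 25, 50, 65, and 130.
import numpy as np
from sklearn.model_selection import KFold, cross_validate
from sklearn.linear_model import LinearRegression
from sklearn.svm import SVR
from sklearn.neighbors import KNeighborsRegressor
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(2)
n = 650
# columns: humidity, wind, precipitation
X = rng.uniform([55, 2, 0], [80, 6, 180], size=(n, 3))
y = 40 - 0.3 * X[:, 0] - 1.0 * X[:, 1] - 0.02 * X[:, 2] + rng.normal(0, 1, n)

models = {
    "LM": LinearRegression(),
    "SVM": SVR(),
    "KNN": KNeighborsRegressor(),
    "RF": RandomForestRegressor(random_state=0),
}
scoring = {"MAE": "neg_mean_absolute_error",
           "RMSE": "neg_root_mean_squared_error",
           "R2": "r2"}

for k in (5, 10):
    cv = KFold(n_splits=k, shuffle=True, random_state=0)
    for name, model in models.items():
        res = cross_validate(model, X, y, cv=cv, scoring=scoring)
        print(f"k={k:3d} {name:4s} "
              f"MAE={-res['test_MAE'].mean():.2f} "
              f"RMSE={-res['test_RMSE'].mean():.2f} "
              f"R2={res['test_R2'].mean():.3f}")
```

Note that scikit-learn reports error metrics as negated scores (higher is better), so the signs are flipped before printing, and that KNN and SVM are sensitive to feature scaling, which a fuller pipeline would add.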

5. Conclusions

This study introduces a framework for temperature prediction in Istanbul by utilizing artificial neural networks (ANNs) alongside several machine learning models, including linear models, support vector machines, K-nearest neighbors, and random forests. Unlike much existing research, this paper combines long-term meteorological data (1950–2023) with several important drivers, namely humidity, wind, and precipitation, to improve prediction accuracy and capture regional characteristics.
The ANN model achieved outstanding prediction accuracy, around 96%, demonstrating its capability to model heterogeneous nonlinearity in meteorological data. Nonetheless, applying ANN models must also account for several drawbacks, for instance their computational requirements and the risk of overfitting unless appropriate regularization is adopted. Moreover, the "black-box" nature of ANNs makes their predictions harder to interpret than those of certain other model types.
Among the machine learning approaches, the random forest model outperformed the others, combining good prediction accuracy and stability with smaller computational requirements. Integrating k-fold cross-validation further confirmed the reliability of all models and their ability to generalize across the dataset.
By systematically comparing these approaches and tailoring them to the particular nature of the Istanbul climate, this paper sets a benchmark for subsequent regional temperature forecasting research. The results also show that these approaches can play an important role in critical issues such as mitigating climate change, energy planning, and crop optimization. Future studies can develop these models further by gathering more public data, addressing the remaining limitations of these models, and exploring new hybrid approaches. Although this study demonstrated the usability of ANN and RF models for temperature prediction in Istanbul, the methodology can also be applied to other regional climate analyses. Future work can focus on optimizing the models' hyperparameters and further increasing prediction accuracy with different input variables. Comparing deep learning models with different network structures and algorithms is another potential research direction. Such studies will contribute to a better understanding of the effects of climate change and to taking effective countermeasures.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Data are contained within the article.

Conflicts of Interest

The author declares no conflict of interest.

References

1. Hunter, R.D.; Meentemeyer, R.K. Climatologically aided mapping of daily precipitation and temperature. J. Appl. Meteorol. 2005, 44, 1501–1510.
2. Derbyshire, J.; Morgan, J. Is seeking certainty in climate sensitivity measures counterproductive in the context of climate emergency? The case for scenario planning. Technol. Forecast. Soc. Change 2022, 182, 121811.
3. Mendelsohn, R. The impact of climate change on agriculture in developing countries. J. Nat. Resour. Policy Res. 2008, 1, 5–19.
4. Schulte, P.; Bhattacharya, A.; Butler, C.; Chun, H.; Jacklitsch, B.; Jacobs, T.; Kiefer, M.; Lincoln, J.; Pendergrass, S.; Shire, J.; et al. Advancing the framework for considering the effects of climate change on worker safety and health. J. Occup. Environ. Hyg. 2016, 13, 847–865.
5. Smith, D.M.; Cusack, S.; Colman, A.W.; Folland, C.K.; Harris, G.R.; Murphy, J.M. Improved surface temperature prediction for the coming decade from a global climate model. Science 2007, 317, 796–799.
6. Abhishek, K.; Singh, M.P.; Ghosh, S.; Anand, A. Weather Forecasting Model using Artificial Neural Network. Procedia Technol. 2012, 4, 311–318.
7. Almazroui, M.; Islam, M.N.; Jones, P.D. Urbanization effects on the air temperature rise in Saudi Arabia. Clim. Change 2013, 120, 109–122.
8. Bendre, M.R.; Manthalkar, R.R.; Thool, V.R. Modeling and predicting weather in agro-climatic scarcity zone using iterative approach. Decision 2017, 44, 51–67.
9. Musashi, J.P.; Pramoedyo, H.; Fitriani, R. Comparison of Inverse Distance Weighted and Natural Neighbor Interpolation Method at Air Temperature Data in Malang Region. CAUCHY J. Mat. Murni Dan Apl. 2018, 5, 48–54.
10. Jo, A.; Ryu, J.; Chung, H.; Choi, Y.; Jeon, S. Applicability of various interpolation approaches for high resolution spatial mapping of climate data in Korea. Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. ISPRS Arch. 2018, 42, 703–710.
11. Li, C.; Zhang, Y.; Zhao, G. Deep Learning with Long Short-Term Memory Networks for Air Temperature Predictions. In Proceedings of the 2019 International Conference on Artificial Intelligence and Advanced Manufacturing, AIAM 2019, Dublin, Ireland, 16–18 October 2019; pp. 243–249.
12. Tran, T.T.K.; Lee, T.; Kim, J.S. Increasing neurons or deepening layers in forecasting maximum temperature time series? Atmosphere 2020, 11, 1072.
13. Paul, S.; Roy, S. Forecasting the Average Temperature Rise in Bangladesh: A Time Series Analysis. J. Eng. Sci. 2020, 11, 83–91.
14. Fang, Z.; Crimier, N.; Scanu, L.; Midelet, A.; Alyafi, A.; Delinchant, B. Multi-zone indoor temperature prediction with LSTM-based sequence to sequence model. Energy Build. 2021, 245, 111053.
15. Şevgin, F.; Öztürk, A. Variation of temperature increase rate in the Northern Hemisphere according to latitude, longitude and altitude: The Turkey example. Sci. Rep. 2024, 14, 18207.
16. Price, I.; Sanchez-Gonzalez, A.; Alet, F.; Andersson, T.R.; El-Kadi, A.; Masters, D.; Ewalds, T.; Stott, J.; Mohamed, S.; Battaglia, P.; et al. Probabilistic weather forecasting with machine learning. Nature 2024, 637, 84–90.
17. Wang, Z.; Xue, W.; Li, K.; Tang, Z.; Liu, Y.; Zhang, F.; Cao, S.; Peng, X.; Wu, E.Q.; Zhou, H. Dynamic combustion optimization of a pulverized coal boiler considering the wall temperature constraints: A deep reinforcement learning-based framework. Appl. Therm. Eng. 2024, 259, 124923.
18. Yuan, S.; Wang, G.; Mu, B.; Zhou, F. TianXing: A Linear Complexity Transformer Model with Explicit Attention Decay for Global Weather Forecasting. Adv. Atmos. Sci. 2024, 42, 9–25.
19. Zhong, W.; Wu, Z. Forecasting East Asian winter temperature via subseasonal predictable mode analysis. Clim. Dyn. 2024, 62, 277–297.
20. Bochenek, B.; Ustrnul, Z. Machine Learning in Weather Prediction and Climate Analyses—Applications and Perspectives. Atmosphere 2022, 13, 180.
21. Rakhee; Hoda, M.N.; Bansal, S. Seasonal temperature forecasting using genetically tuned artificial neural network. Int. J. Inf. Technol. 2024, 16, 315–319.
22. Pala, Z.; Şevgin, F. Statistical modeling for long-term meteorological forecasting: A case study in Van Lake Basin. Nat. Hazards 2024, 120, 14101–14116.
23. Pala, Z. Comparative study on monthly natural gas vehicle fuel consumption and industrial consumption using multi-hybrid forecast models. Energy 2023, 263, 125826.
24. Basheer, I.A.; Hajmeer, M. Artificial neural networks: Fundamentals, computing, design, and application. J. Microbiol. Methods 2000, 43, 3–31.
25. Seo, J.; Park, S. Optimizing model parameters of artificial neural networks to predict vehicle emissions. Atmos. Environ. 2023, 294, 119508.
26. Bishop, C.M. Pattern Recognition and Machine Learning; Springer Science + Business Media: Berlin/Heidelberg, Germany, 2009.
27. Pala, Z.; Özkan, O. Artificial Intelligence Helps Protect Smart Homes against Thieves. DÜMF Mühendislik Derg. 2020, 11, 945–952.
28. Chen, C.; Wu, Z.; Sun, S.; Ban, P.; Ding, Z.; Xu, Z. Forecasting the ionospheric foF2 parameter one hour ahead using a support vector machine technique. J. Atmos. Sol. Terr. Phys. 2010, 72, 1341–1347.
29. Chicco, D.; Warrens, M.J.; Jurman, G. The coefficient of determination R-squared is more informative than SMAPE, MAE, MAPE, MSE and RMSE in regression analysis evaluation. PeerJ Comput. Sci. 2021, 7, e623.
30. Zhang, Z. Introduction to machine learning: K-nearest neighbors. Ann. Transl. Med. 2016, 4, 218.
31. Breiman, L. Random forests. Mach. Learn. 2001, 45, 5–32.
32. Pala, Z. Examining EMF Time Series Using Prediction Algorithms with R. IEEE Can. J. Electr. Comput. Eng. 2021, 44, 223–227.
33. Pala, Z.; Pala, A.F. Perform Time-series Predictions in the R Development Environment by Combining Statistical-based Models with a Decomposition-based Approach. J. Muş Alparslan Univ. Fac. Eng. Archit. 2020, 1, 55–63.
34. Pala, Z.; Atici, R. Forecasting Sunspot Time Series Using Deep Learning Methods. Sol. Phys. 2019, 294, 50.
Figure 1. Similarities in design and functionality of biological and artificial neurons [25].
Figure 2. Feedforward topology and recurrent topology model [25].
Figure 3. ANN model used in the Matlab environment.
Figure 4. Dataset graphs.
Figure 5. ANN regression graph.
Figure 6. ANN training performance graph.
Figure 7. Temperature prediction result of ANN training.
Figure 8. Visual relationship between input variables and output variables.
Figure 9. Graphs of metric values obtained depending on the k-fold parameter.
Table 1. Actual and predicted temperature values based on humidity, wind, and precipitation values.

| Humidity | Wind | Precipitation | Actual Temperature | Forecast Temperature | Difference |
|----------|------|---------------|--------------------|----------------------|------------|
| 60.00 | 3.40 | 22.90 | 7.60 | 7.61 | −0.01 |
| 62.00 | 4.50 | 172.10 | 4.50 | 4.52 | −0.02 |
| 68.00 | 3.10 | 40.10 | 7.90 | 7.92 | −0.02 |
| 72.00 | 3.20 | 122.60 | 10.90 | 10.90 | 0.00 |
| 73.60 | 3.30 | 27.90 | 16.40 | 16.39 | 0.01 |
| 62.40 | 3.80 | 7.00 | 22.70 | 22.74 | −0.04 |
| 63.70 | 4.20 | 34.80 | 23.40 | 23.40 | 0.00 |
| 67.50 | 5.30 | 8.10 | 22.20 | 22.14 | 0.06 |
| 67.60 | 4.50 | 0.00 | 21.50 | 21.52 | −0.02 |
| 69.50 | 3.40 | 58.80 | 14.70 | 14.71 | −0.01 |
| 72.50 | 4.60 | 124.00 | 13.20 | 13.17 | 0.03 |
| 74.80 | 4.20 | 97.80 | 10.80 | 10.82 | −0.02 |
| 78.50 | 3.90 | 115.10 | 6.90 | 6.91 | −0.01 |
| 71.00 | 3.10 | 8.70 | 10.30 | 10.30 | 0.00 |
| 71.00 | 3.90 | 103.10 | 9.00 | 9.00 | 0.00 |
| 71.70 | 3.10 | 32.90 | 14.30 | 14.30 | 0.00 |
| 64.80 | 3.30 | 30.60 | 17.60 | 17.60 | 0.00 |
| 59.20 | 3.00 | 17.60 | 21.50 | 21.50 | 0.00 |
| 61.60 | 3.50 | 0.00 | 25.40 | 25.40 | 0.00 |
| 66.30 | 3.40 | 57.10 | 25.30 | 25.29 | 0.01 |
| 66.90 | 3.80 | 18.30 | 20.80 | 20.79 | 0.01 |
| 70.40 | 2.30 | 35.60 | 20.20 | 19.91 | 0.29 |
| 74.10 | 3.10 | 74.10 | 16.20 | 16.21 | −0.01 |
| 72.20 | 3.70 | 140.30 | 9.70 | 9.70 | 0.00 |
| 74.00 | 3.80 | 134.60 | 5.10 | 5.10 | 0.00 |
| 73.00 | 3.00 | 49.20 | 4.00 | 4.01 | −0.01 |
| 74.80 | 2.90 | 43.70 | 7.50 | 7.49 | 0.01 |
| 68.70 | 2.90 | 59.20 | 12.60 | 12.64 | −0.04 |
| 69.30 | 2.80 | 22.60 | 17.40 | 17.41 | −0.01 |
| 60.00 | 3.40 | 22.90 | 21.20 | 21.18 | 0.02 |
Table 2. MAE, RMSE, and R-squared metric values obtained for each model depending on the cross-validation k-fold parameter values.

| Model Name | CV k-Fold | MAE | RMSE | R-Squared |
|------------|-----------|------|-------|-----------|
| LM | 130 | 9.72 | 10.10 | 0.998 |
| LM | 65 | 7.38 | 8.69 | 0.897 |
| LM | 50 | 6.69 | 8.25 | 0.783 |
| LM | 25 | 5.79 | 7.23 | 0.653 |
| LM | 10 | 5.28 | 6.28 | 0.492 |
| LM | 5 | 4.81 | 5.81 | 0.397 |
| SVM | 130 | 10.28 | 10.68 | 0.998 |
| SVM | 65 | 8.00 | 8.69 | 0.946 |
| SVM | 50 | 6.38 | 7.92 | 0.860 |
| SVM | 25 | 5.58 | 6.51 | 0.728 |
| SVM | 10 | 4.81 | 5.94 | 0.530 |
| SVM | 5 | 4.47 | 5.66 | 0.487 |
| KNN | 130 | 8.96 | 9.93 | 0.999 |
| KNN | 65 | 7.50 | 8.86 | 0.950 |
| KNN | 50 | 6.61 | 7.54 | 0.803 |
| KNN | 25 | 5.69 | 6.69 | 0.778 |
| KNN | 10 | 5.05 | 6.15 | 0.514 |
| KNN | 5 | 4.71 | 5.78 | 0.493 |
| RF | 130 | 8.91 | 9.73 | 0.997 |
| RF | 65 | 7.51 | 8.74 | 0.925 |
| RF | 50 | 6.42 | 8.03 | 0.845 |
| RF | 25 | 5.59 | 6.69 | 0.715 |
| RF | 10 | 4.92 | 6.03 | 0.489 |
| RF | 5 | 4.71 | 5.72 | 0.465 |

Share and Cite

Sevgin, F. Machine Learning-Based Temperature Forecasting for Sustainable Climate Change Adaptation and Mitigation. Sustainability 2025, 17, 1812. https://doi.org/10.3390/su17051812
