*2.2. Trend Analysis*

Time series of annual and seasonal rainfall were subjected to the Mann–Kendall test to detect possible trends over the period 1961–2016. It is the most widely used test for trend analysis in climatological time series [32].

The Mann–Kendall test is a non-parametric statistical test to detect the presence of a monotonic increasing or decreasing trend within a time series [33,34]. The advantage of the non-parametric tests over the parametric tests is that they are robust and more suitable for non-normally distributed data with missing and extreme values, frequently encountered in environmental time series [35].

The Mann–Kendall test statistic *S* is calculated as:

$$S = \sum\_{k=1}^{n-1} \sum\_{j=k+1}^{n} \text{sign}(x\_j - x\_k),\tag{1}$$

where *n* is the number of data points, *xj* and *xk* are the data values at times *j* and *k* (*j* > *k*), respectively, and sign(*xj* − *xk*) is the sign function defined as:

$$\text{sign}(x\_j - x\_k) = \begin{cases} 1 & \text{if } (x\_j - x\_k) > 0 \\ 0 & \text{if } (x\_j - x\_k) = 0 \\ -1 & \text{if } (x\_j - x\_k) < 0 \end{cases}. \tag{2}$$
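Equations (1) and (2) can be sketched in a few lines of code; the function name `mk_s` and the sample series below are illustrative, not part of the study.

```python
# Sketch of the Mann-Kendall S statistic (Eqs. 1-2).
def mk_s(x):
    n = len(x)
    s = 0
    for k in range(n - 1):
        for j in range(k + 1, n):
            diff = x[j] - x[k]
            # (diff > 0) - (diff < 0) is the sign function of Eq. (2)
            s += (diff > 0) - (diff < 0)
    return s

# A strictly increasing series attains the maximum S = n(n-1)/2.
print(mk_s([1, 2, 3, 4, 5]))  # 10
```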

The variance is computed as:

$$VAR(S) = \frac{n(n-1)(2n+5) - \sum\_{i=1}^{P} t\_i(t\_i - 1)(2t\_i + 5)}{18},\tag{3}$$

where *n* is the number of data points, *P* is the number of tied groups, the summation sign indicates summation over all tied groups, and *ti* is the number of data values in the *i*th tied group. A tied group is a set of sample data having the same value; in the case of no tied groups, the correction term can be ignored. In the case where the sample size *n* > 30, the standard normal test statistic *Z* is estimated by:
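A minimal sketch of Equation (3), including the tie correction; the function name `mk_var` and the example series are hypothetical, not data from the study.

```python
from collections import Counter

# Sketch of VAR(S) from Eq. (3), with the tie correction.
def mk_var(x):
    n = len(x)
    var = n * (n - 1) * (2 * n + 5)
    # Each tied group of size t_i > 1 subtracts t_i(t_i - 1)(2t_i + 5)
    for t in Counter(x).values():
        if t > 1:
            var -= t * (t - 1) * (2 * t + 5)
    return var / 18.0
```

For example, the series `[1, 2, 2, 3]` has one tied group of size 2, so VAR(S) = (156 − 18)/18 ≈ 7.67, versus 156/18 ≈ 8.67 without ties.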

$$Z = \begin{cases} \frac{S-1}{\sqrt{VAR(S)}} & \text{if } S > 0 \\ 0 & \text{if } S = 0 \\ \frac{S+1}{\sqrt{VAR(S)}} & \text{if } S < 0 \end{cases}. \tag{4}$$

Positive values of *Z* indicate increasing trends, whereas negative *Z* values indicate decreasing trends. Trend testing is done at a specific significance level. When |*Z*| > *Z*1−α/2, the null hypothesis is rejected and a significant trend exists in the time series. The value of *Z*1−α/2 is obtained from the standard normal distribution table. In this study, the significance level α = 0.05 was used. At the 5% significance level, the null hypothesis of no trend is rejected if |*Z*| > 1.96.
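Equation (4) and the two-sided test at the 5% level can be sketched as follows; the values of *S* and VAR(*S*) below are illustrative, not results from the study.

```python
import math

# Sketch of Eq. (4): Z with the +/-1 continuity correction.
def mk_z(s, var_s):
    if s > 0:
        return (s - 1) / math.sqrt(var_s)
    if s < 0:
        return (s + 1) / math.sqrt(var_s)
    return 0.0

z = mk_z(120, 1000.0)        # hypothetical S and VAR(S)
significant = abs(z) > 1.96  # reject H0 of no trend at alpha = 0.05
```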

Furthermore, a change-point analysis approach was applied, using the Change-Point Analyzer (CPA) [36]. This method iteratively uses a combination of cumulative sum charts (CUSUM) and bootstrapping to detect whether a change in the mean of the rainfall time series has taken place. A sudden change in the direction of the CUSUM indicates a sudden shift or change in the average. Additionally, trend magnitudes were computed by employing the Theil–Sen approach (TSA) [37,38], which estimates the trend magnitude as the slope β, often referred to as Sen's slope [38]. It is preferable to linear regression because it limits the influence of outliers on the slope [39].
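The Theil–Sen estimator is simply the median of all pairwise slopes. A minimal sketch under the assumption of equally spaced observations (the function name `sen_slope` is hypothetical; in practice `scipy.stats.theilslopes` provides the same estimator):

```python
import statistics

# Sketch of Sen's slope: the median of all pairwise slopes
# (x[j] - x[k]) / (j - k) for j > k, assuming unit time steps.
def sen_slope(x):
    slopes = [(x[j] - x[k]) / (j - k)
              for k in range(len(x) - 1)
              for j in range(k + 1, len(x))]
    return statistics.median(slopes)
```

Because the median is taken over all pairwise slopes, a single outlier shifts the estimate far less than it would shift a least-squares regression line.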
