**4. Error and Convergence Analysis**

In this section, we analyze the error and convergence of the proposed numerical method for Equation (1). The analysis is carried out analytically with the help of the following lemma and theorems.

**Lemma 2** ([32])**.** *The truncation error* $\mathcal{R}\_r$ *defined by Equation (21) satisfies*

$$\mathcal{R}\_r \le \left[ \frac{1}{8w\_r \Gamma(1-\gamma)} + \frac{\gamma}{2w\_r \Gamma(3-\gamma)} \right] \max\_{t\_{l-1} \le \eta \le t\_l} |\mathcal{U}'(\eta)|\, \mathcal{L}\, \Delta t^{2-\gamma},\tag{29}$$

*where* $\mathcal{U}(\eta)$ *is the approximating function,* $w\_r$ *is the weight function at node* $t\_r$ *for* $r = 1, 2, \dots, M$*, and* $\mathcal{L}$ *is the Lipschitz constant on the interval* $[t\_{l-1}, t\_l]$*.*

**Proof.** For the detailed proof of this lemma, we refer to [32].
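The $\Delta t^{2-\gamma}$ order in (29) can be illustrated numerically. The sketch below uses the standard L1 discretization of the Caputo derivative as a stand-in for the discretization behind Equation (21), whose exact weights are not reproduced here; the test function $u(t) = t^2$ and $\gamma = 0.5$ are illustrative choices.

```python
import math

def caputo_l1(u, t_n, dt, gamma):
    """Standard L1 approximation of the Caputo derivative of order gamma at t_n."""
    n = round(t_n / dt)
    acc = 0.0
    for k in range(n):
        # L1 weights b_k = (k+1)^{1-gamma} - k^{1-gamma}
        b_k = (k + 1) ** (1 - gamma) - k ** (1 - gamma)
        acc += b_k * (u((n - k) * dt) - u((n - k - 1) * dt))
    return acc / (dt ** gamma * math.gamma(2 - gamma))

gamma = 0.5
u = lambda t: t ** 2
exact = 2.0 / math.gamma(3 - gamma)   # Caputo derivative of t^2 at t = 1

errors = [abs(caputo_l1(u, 1.0, 1.0 / n, gamma) - exact) for n in (40, 80, 160)]
orders = [math.log(errors[j] / errors[j + 1]) / math.log(2.0) for j in range(2)]
print(orders)   # observed orders should approach 2 - gamma = 1.5
```

Halving the step size reduces the error by a factor close to $2^{2-\gamma}$, consistent with the lemma.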

**Theorem 1.** *The error in the approximation of the function* $v(x)$ *by the first m terms of the series in Equation (16) is bounded by the sum of the absolute values of all the neglected coefficients in the series, i.e.,*

$$\mathcal{E}\_m = |v(x) - v\_m(x)| \le \sum\_{i=m+1}^{\infty} |c\_i|,\tag{30}$$

$\forall\, v(x)$, $\forall\, m$, *and* $x \in [0, 1]$.

**Proof.** The proof is immediate, since $|\mathcal{J}\_i^{\alpha,\beta}(x)| \le 1$, $\forall x \in [0, 1]$ and $i \ge 0$.
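The bound (30) can be checked in the $\alpha = \beta = 0$ special case, where the shifted Jacobi polynomials reduce to shifted Legendre polynomials $P_i(2x-1)$ with $|P_i(2x-1)| \le 1$ on $[0,1]$. A minimal sketch with the illustrative choice $v(x) = e^x$:

```python
import numpy as np
from numpy.polynomial import legendre

# Shifted Legendre basis P_i(2x - 1): alpha = beta = 0 case of shifted Jacobi,
# satisfying |P_i(2x - 1)| <= 1 on [0, 1].
v = lambda x: np.exp(x)                      # illustrative test function
n_terms, m = 12, 5

nodes, wts = legendre.leggauss(64)
x, w = (nodes + 1) / 2, wts / 2              # Gauss rule mapped to [0, 1]
c = np.array([(2 * i + 1) * np.sum(w * v(x) * legendre.Legendre.basis(i)(2 * x - 1))
              for i in range(n_terms)])      # c_i = (2i+1) int_0^1 v P_i(2x-1) dx

xs = np.linspace(0.0, 1.0, 201)
v_m = sum(c[i] * legendre.Legendre.basis(i)(2 * xs - 1) for i in range(m + 1))
error = np.max(np.abs(v(xs) - v_m))          # max |v - v_m| on [0, 1]
tail = np.sum(np.abs(c[m + 1:]))             # sum of neglected |c_i|
print(error, tail)                           # error should not exceed tail
```

The maximum pointwise error of the truncated series stays below the sum of the neglected coefficients, as Theorem 1 asserts.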

**Theorem 2.** *Let* $v(x)$ *be a square integrable function defined on* $[0, 1]$ *with* $|v(x)| \le \mathcal{M}\_1$*, where* $\mathcal{M}\_1$ *is a constant. Then* $v(x)$ *can be expanded as an infinite sum of Jacobi polynomials, and the infinite series converges to* $v(x)$ *uniformly, i.e.,*

$$v(x) = \sum\_{i=0}^{\infty} c\_i \mathcal{J}\_{i}^{\alpha,\beta}(x),\tag{31}$$

*where*

$$|c\_i| \le \frac{\mathcal{M}\_1 \Gamma(1+\beta)}{\Gamma(2+\alpha+\beta)} \frac{1}{i^3}, \quad i > 1,$$

*and* $\mathcal{E}\_m \to 0$.

**Proof.** From Equations (16) and (18), we have

$$v\_m(x) = \sum\_{i=0}^{m} c\_i \mathcal{J}\_i^{\alpha,\beta}(x),\tag{32}$$

where $c\_i$ are the unknown coefficients. Furthermore, from Equation (17), we obtain

$$\begin{split} c\_{i} &= \frac{1}{\mathcal{H}\_{i}^{\alpha,\beta}} \int\_{0}^{1} x^{\beta} (1-x)^{\alpha} v(x)\, \mathcal{J}\_{i}^{\alpha,\beta}(x)\, dx, \\ |c\_{i}| &= \left| \frac{1}{\mathcal{H}\_{i}^{\alpha,\beta}} \int\_{0}^{1} x^{\beta} (1-x)^{\alpha} v(x)\, \mathcal{J}\_{i}^{\alpha,\beta}(x)\, dx \right|, \\ &\leq \frac{\mathcal{M}\_{1}}{\mathcal{H}\_{i}^{\alpha,\beta}} \int\_{0}^{1} \Big| x^{\beta} (1-x)^{\alpha}\, \mathcal{J}\_{i}^{\alpha,\beta}(x) \Big|\, dx, \\ &\leq \frac{\mathcal{M}\_{1}}{\mathcal{H}\_{i}^{\alpha,\beta}} \frac{\Gamma(\alpha+i+1)}{\Gamma(\alpha+\beta+i+1)} \sum\_{m=0}^{i} \binom{i}{m} \frac{\Gamma(\alpha+\beta+i+m+1)}{\Gamma(\alpha+m+1)} \int\_{0}^{1} \Big| x^{\beta} (1-x)^{\alpha} (x-1)^{m} \Big|\, dx, \\ &\leq \frac{\mathcal{M}\_{1} (2i+1+\alpha+\beta)\Gamma(i+1+\alpha+\beta)\Gamma(1+\beta)}{\Gamma(i+1+\beta)\Gamma(2+\alpha+\beta)} \frac{1}{i^{4}}, \\ &\leq \frac{\mathcal{M}\_{1} \Gamma(1+\beta)}{\Gamma(2+\alpha+\beta)} \frac{1}{i^{3}}. \end{split}$$

Hence, the series $v\_m(x)$ converges to $v(x)$ uniformly.
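The coefficient decay in Theorem 2 can also be observed numerically. For $\alpha = \beta = 0$ the estimate reads $|c_i| \le \mathcal{M}_1 / i^3$, since $\Gamma(1+\beta)/\Gamma(2+\alpha+\beta) = 1$. The sketch below checks this for the illustrative bounded function $v(x) = 1/(1+x)$, for which $\mathcal{M}_1 = 1$:

```python
import numpy as np
from numpy.polynomial import legendre

# alpha = beta = 0 (shifted Legendre) case: Theorem 2 gives |c_i| <= M1 / i^3.
v = lambda x: 1.0 / (1.0 + x)                # bounded on [0, 1]; M1 = max|v| = 1
M1 = 1.0

nodes, wts = legendre.leggauss(80)
x, w = (nodes + 1) / 2, wts / 2              # Gauss rule mapped to [0, 1]
c = np.array([(2 * i + 1) * np.sum(w * v(x) * legendre.Legendre.basis(i)(2 * x - 1))
              for i in range(13)])

# the estimate is stated for i > 1
checks = [abs(c[i]) <= M1 / i ** 3 for i in range(2, 13)]
print(all(checks))
```

For this smooth test function the coefficients in fact decay much faster than $i^{-3}$, so the stated bound holds with a growing margin.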

**Theorem 3.** *Let* $h(t)$ *be an* $\mathcal{N}$ *times differentiable function defined on the interval* $[0, \tau]$*. Let* $v\_{\mathcal{N}}(t) = \sum\_{j\_1=0}^{\mathcal{N}} c\_{j\_1} \mathcal{J}\_{j\_1}^{\alpha,\beta}(t)$ *be the approximation of* $h(t)$*; then*

$$\|h(t) - v\_{\mathcal{N}}(t)\| \leq \frac{\mathcal{M}\mathcal{S}^{\mathcal{N}+1}}{(\mathcal{N}+1)!} \sqrt{\mathcal{B}\_{\tau}(1+\alpha,\ 1+\beta)},\tag{34}$$

*where* $\mathcal{M} = \max\_{t\in[0,\tau]} |h^{\mathcal{N}+1}(t)|$*,* $\mathcal{S} = \max\{\tau - t\_0,\ t\_0\}$*, and* $\mathcal{B}\_{\tau}(1+\alpha, 1+\beta)$ *denotes the incomplete Beta function. At* $\tau = 1$*, it reduces to the standard Beta function.*

**Proof.** By the Taylor series expansion, we have

$$h(t) = h(t\_0) + h'(t\_0)(t - t\_0) + \dots + h^{\mathcal{N}}(t\_0)\frac{(t - t\_0)^{\mathcal{N}}}{\mathcal{N}!} + h^{\mathcal{N}+1}(\zeta)\frac{(t - t\_0)^{\mathcal{N}+1}}{(\mathcal{N}+1)!},\tag{35}$$

where $t\_0 \in [0, \tau]$ and $\zeta \in [t\_0, t]$. Let

$$\mathcal{P}\_{\mathcal{N}}(t) = h(t\_0) + h'(t\_0)(t - t\_0) + \dots + \frac{h^{\mathcal{N}}(t\_0)(t - t\_0)^{\mathcal{N}}}{\mathcal{N}!},\tag{36}$$

then

$$|h(t) - \mathcal{P}\_{\mathcal{N}}(t)| = \left| h^{\mathcal{N}+1}(\zeta) \frac{(t - t\_0)^{\mathcal{N}+1}}{(\mathcal{N} + 1)!} \right|. \tag{37}$$

Since we assume that $v\_{\mathcal{N}}(t)$ is the best square approximation of $h(t)$, we have

$$\begin{split} \|h(t) - v\_{\mathcal{N}}(t)\|^2 &\le \|h(t) - \mathcal{P}\_{\mathcal{N}}(t)\|^2, \\ &= \int\_0^\tau w(t) [h(t) - \mathcal{P}\_{\mathcal{N}}(t)]^2 dt, \\ &= \int\_0^\tau w(t) \Big[ h^{\mathcal{N}+1}(\zeta) \frac{(t-t\_0)^{\mathcal{N}+1}}{(\mathcal{N}+1)!} \Big]^2 dt, \\ &\le \frac{\mathcal{M}^2}{((\mathcal{N}+1)!)^2} \int\_0^\tau (t-t\_0)^{2\mathcal{N}+2} w(t) dt, \\ &\le \frac{\mathcal{M}^2 \mathcal{S}^{2\mathcal{N}+2}}{((\mathcal{N}+1)!)^2} \int\_0^\tau t^{\beta} (1-t)^{\alpha} dt, \\ &= \frac{\mathcal{M}^2 \mathcal{S}^{2\mathcal{N}+2}}{((\mathcal{N}+1)!)^2} \mathcal{B}\_\tau(1+\alpha, 1+\beta). \end{split} \tag{38}$$

Hence,

$$\|h(t) - v\_{\mathcal{N}}(t)\| \le \frac{\mathcal{M}\mathcal{S}^{\mathcal{N}+1}}{(\mathcal{N}+1)!} \sqrt{\mathcal{B}\_{\tau}(1+\alpha,\ 1+\beta)}.\tag{39}$$
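The estimate (34) can be verified numerically in a simple special case: $\alpha = \beta = 0$, $\tau = 1$, $t_0 = 0$, so that the weight is $w(t) = 1$, $\mathcal{S} = 1$, and $\mathcal{B}_\tau(1+\alpha, 1+\beta) = \int_0^1 dt = 1$. The illustrative choice $h(t) = \cos t$ gives $\mathcal{M} \le 1$, and the weighted-$L^2$ error of the best approximation is computed from the tail of its shifted Legendre expansion:

```python
import math
import numpy as np
from numpy.polynomial import legendre

# alpha = beta = 0, tau = 1, t0 = 0: weight w(t) = 1, S = 1,
# B_tau(1, 1) = 1, so the bound (34) reduces to M / (N + 1)!.
h = lambda t: np.cos(t)                      # illustrative; all derivatives <= 1
N, M, S = 4, 1.0, 1.0

nodes, wts = legendre.leggauss(64)
t, w = (nodes + 1) / 2, wts / 2              # Gauss rule mapped to [0, 1]
c = np.array([(2 * i + 1) * np.sum(w * h(t) * legendre.Legendre.basis(i)(2 * t - 1))
              for i in range(30)])

# L2 error of the degree-N projection: ||P_i(2t - 1)||^2 = 1/(2i + 1) on [0, 1]
err = math.sqrt(sum(c[i] ** 2 / (2 * i + 1) for i in range(N + 1, 30)))
bound = M * S ** (N + 1) / math.factorial(N + 1)
print(err <= bound)
```

The best-approximation error is far below the Taylor-remainder bound, as expected, since the projection can only improve on the Taylor polynomial $\mathcal{P}_{\mathcal{N}}(t)$.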

**Theorem 4.** *Let* $v(x, t)$ *be a continuous function satisfying the conditions (2) for any* $t$*, and let* $g(x, t)$ *be continuous. Assuming* $v\_{\mathcal{N}}(x, t\_r) = v\_{\mathcal{N}}^r(x) = \sum\_{s\_1=0}^{\mathcal{N}} c\_{s\_1}(t\_r)\, \mathcal{J}\_{s\_1}^{\alpha,\beta}(x)$ *is the numerical approximation of the scheme (27), then the scheme (27) is unconditionally stable, and for any* $r \ge 0$*, it holds that*

$$\|v\_{\mathcal{N}}^r(x)\|\_{L^2} \le \frac{w\_0}{w\_r} \|v\_{\mathcal{N}}^0(x)\|\_{L^2} + \sum\_{l=1}^{r-1} \frac{h\_l}{w\_r} \|g^l\|\_{L^2} + \frac{\eta\_r}{q\_r w\_r} \|g^r\|\_{L^2},\tag{40}$$

*where* $g(x, t\_r) = g^r(x)$*, and* $h\_l = \left(\frac{1}{q\_l} - \frac{1}{q\_{l+1}}\right)\eta\_l$*,* $l = 1, 2, \dots, r-1$*.*

**Proof.** The proof of this theorem is similar to that of Theorem 1 in [32], where the authors proved stability for the time-fractional KdV equation. Here, we extend the argument to the time-fractional diffusion equation. To prove Theorem 4, we first rewrite Equation (26), with the summation taken up to time step $t\_{r-1}$, in discrete form. Thus, we have

$$\frac{1}{\eta\_{r}}q\_{r}w\_{r}v(x,t\_{r}) = \frac{1}{\eta\_{r}} \left[ \sum\_{l=1}^{r-1} (q\_{l+1} - q\_l)w\_{l}v(x,t\_{l}) + q\_1 w\_0 v(x,t\_0) \right] + v''(x,t\_{r}) + g(x,t\_{r}), \tag{41}$$

where $r = 1, 2, \dots, M$, and $\eta\_l = w\_l \Gamma(2 - \gamma)$.

Let $w^{\alpha,\beta}(x) = x^{\beta}(1 - x)^{\alpha}$, and let $u\_{\mathcal{N}-2}^r(x) = u\_{\mathcal{N}-2}(x, t\_r)$ be the polynomial of degree $\mathcal{N} - 2$ satisfying $v\_{\mathcal{N}}^r(x) = u\_{\mathcal{N}-2}^r(x)\, w^{\alpha,\beta}(x)$. Multiplying both sides of Equation (41) by $u\_{\mathcal{N}-2}(x\_i, t\_r)\, w^{\alpha,\beta}(x\_i)$, and taking the summation on $i$ from 0 to $\mathcal{N}$, we have

$$\begin{split} \sum\_{i=0}^{\mathcal{N}} \frac{1}{\eta\_{r}} q\_{r} w\_{r} v(x\_{i}, t\_{r})\, u\_{\mathcal{N}-2}(x\_{i}, t\_{r})\, w^{\alpha,\beta}(x\_{i}) &= \sum\_{i=0}^{\mathcal{N}} \frac{1}{\eta\_{r}} \left[ \sum\_{l=1}^{r-1} (q\_{l+1} - q\_{l}) w\_{l} v(x\_{i}, t\_{l}) + q\_{1} w\_{0} v(x\_{i}, t\_{0}) \right] u\_{\mathcal{N}-2}(x\_{i}, t\_{r})\, w^{\alpha,\beta}(x\_{i}) \\ &\quad + \sum\_{i=0}^{\mathcal{N}} \left[ v''(x\_{i}, t\_{r}) + g(x\_{i}, t\_{r}) \right] u\_{\mathcal{N}-2}(x\_{i}, t\_{r})\, w^{\alpha,\beta}(x\_{i}), \end{split} \tag{42}$$

where $w^{\alpha,\beta}(x\_i)$ is the corresponding weight function. Since the degree of $v\_{\mathcal{N}}^r(x)$ does not exceed $\mathcal{N} + 1$, then from Equation (11),

$$(v\_{\mathcal{N}}^r,\ u\_{\mathcal{N}-2}^r)\_{w^{\alpha,\beta}(x)} = (v\_{\mathcal{N}}^r,\ v\_{\mathcal{N}}^r). \tag{43}$$

It can be easily shown, by integrating by parts and using the boundary conditions (2), that

$$\int\_{0}^{1} v''(x, t\_{r})\, u\_{\mathcal{N}-2}(x, t\_{r})\, w^{\alpha,\beta}(x)\, dx = \int\_{0}^{1} v''(x, t\_{r})\, v(x, t\_{r})\, dx \le 0. \tag{44}$$

Now, the discrete form of Equation (42) at the nodes $x\_i$ can be rewritten as

$$q\_{r} w\_{r} \|v\_{\mathcal{N}}^r(x)\|\_{L^2} \le \sum\_{l=1}^{r-1} (q\_{l+1} - q\_l) w\_l \|v\_{\mathcal{N}}^l(x)\|\_{L^2} + q\_1 w\_0 \|v\_{\mathcal{N}}^0(x)\|\_{L^2} + \eta\_r \|g^r(x)\|\_{L^2},\tag{45}$$

by using the Cauchy–Schwarz inequality and Lemma 1. The remaining part of the proof can be completed by following steps similar to those in Theorem 1 of [32].
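As a consistency check on the stability bound (40), one can take the recursive inequality (45) with equality (the worst case) and verify that unrolling the recursion never exceeds the right-hand side of (40). The sketch below does this with hypothetical positive, increasing coefficients $q_l$ and positive weights $w_l$; the scheme's actual values from Equations (21)–(27) are not reproduced here.

```python
import math
import numpy as np

# Hypothetical positive coefficients standing in for those of scheme (27):
# increasing q_l, positive w_l, and eta_l = w_l * Gamma(2 - gamma) as in the text.
rng = np.random.default_rng(0)
gamma, steps = 0.4, 30
q = np.cumsum(rng.uniform(0.1, 1.0, steps + 1))   # q_1 < q_2 < ... (q[0] unused)
w = rng.uniform(0.5, 2.0, steps + 1)              # w_0, w_1, ..., w_steps
eta = w * math.gamma(2 - gamma)
G = rng.uniform(0.0, 1.0, steps + 1)              # stands for ||g^l||_{L2}
E = np.zeros(steps + 1)
E[0] = 1.0                                        # stands for ||v_N^0||_{L2}

ok = True
for r in range(1, steps + 1):
    # inequality (45) taken with equality (the worst case for the recursion)
    s = sum((q[l + 1] - q[l]) * w[l] * E[l] for l in range(1, r))
    E[r] = (s + q[1] * w[0] * E[0] + eta[r] * G[r]) / (q[r] * w[r])
    # right-hand side of (40), with h_l = (1/q_l - 1/q_{l+1}) * eta_l
    bound = (w[0] / w[r]) * E[0] \
        + sum((1 / q[l] - 1 / q[l + 1]) * eta[l] / w[r] * G[l] for l in range(1, r)) \
        + eta[r] / (q[r] * w[r]) * G[r]
    ok = ok and E[r] <= bound + 1e-10
print(ok)
```

Unrolling (45) with equality reproduces the right-hand side of (40) exactly, so any solution of the inequality is dominated by the stated bound.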
