    # forward model: integrate the background state forward in time with RK4
    for k in range(nt):
        ub[:,k+1] = RK4(rhs,ub[:,k],dt,*args)

    # backward adjoint: march the adjoint variable lambda from t_N back to t_0
    k = ind_m[-1]
    fk[:,-1] = (JObsOp(ub[:,k])).T @ Ri @ (w[:,-1]-ObsOp(ub[:,k]))
    lam[:,k] = fk[:,-1]  # lambda_N = f_N
    km = len(ind_m)-2
    for k in range(ind_m[-1],0,-1):
        DM = JRK4(rhs,Jrhs,ub[:,k-1],dt,*args)
        lam[:,k-1] = (DM).T @ lam[:,k]
        if k-1 == ind_m[km]:  # add the observational forcing at measurement times
            fk[:,km] = (JObsOp(ub[:,k-1])).T @ Ri @ (w[:,km]-ObsOp(ub[:,k-1]))
            lam[:,k-1] = lam[:,k-1] + fk[:,km]
            km = km - 1
    dJ0 = -lam[:,0]  # gradient of J with respect to the initial condition
    return dJ0
```
The gradient $\nabla J(\mathbf{u}(t_0))$ should be used in a minimization algorithm to update the initial condition for the next iteration. One simple algorithm is gradient descent, where an updated value of the initial state is computed as $\mathbf{u}(t_0)_{new} = \mathbf{u}(t_0)_{old} - \beta_n \nabla J(\mathbf{u}(t_0)_{old})$, where $\beta_n$ is a step parameter. This can be normalized as $\mathbf{u}(t_0)_{new} = \mathbf{u}(t_0)_{old} - \beta \, \dfrac{\nabla J(\mathbf{u}(t_0)_{old})}{\left\| \nabla J(\mathbf{u}(t_0)_{old}) \right\|}$. The value of $\beta$ might be predefined or, more efficiently, updated at each iteration using an additional optimization algorithm (e.g., a line search). For the sake of completeness, we present a line-search routine in Listing 7 using the Golden search algorithm. This is based on the definition of the cost functional in Listing 8.
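As a minimal sketch (not part of the original listings), the normalized update above can be wrapped in a simple descent loop. Here `grad_J` stands for any routine returning the adjoint-based gradient $\nabla J(\mathbf{u}(t_0))$, such as the one computed above, and `beta`, `max_iter`, and `tol` are illustrative parameters.

```
import numpy as np

def steepest_descent(u0, grad_J, beta=0.1, max_iter=200, tol=1e-6):
    # Illustrative driver only: grad_J(u) is assumed to return the
    # gradient dJ0 produced by the adjoint sweep above.
    u = np.copy(u0)
    for _ in range(max_iter):
        dJ = grad_J(u)                # gradient of J w.r.t. u(t0)
        gnorm = np.linalg.norm(dJ)
        if gnorm < tol:               # stop when the gradient is nearly zero
            break
        u = u - beta * dJ / gnorm     # normalized gradient-descent step
    return u
```

A fixed `beta` works for simple problems, but replacing it with a line search (as in Listing 7) typically reduces the number of iterations.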


**Listing 7.** A line-search Python function using the Golden search method.

```
def GoldenAlpha(p,rhs,ObsOp,t,ind_m,u0,w,R,opt,*args):