**Citation:** Srikanth, S.; Babu, M.; Masnavi, H.; Kumar Singh, A.; Kruusamäe, K.; Krishna, K.M. Fast Adaptation of Manipulator Trajectories to Task Perturbation by Differentiating through the Optimal Solution. *Sensors* **2022**, *22*, 2995. https://doi.org/10.3390/s22082995

Academic Editor: Gregor Klancar

Received: 1 March 2022; Accepted: 6 April 2022; Published: 13 April 2022

**Publisher's Note:** MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.

**Copyright:** © 2022 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).

*1.1. Main Idea*

The proposed work explores an alternate approach based on differentiating the optimal solution with respect to the problem parameters, hereafter referred to as Argmin differentiation [4]. To understand this further, consider the following constrained optimization problem over the variable *ξ* (e.g., joint angles) and the parameter vector **p** (e.g., end-effector position).

$$\boldsymbol{\xi}^\*(\mathbf{p}) = \arg\min\_{\boldsymbol{\xi}} f(\boldsymbol{\xi}, \mathbf{p}) \tag{1}$$

$$g\_i(\boldsymbol{\xi}, \mathbf{p}) \le 0, \quad \forall i = 1, 2, \dots, n \tag{2}$$

$$h\_j(\boldsymbol{\xi}, \mathbf{p}) = 0, \quad \forall j = 1, 2, \dots, m \tag{3}$$
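A concrete instance of (1)–(3) may help fix ideas. The sketch below is a toy two-dimensional problem assumed purely for illustration (the cost, constraint, and dimensions are not from the paper): *ξ* plays the role of joint angles, the scalar *p* the role of the end-effector target, and we solve for *ξ*\*(*p*) numerically.

```python
# Toy instance of problem (1)-(3), assumed for illustration only:
# minimize f(xi, p) = ||xi||^2 subject to h(xi, p) = xi[0] + xi[1] - p = 0.
import numpy as np
from scipy.optimize import minimize

def solve(p):
    """Return xi*(p) = argmin ||xi||^2  s.t.  xi[0] + xi[1] = p."""
    res = minimize(
        lambda xi: xi @ xi,                        # cost f(xi, p)
        x0=np.zeros(2),                            # initialization
        constraints=[{"type": "eq",                # equality constraint h
                      "fun": lambda xi: xi[0] + xi[1] - p}],
    )
    return res.x

print(solve(2.0))   # the analytic optimum for p = 2 is [1, 1]
```

Here the optimal solution depends smoothly on the parameter *p*, which is exactly the dependence *ξ*\*(**p**) that Argmin differentiation exploits.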

The optimal solution *ξ*\* satisfies the following Karush–Kuhn–Tucker (KKT) conditions.

$$\nabla f(\boldsymbol{\xi}^\*, \mathbf{p}) + \sum\_{i} \lambda\_i \nabla g\_i(\boldsymbol{\xi}^\*, \mathbf{p}) + \sum\_{j} \mu\_j \nabla h\_j(\boldsymbol{\xi}^\*, \mathbf{p}) = 0 \tag{4a}$$

$$g\_i(\boldsymbol{\xi}^\*, \mathbf{p}) \le 0, \quad \forall i \tag{4b}$$

$$h\_j(\boldsymbol{\xi}^\*, \mathbf{p}) = 0, \quad \forall j \tag{4c}$$

$$\lambda\_i \ge 0, \quad \lambda\_i g\_i(\boldsymbol{\xi}^\*, \mathbf{p}) = 0, \quad \forall i. \tag{4d}$$
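The KKT conditions can be checked numerically. As a minimal sketch, assume a toy equality-constrained quadratic program (so the *λ* terms in (4a)–(4d) drop out): for f(*ξ*) = ½ *ξ*ᵀQ*ξ* and h(*ξ*, **p**) = A*ξ* − **p**, the stationarity condition (4a) and the primal feasibility condition (4c) form a linear system in (*ξ*\*, *μ*). The data below are hypothetical.

```python
# Numerical check of (4a) and (4c) for a toy equality-constrained QP:
# f(xi) = 0.5 * xi'Q xi,  h(xi, p) = A xi - p   (no inequality constraints).
import numpy as np

Q = np.array([[2.0, 0.5], [0.5, 1.0]])   # positive-definite cost Hessian
A = np.array([[1.0, 1.0]])               # one equality constraint
p = np.array([1.0])

# Stack (4a) and (4c) into the linear KKT system [Q A'; A 0][xi; mu] = [0; p].
K = np.block([[Q, A.T], [A, np.zeros((1, 1))]])
sol = np.linalg.solve(K, np.concatenate([np.zeros(2), p]))
xi_star, mu = sol[:2], sol[2:]

# (4a): grad f + mu * grad h = Q xi* + A' mu must vanish at the optimum.
stationarity = Q @ xi_star + A.T @ mu
# (4c): h(xi*, p) = A xi* - p must vanish.
primal = A @ xi_star - p
print(np.abs(stationarity).max(), np.abs(primal).max())
```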

The gradients in (4a) are taken with respect to *ξ*. The variables *λ<sub>i</sub>*, *μ<sub>j</sub>* are called the Lagrange multipliers. Now, consider a scenario where the optimal solution *ξ*\* computed for the parameter **p** needs to be adapted to the perturbed parameter set **p** + Δ**p**. As mentioned earlier, one possible approach is to re-solve the optimization with *ξ*\* as the warm-start initialization. Alternately, for a Δ**p** of small magnitude, an analytical perturbation model can be constructed. To be precise, we can compute the first-order differential of the KKT conditions (4a)–(4d) to obtain analytical gradients in the following form [5–7].

$$(\nabla\_{\mathbf{p}}\boldsymbol{\xi}^\*, \nabla\_{\mathbf{p}}\lambda\_i, \nabla\_{\mathbf{p}}\mu\_j) = \mathbf{F}(\boldsymbol{\xi}^\*, \mathbf{p}, \lambda\_i, \mu\_j) \tag{5}$$
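For an equality-constrained QP, the map **F** in (5) can be written down in closed form, since the KKT system is linear in **p**. The sketch below uses the same hypothetical toy data as before (min ½ *ξ*ᵀQ*ξ* s.t. A*ξ* = **p**): differentiating K [*ξ*; *μ*] = [0; **p**] with respect to **p** gives [∇<sub>**p**</sub>*ξ*\*; ∇<sub>**p**</sub>*μ*] = K⁻¹ [0; I], which we cross-check against finite differences.

```python
# Closed-form gradients (5) for a toy equality-constrained QP:
# min 0.5 * xi'Q xi  s.t.  A xi = p   (hypothetical data).
# Its KKT system  K [xi; mu] = [0; p]  is linear, so differentiating
# w.r.t. p yields  [d xi/dp; d mu/dp] = K^{-1} [0; I].
import numpy as np

Q = np.array([[2.0, 0.5], [0.5, 1.0]])
A = np.array([[1.0, 1.0]])
K = np.block([[Q, A.T], [A, np.zeros((1, 1))]])

def solve(p):
    """Solve the KKT system for the stacked vector [xi*(p); mu(p)]."""
    return np.linalg.solve(K, np.concatenate([np.zeros(2), p]))

# Analytical gradients: solve K X = [0; I] for X = d(xi, mu)/dp.
rhs_jac = np.concatenate([np.zeros((2, 1)), np.eye(1)])
grads = np.linalg.solve(K, rhs_jac)            # shape (3, 1)

# Finite-difference check of the same gradients.
p, eps = np.array([1.0]), 1e-6
fd = (solve(p + eps) - solve(p - eps)) / (2 * eps)
print(np.abs(grads[:, 0] - fd).max())
```

For a general nonlinear problem the same recipe applies, except that K becomes the Jacobian of the KKT residual evaluated at (*ξ*\*, *λ*, *μ*), as in the implicit function theorem.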

Multiplying the gradients with Δ**p** gives us an analytical expression for the new solution and Lagrange multipliers corresponding to the perturbed parameter set [7].
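The first-order update described above can be sketched on the same hypothetical toy QP. Because that problem's KKT system is linear in **p**, the perturbation model reproduces the re-solved optimum exactly; for general nonlinear problems the update is only a first-order approximation, accurate for small Δ**p**.

```python
# First-order adaptation:  sol(p + dp) ≈ sol(p) + (d sol/dp) dp,
# where sol stacks the new solution xi* and the Lagrange multiplier mu.
# Toy equality-constrained QP (min 0.5*xi'Q xi s.t. A xi = p), assumed data.
import numpy as np

Q = np.array([[2.0, 0.5], [0.5, 1.0]])
A = np.array([[1.0, 1.0]])
K = np.block([[Q, A.T], [A, np.zeros((1, 1))]])

def solve(p):
    """Solve the KKT system for [xi*(p); mu(p)]."""
    return np.linalg.solve(K, np.concatenate([np.zeros(2), p]))

p, dp = np.array([1.0]), np.array([0.2])
grads = np.linalg.solve(K, np.concatenate([np.zeros((2, 1)), np.eye(1)]))

predicted = solve(p) + grads @ dp   # analytical perturbation model
resolved = solve(p + dp)            # full re-solve for comparison
print(np.abs(predicted - resolved).max())
```

The attraction of the perturbation model is that the update is a single matrix-vector product, whereas re-solving requires a fresh optimization for every Δ**p**.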
