#### *1.2. Contribution*

**Algorithmic Contribution:** A critical bottleneck in using a gradient map of the form (5) to compute perturbed solutions is that the mapping between Δ**p** and *λ<sup>i</sup>* is highly discontinuous: even a small Δ**p** can lead to large changes in the so-called active set of the inequality constraints. It thus becomes necessary to develop additional active-set prediction mechanisms [7]. In this paper, we bypass this complication by instead focusing on parametric optimization with only bound constraints on the variable set. Argmin differentiation of such problems has a simpler structure, which we leverage to develop a line-search based algorithm that incrementally adapts joint trajectories to large changes in the parameters/tasks. To give a sense of what "large perturbation" means, our algorithm can adapt the joint trajectories of a Franka Panda arm to a perturbation of up to 30 cm in the goal position, which is almost 30% of the workspace of the Franka arm.
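To make the bound-constrained Argmin differentiation idea concrete, the following is a minimal numerical sketch on a toy quadratic program; the cost, its Hessian `Q`, and the perturbation `dp` are illustrative assumptions, not the paper's trajectory cost.

```python
import numpy as np

# Toy parametric problem: xi*(p) = argmin_xi 0.5*xi^T Q xi - p^T xi.
# Away from active bounds, the implicit function theorem gives the gradient map
#   d(xi*)/dp = -[grad^2_xi f]^{-1} grad_{xi,p} f = Q^{-1}  (here grad_{xi,p} f = -I).

Q = np.array([[2.0, 0.5],      # positive-definite Hessian (assumed)
              [0.5, 1.0]])
p = np.array([1.0, -0.5])

xi_star = np.linalg.solve(Q, p)   # argmin for the prior parameter p
dxi_dp = np.linalg.inv(Q)         # argmin-differentiation Jacobian

# First-order prediction of the perturbed solution, cf. a gradient map of the form (5):
dp = np.array([0.1, 0.05])
xi_pred = xi_star + dxi_dp @ dp

# For a quadratic cost the first-order prediction is exact:
xi_exact = np.linalg.solve(Q, p + dp)
```

For the non-quadratic trajectory costs considered here the prediction is only first-order accurate, which is why a line search over the step size becomes necessary.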

**Application Contribution**: For the first time, we apply the Argmin differentiation concept to the problem of joint trajectory optimization for manipulators under end-effector task constraints. We consider a diverse class of cost functions to handle perturbations in (i) joint configurations or (ii) end-effector way-points of orientation-constrained end-effector trajectories. We present an extensive benchmarking of our algorithm's performance as a function of the perturbation magnitude. We also show that our algorithm outperforms the warm-start trajectory optimization approach in computation time by several orders of magnitude while achieving similar solution quality, as measured by task residuals and the smoothness of the resulting trajectory.

#### *1.3. Related Works*

The concept of Argmin differentiation has been around for a few decades, although often under the name of sensitivity analysis [8,9]. However, of late it has seen a resurgence, especially in the context of end-to-end learning of control policies [10,11]. Our proposed work is more closely related to those that use Argmin differentiation for motion planning or feedback control. In this context, a natural application of Argmin differentiation is in bi-level trajectory optimization, where the gradients of the optimal solution from the lower level are propagated to optimize the cost function at the higher level. This technique has been applied to both manipulation and navigation problems in existing works [6,12]. Alternately, Argmin differentiation can also be used for the correction of prior-computed trajectories [7,13].

To the best of our knowledge, no existing work uses Argmin differentiation for the adaptation of task-constrained manipulator joint trajectories. The work closest to our approach is [5], which uses it to accelerate the inverse kinematics problem. Along similar lines, [7] considers a very specific example of perturbation in the end-effector goal position. In contrast to these two cited works, we consider a much more diverse class of task constraints. Furthermore, our formulation also differs from [7] in important ways at the algorithmic level. The authors of [7] use the log-barrier function to include inequality constraints as penalties in the cost function. In contrast, we note that in the context of the task-constrained trajectory optimization considered in this paper, the joint angle limits are the most critical. The velocity and acceleration constraints can always be satisfied through time-scaling based pre-processing [14]. Thus, by choosing a way-point parametrization for the joint trajectories, we formulate the underlying optimization with just box constraints on the joint angles. This, in turn, allows us to treat this constraint through simple projection (Line 4 in Algorithm 1) without disturbing the structure of the cost function and the resulting Jacobian and Hessian matrices obtained by Argmin differentiation.
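Because the only remaining inequality constraints are box bounds on the joint way-points, the projection step reduces to an element-wise clamp. A minimal sketch, assuming NumPy arrays for the way-point vector and its bounds (the helper name `project` is our illustration):

```python
import numpy as np

def project(xi, xi_lb, xi_ub):
    """Euclidean projection of the way-point vector xi onto the box [xi_lb, xi_ub]."""
    return np.clip(xi, xi_lb, xi_ub)

# Joint values outside the bounds are clamped; interior values pass through unchanged.
xi = np.array([-2.0, 0.3, 3.5])
projected = project(xi, xi_lb=-np.ones(3), xi_ub=np.ones(3))  # -> [-1.0, 0.3, 1.0]
```

Unlike a log-barrier penalty, this projection leaves the cost function, and hence the Jacobian and Hessian produced by Argmin differentiation, untouched.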

**Algorithm 1** Line-Search Based Joint Trajectory Adaptation to Task Perturbation

1: Initialize ${}^{k}\xi^{*}$ as the solution for the prior parameter ${}^{k}\mathbf{p}$, the Hessian ${}^{k}\nabla^{2}_{\xi} f({}^{k}\xi, \mathbf{p})$, the gradient $\nabla_{\xi, p_i} f({}^{k}\xi, \mathbf{p})$, and ${}^{k}\Delta\mathbf{p} = \mathbf{p} - {}^{k}\mathbf{p}$

2: **while** $\eta > 0$ **do**

3: Compute the largest step size $\eta$ for which the first-order update does not increase the cost:

$$\max \eta \tag{6a}$$

$$f({}^{k}\xi^{*}(\mathbf{p} + \eta \Delta \mathbf{p}), \mathbf{p} + \Delta \mathbf{p}) \le f({}^{k}\xi^{*}, \mathbf{p} + \Delta \mathbf{p}) \tag{6b}$$

and update

$${}^{k+1}\xi^{*} = {}^{k}\xi^{*} + \eta \nabla_{\mathbf{p}}\xi^{*}\, {}^{k}\Delta\mathbf{p} \tag{7}$$

4: Project onto the joint-angle bounds:

$${}^{k+1}\xi^{*} = \textit{Project}(\xi_{lb}, \xi_{ub}) \tag{8}$$

5: Update ${}^{k+1}\mathbf{p} = \textit{ForwardRoll}({}^{k+1}\xi^{*})$

6: Update ${}^{k+1}\Delta\mathbf{p} = \mathbf{p} - {}^{k+1}\mathbf{p}$.
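The loop of Algorithm 1 can be sketched on a toy problem where the cost is $f(\xi, \mathbf{p}) = \tfrac{1}{2}\lVert \xi - \mathbf{p} \rVert^{2}$, so that *ForwardRoll* is the identity and the Jacobian $\nabla_{\mathbf{p}}\xi^{*}$ from Argmin differentiation is the identity matrix. The discrete step-size grid `etas`, the tolerance, and all function names are our illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def f(xi, p):
    # Toy cost: f(xi, p) = 0.5 * ||xi - p||^2, minimized at xi = p.
    return 0.5 * np.sum((xi - p) ** 2)

def adapt(xi_star, p_prior, p_new, xi_lb, xi_ub, etas=(1.0, 0.5, 0.25, 0.1)):
    dxi_dp = np.eye(len(xi_star))  # argmin-differentiation Jacobian (identity for this toy f)
    dp = p_new - p_prior           # task perturbation, cf. Delta p
    while np.linalg.norm(dp) > 1e-8:
        # Line search: largest eta whose trial update does not increase the cost (cf. (6a)-(6b)).
        eta = next((e for e in etas
                    if f(xi_star + e * dxi_dp @ dp, p_new) <= f(xi_star, p_new)), 0.0)
        if eta == 0.0:
            break                                   # while-loop guard: eta > 0
        xi_star = xi_star + eta * dxi_dp @ dp       # first-order update, cf. (7)
        xi_star = np.clip(xi_star, xi_lb, xi_ub)    # projection onto bounds, cf. (8)
        p_prior = xi_star                           # ForwardRoll is the identity here
        dp = p_new - p_prior
    return xi_star

result = adapt(np.zeros(2), p_prior=np.zeros(2), p_new=np.array([0.4, -0.3]),
               xi_lb=-np.ones(2), xi_ub=np.ones(2))
```

For this quadratic toy problem a single full step ($\eta = 1$) already reaches the perturbed optimum; for the trajectory costs in the paper, several iterations with smaller accepted steps are generally needed.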

