- What is the Inverse Problem?
- A theoretical framework for solving PDE-constrained optimization (PDECO)
- The eikonal equation

Solving a physical problem requires simulation

\mathcal{S}(x ; \theta) = 0

Given parameters, \(\theta\), what is a configuration, \(x\), that satisfies the above equation?

Examples of \(\theta\):

- Surface shape for FEM
- Initial conditions for the simulation
- Resistance parameter for traveling waves

The configuration could refer to the temperature field at a particular time, or a position at a given time.

"I want to find the parameters, \(\theta\), that will admit the configuration, \(x\), that I observed/desire"

\underset{\theta}{\text{min}} {\displaystyle \int} \left(x - x_{observed} \right)^2

s.t. ~~\mathcal{S}(x; \theta) = 0

Examples include:

- Shape Optimization
  - Stress
  - Fluid Control
  - etc.
- Seismic Imaging
  - Wave modeling
  - Traveltime modeling

Each evaluation of the loss function requires a full simulation. To calculate a gradient, finite differences would require one additional simulation per parameter.
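To make the cost concrete, here is a minimal sketch of a forward finite-difference gradient; `simulate` and `loss` are hypothetical stand-ins for the PDE solver and the cost functional, not part of any particular library:

```python
import numpy as np

def fd_gradient(loss, simulate, theta, eps=1e-6):
    """Forward finite-difference gradient: one extra simulation per parameter.

    `simulate` (solves S(x; theta) = 0 for x) and `loss` (the cost f(x))
    are hypothetical stand-ins.
    """
    base = loss(simulate(theta))
    grad = np.zeros_like(theta)
    for i in range(theta.size):  # n parameters => n extra simulations
        perturbed = theta.copy()
        perturbed[i] += eps
        grad[i] = (loss(simulate(perturbed)) - base) / eps
    return grad

# Hypothetical usage with a stand-in "simulation":
theta = np.array([0.5, 2.0, -1.0])
simulate = lambda th: th ** 2                # pretend PDE solve
loss = lambda x: np.sum((x - 1.0) ** 2)
print(fd_gradient(loss, simulate, theta))    # 3 extra simulations for 3 params
```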

PDEs are described in terms of function spaces. Consequently, the inverse problem also lives in function spaces rather than discrete vector spaces.

The optimization will be affected by our choice of discretization.

\lVert \nabla \Phi \rVert = \eta

There's no one-size-fits-all solution!

\frac{\partial \mathbf{u}}{\partial t} + (\mathbf{u} \cdot \nabla) \mathbf{u} - \nu \,\nabla^2 \mathbf{u} = - \nabla w + \mathbf{g}

\frac{\partial u}{\partial t} = \frac{\partial^2 u}{\partial x^2}
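As a concrete example of a forward simulation, here is a minimal sketch for the 1D heat equation above; the explicit Euler scheme, grid, time step, and Dirichlet boundary conditions are all assumptions of this sketch, not the only way to discretize it:

```python
import numpy as np

def heat_forward(u0, dx, dt, n_steps):
    """Explicit Euler for du/dt = d^2u/dx^2 with fixed (Dirichlet) endpoints.

    Stable only if dt <= dx**2 / 2; all discretization choices here
    are illustrative.
    """
    u = u0.copy()
    for _ in range(n_steps):
        lap = (u[:-2] - 2 * u[1:-1] + u[2:]) / dx**2  # interior Laplacian
        u[1:-1] += dt * lap                           # endpoints stay fixed
    return u

# Usage: diffuse a spike on the unit interval.
x = np.linspace(0.0, 1.0, 101)
u0 = np.exp(-200 * (x - 0.5) ** 2)
u_final = heat_forward(u0, dx=x[1] - x[0], dt=4e-5, n_steps=1000)
```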

\underset{\theta}{\text{min}}~ f(x)

s.t. ~~\mathcal{S}(x; \theta) = 0

We need \(d_{\theta}f\) in order to do any gradient-based optimization.

\underset{\theta}{\text{min}}~ f(x(\theta))

s.t. ~~\mathcal{S}(x(\theta); \theta) = 0

d_\theta f(x(\theta)) = d_x f \cdot d_\theta x

Differentiating the constraint, which holds identically along \(x(\theta)\), and applying the chain rule:

d_\theta \mathcal{S}(x(\theta); \theta) = d_x \mathcal{S} \cdot d_\theta x + d_\theta \mathcal{S} = 0

d_\theta x = - (d_x \mathcal{S})^{-1} \cdot d_\theta \mathcal{S}

Here \(d_\theta \mathcal{S}\) on the right denotes the partial derivative through the explicit \(\theta\) dependence.

d_\theta f(x(\theta)) = - d_x f \cdot (d_x \mathcal{S})^{-1} \cdot d_\theta \mathcal{S}

Computing \(d_\theta x\) explicitly would require one linear solve per parameter. Grouping the factors the other way requires only a single solve:

d_\theta f(x(\theta)) = \underbrace{- d_x f \cdot (d_x \mathcal{S})^{-1}}_{\lambda^T} \cdot d_\theta \mathcal{S}

Here \(d_x \mathcal{S}\) and \(d_\theta \mathcal{S}\) are derivatives of the dynamics (application specific), while \(d_x f\) is the derivative of the cost functional (usually closed form). The vector \(\lambda\) solves

(d_x \mathcal{S})^T \lambda = -(d_x f)^T

\(\lambda\) is known as the adjoint state

\(\lambda\) is the same dimension as the configuration, \(x\).

Solving for the adjoint state is no harder than running the forward simulation; it is often a linear solve
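A minimal discrete sketch; everything here (the linear dynamics \(A(\theta)x = b\), the least-squares misfit, the sizes) is an illustrative assumption, not a prescribed setup. The adjoint gradient agrees with finite differences:

```python
import numpy as np

# Toy problem: S(x; theta) = A(theta) x - b = 0,  f(x) = 0.5 ||x - x_obs||^2,
# with A(theta) = A0 + theta[0] * A1 + theta[1] * A2.
rng = np.random.default_rng(0)
n = 20
A0 = 5.0 * np.eye(n)
A1, A2 = rng.standard_normal((n, n)), rng.standard_normal((n, n))
b, x_obs = rng.standard_normal(n), rng.standard_normal(n)

def solve_forward(theta):
    return np.linalg.solve(A0 + theta[0] * A1 + theta[1] * A2, b)

def misfit(theta):
    return 0.5 * np.sum((solve_forward(theta) - x_obs) ** 2)

def adjoint_gradient(theta):
    A = A0 + theta[0] * A1 + theta[1] * A2
    x = np.linalg.solve(A, b)                 # forward simulation
    lam = np.linalg.solve(A.T, -(x - x_obs))  # adjoint: (d_x S)^T lam = -(d_x f)^T
    # d_theta f = lam^T d_theta S, where d_theta S = (A1 @ x, A2 @ x)
    return np.array([lam @ (A1 @ x), lam @ (A2 @ x)])

theta = np.array([0.1, -0.2])
grad = adjoint_gradient(theta)

# Sanity check against central finite differences.
eps = 1e-6
for i in range(theta.size):
    tp, tm = theta.copy(), theta.copy()
    tp[i] += eps
    tm[i] -= eps
    fd = (misfit(tp) - misfit(tm)) / (2.0 * eps)
    assert np.isclose(fd, grad[i], rtol=1e-4)
```

Note that the gradient needs only two linear solves total (one forward, one adjoint), regardless of the number of parameters.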

The adjoint state can be derived from Lagrange multipliers.

\underset{\theta}{\text{min}}~ f(x)

s.t. ~~\mathcal{S}(x; \theta) = 0

\underset{\theta}{\text{min}}~ f(x^*) + {\lambda^*}^T \mathcal{S}(x^*; \theta)

where \(x^*\) and \(\lambda^*\) are critical points.
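Concretely, writing the (discrete) Lagrangian and its stationarity conditions reproduces every equation above:

\mathcal{L}(x, \lambda; \theta) = f(x) + \lambda^T \mathcal{S}(x; \theta)

\partial_\lambda \mathcal{L} = 0 \;\Rightarrow\; \mathcal{S}(x; \theta) = 0 \quad \text{(forward problem)}

\partial_x \mathcal{L} = 0 \;\Rightarrow\; (d_x \mathcal{S})^T \lambda = -(d_x f)^T \quad \text{(adjoint problem)}

d_\theta \mathcal{L} = \lambda^T \cdot d_\theta \mathcal{S} \quad \text{(the gradient)}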

One can view the adjoint state method as an extension of discrete Lagrange multipliers to functionals, where \(\lambda\) is a function instead of a coefficient.

In the PDECO literature, the KKT conditions are referenced a lot in the context of functional analysis.

Given images of a volume, we want to invert the image formation model to recover the volume. Our equation of interest is the eikonal equation:

\lVert \nabla \Phi \rVert = \eta
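As a heavily simplified illustration of the inversion loop, here is a 1D toy where the eikonal forward solve reduces to a cumulative sum; a real solver would use fast marching or fast sweeping, and all constants here are assumptions:

```python
import numpy as np

# 1D toy: |dPhi/dx| = eta with Phi(0) = 0 and Phi increasing,
# so the forward solve is just a cumulative sum of the slowness eta.
dx, n = 0.01, 100
eta_true = 1.0 + 0.5 * np.sin(np.linspace(0.0, 2.0 * np.pi, n))
phi_obs = np.cumsum(eta_true) * dx            # "observed" traveltimes

eta = np.ones(n)                              # initialization
for _ in range(500):
    phi = np.cumsum(eta) * dx                 # forward eikonal solve
    residual = phi - phi_obs                  # (d_x f)^T
    # The adjoint of a cumulative sum is a reversed cumulative sum.
    grad = np.cumsum(residual[::-1])[::-1] * dx
    eta -= 2.0 * grad                         # plain gradient descent
```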

[Figure: ground truth vs. initialization, with the corresponding adjoint field]

[Figure: optimization progress, from ground truth and initialization through intermediate steps to the final result]

Out-of-the-box optimization gives good results!

- PDE Constrained Optimization is Lagrange Multipliers in disguise
- The broad theory for the adjoint state method can be used to tackle inverse problems
- Generally, the adjoint is faster to solve than the forward simulation itself

- Nice explanation of the derivation:
- Presentation on Céa's method for shape optimization: