Exploring the GRIN Lens Design Space

\frac{\sin{\theta_2}}{\sin{\theta_1}} = \frac{\eta_1}{\eta_2}

https://www.thorlabs.com

\eta = \sqrt{2 - \left(\frac{r}{R}\right)^2}

GRIN lenses have many more degrees of freedom than traditional lenses.

What sort of effects can we achieve in this space? What are the limitations?

What can GRIN lenses do that traditional refractive lenses cannot? And vice versa?

Pretty ill-defined question, so instead...

We know some analytic profiles (Luneburg, Maxwell fisheye, etc.), but what about other objectives?

- Forward Tracing
- Inverse Problem
- Solve via Autodiff
- Adjoint Method
- Experiments

\begin{aligned}
\frac{dx}{ds} &= v \\
\frac{dv}{ds} &= \eta(x) \nabla \eta (x)
\end{aligned}

\begin{aligned}
\dot{x} &= v \\
\dot{v} &= \eta(x) \nabla \eta (x)
\end{aligned}

\begin{aligned}
x_i &= v_i \Delta s + x_{i-1} \\
v_i &= \eta(x_{i-1}) \nabla \eta (x_{i-1}) \Delta s + v_{i-1}
\end{aligned}
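The update order matters: \(v_i\) is computed first from the field at \(x_{i-1}\), then \(x_i\) from the new \(v_i\) (symplectic Euler). A minimal 2-D sketch in plain Python, assuming an illustrative Luneburg-style profile and finite-difference gradients:

```python
import math

def eta(x, y):
    # Luneburg-style profile inside the unit disk (illustrative choice)
    r2 = x * x + y * y
    return math.sqrt(2.0 - r2) if r2 < 1.0 else 1.0

def grad_eta(x, y, h=1e-6):
    # central finite differences for the index gradient
    return ((eta(x + h, y) - eta(x - h, y)) / (2 * h),
            (eta(x, y + h) - eta(x, y - h)) / (2 * h))

def step(x, y, vx, vy, ds):
    # symplectic Euler: v_i from the field at x_{i-1}, then x_i from v_i
    n, (gx, gy) = eta(x, y), grad_eta(x, y)
    vx, vy = vx + n * gx * ds, vy + n * gy * ds
    return x + vx * ds, y + vy * ds, vx, vy

# trace one off-axis ray entering the unit disk from the left
state = (-1.0, 0.3, 1.0, 0.0)
for _ in range(2000):
    state = step(*state, ds=0.001)
```

Since the index increases toward the center, the ray should bend toward the axis as it traverses the lens.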

\begin{aligned}
\dot{x} &= v \\
\dot{v} &= \eta(x) \nabla \eta (x)
\end{aligned}

Given the desired inputs, what refractive field generates the desired output?

\begin{bmatrix}
x_o \\
v_o \\
\lambda
\end{bmatrix}

\begin{bmatrix}
x_f \\
v_f \\
\lambda
\end{bmatrix}

\eta(x)

(We will ignore wavelength)

\underset{\eta}{\min} ~~ \mathcal{C} \left(x_f, v_f \right)

s.t. \\
\dot{x} = v \\
\dot{v} = \eta \nabla \eta

The cost objective can be anything as long as it's a function of \(x\) and \(v\).

[Pipeline diagram: initial conditions \(x_o, v_o\) and refractive field \(\eta\) feed the eikonal tracer; its output enters the cost objective, and backprop returns the gradient.]

\mathcal{C}(x_f, v_f) = \underset{c\in \text{sensors}}{\sum} \lVert \hat{I}_c - I_c(x_f, v_f) \rVert^2

An example is an image formation model, where each ray carries constant irradiance

The goal is to form multiple images depending on the incoming ray direction.
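A toy version of that image-formation cost, with a hypothetical 1-D sensor of 8 pixels and each ray carrying unit irradiance:

```python
def render(rays, n_pix=8):
    # splat each ray's constant irradiance into the pixel its exit
    # position x_f falls in (hypothetical 1-D sensor on [0, 1))
    image = [0.0] * n_pix
    for x_f, e in rays:
        if 0.0 <= x_f < 1.0:
            image[int(x_f * n_pix)] += e
    return image

def cost(image, target):
    # squared error against the target image, summed over pixels
    return sum((a - b) ** 2 for a, b in zip(image, target))

img = render([(0.05, 1.0), (0.10, 1.0), (0.60, 1.0)])
```

In the differentiable setting the splat would be smoothed (e.g., bilinear weights) so gradients can flow to \(x_f\); the hard binning here is only for illustration.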

[Figure: collimated sources traced through \(\eta\) onto sensors; ground truth compared against iterations 0, 20, and 300.]

- 128x128 pixel target images
- 10 samples per pixel
- 128^3 resolution cube
- **~5s** per iteration (forward trace + backprop)
- **~12GB** GPU memory usage

PyTorch Implementation:

\begin{bmatrix}
x_o \\
v_o \\
\eta
\end{bmatrix}

\begin{bmatrix}
x_f \\
v_f
\end{bmatrix}

Every integration step is a "layer" in the graph


Each time step in the simulation adds to the computation graph.

Halving the step size effectively doubles the computation graph!

Too large a step size, on the other hand, means the volume itself will be undersampled.
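The scaling is easy to see with a stand-in for the autodiff tape (pure Python; the straight-line step function is just for counting retained states):

```python
def trace_unrolled(x, v, length, ds, step_fn):
    # unrolled integration loop: every intermediate state is retained,
    # just as autodiff retains each step ("layer") for backprop
    saved = [(x, v)]
    for _ in range(int(round(length / ds))):
        x, v = step_fn(x, v, ds)
        saved.append((x, v))
    return saved

# trivial straight-line step, used only to count states
step = lambda x, v, ds: (x + v * ds, v)
tape = trace_unrolled(0.0, 1.0, 1.0, 0.01, step)
half = trace_unrolled(0.0, 1.0, 1.0, 0.005, step)
```

Halving `ds` doubles the number of stored states, i.e., doubles the tape.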

A method for computing the derivative of interest via an auxiliary set of ordinary differential equations

\begin{gathered}
\dot{\mu} = -\lambda \\
\dot{\lambda} = \left(\nabla \eta \nabla \eta^T + \eta \nabla \nabla \eta \right) \mu
\end{gathered}

\dot{x} = v \\
\dot{v} = \eta \nabla \eta

\frac{d\mathcal{C}}{d\eta} = \left(\eta \nabla d\eta + d\eta \nabla \eta \right) \mu

Calculating the adjoint state \(\mu\) gives us the value needed for the derivative of interest

\underset{\eta}{\min} ~~ \mathcal{C} \left(x_f, v_f \right)

s.t. \\
\dot{x} = v \\
\dot{v} = \eta \nabla \eta

With this value, we can use a gradient-based optimizer like Adam, SGD, or BFGS.
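Once \(d\mathcal{C}/d\eta\) is available, the outer loop is ordinary gradient descent. A minimal sketch, where `grad_fn` stands in for a hypothetical adjoint-gradient routine (the quadratic example below is only a sanity check, not the optical cost):

```python
def optimize(eta0, grad_fn, lr=0.1, iters=50):
    # plain gradient descent on a voxelized index field;
    # grad_fn is a hypothetical callable returning dC/d(eta)
    eta = list(eta0)
    for _ in range(iters):
        eta = [e - lr * g for e, g in zip(eta, grad_fn(eta))]
    return eta

# sanity check with C = sum((eta_i - 1.5)^2), whose gradient is known
eta_opt = optimize([1.0, 2.0], lambda e: [2 * (ei - 1.5) for ei in e])
```

In practice one would hand the adjoint-computed gradient to Adam or BFGS instead of this fixed-step update.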

\begin{gathered}
\dot{\mu} = -\lambda \\
\dot{\lambda} = \left(\nabla \eta \nabla \eta^T + \eta \nabla \nabla \eta \right) \mu
\end{gathered}

The adjoint state \(\mu\) can be thought of as the error vector at the boundary being parallel-transported back through the volume.

We still need the path for the adjoint state, since it's defined on the path.

Naively, this means that we have to save the whole path, which has the same problem as before.

The integration scheme is symplectic, so the optical paths of the rays are reversible.

\begin{aligned}
x_i &= v_i \Delta s + x_{i-1} \\
v_i &= \eta(x_{i-1}) \nabla \eta (x_{i-1}) \Delta s + v_{i-1}
\end{aligned}

\begin{aligned}
x_{i-1} &= x_{i} - v_i \Delta s \\
v_{i-1} &= v_{i} - \eta(x_{i-1}) \nabla \eta (x_{i-1}) \Delta s
\end{aligned}
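Because each forward step can be undone, the adjoint pass can regenerate the path on the fly instead of storing it. A 1-D sketch in plain Python, assuming a smooth illustrative index profile:

```python
import math

def eta(x):
    # smooth 1-D index bump, assumed for illustration
    return 1.0 + 0.1 * math.exp(-x * x)

def d_eta(x, h=1e-6):
    # central finite difference of the index
    return (eta(x + h) - eta(x - h)) / (2 * h)

def forward(x, v, ds):
    # symplectic Euler: v first (from x_{i-1}), then x from the new v
    v = v + eta(x) * d_eta(x) * ds
    x = x + v * ds
    return x, v

def backward(x, v, ds):
    # inverts the forward step (up to rounding): undo x first,
    # then undo v at the recovered x_{i-1}
    x = x - v * ds
    v = v - eta(x) * d_eta(x) * ds
    return x, v

x, v = -2.0, 1.0
for _ in range(100):
    x, v = forward(x, v, 0.01)
for _ in range(100):
    x, v = backward(x, v, 0.01)
```

Stepping backward recovers the initial state to floating-point precision, so only the final state needs to be kept.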

[Figure: ground truth compared against the autodiff and adjoint reconstructions.]

AD Implementation: **~5s** per iteration, **~18GB** GPU memory

- 128x128 pixel target images
- 10 samples per pixel
- 128^3 resolution cube

Adjoint Implementation: **~1s** per iteration, **~2GB** GPU memory

AD can also achieve these results; it just takes much longer and uses more memory.

\begin{bmatrix}
x_o \\
v_o \\
\eta
\end{bmatrix}

\begin{bmatrix}
x_f \\
v_f
\end{bmatrix}

Adjoint Tracer

[Figure: collimated source traced through \(\eta\) onto near-field and far-field sensors; optimization result compared against ground truth for the near and far images.]

Same near-far setup, but using a geometric cost objective instead.

\mathcal{C}(x, v) = \lVert \text{SDF}_x(x_f) \rVert^2 + \lVert \text{SDF}_v(v_f) \rVert^2

where the signed distance fields \(\text{SDF}_x\) and \(\text{SDF}_v\) are computed from the desired image
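For a circular target the signed distance field has a closed form, which makes the position term of this cost easy to sketch (illustrative helper names, velocity term omitted for brevity):

```python
import math

def sdf_circle(px, py, cx=0.0, cy=0.0, r=1.0):
    # signed distance to a circle: negative inside, zero on the curve
    return math.hypot(px - cx, py - cy) - r

def geometric_cost(hits):
    # squared distance of each ray's terminal position to the target curve
    return sum(sdf_circle(x, y) ** 2 for x, y in hits)

# rays landing exactly on the unit circle incur zero cost
on_circle = geometric_cost([(1.0, 0.0), (0.0, -1.0)])
```

Any ray terminating on the zero level set contributes nothing to the loss, which is exactly why the energy distribution along the curve is unconstrained.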

[Figure: ground truth compared against the optimized volume, with near-field and far-field images.]

The energy distribution isn't uniform, since uniformity isn't a constraint in the optimization.

As long as the ray reaches the circle, the loss will go to zero.

The Luneburg lens can be defined by a geometric property rather than an index profile.

\mathcal{C}(x, v) = \underset{i \in \mathcal{S}^2}{\int} \lVert \mathcal{T}(x_o) - p_i \rVert^2

It works! We can match the actual Luneburg profile with just the geometric description.

- Can't handle discrete interfaces
- Doesn't account for real-world fabrication constraints
- Extensions include:
  - polarization
  - wavelength dependence

What sort of things would be cool to see?

What other experiments should I try?