Computing event sensitivity#

In this example, we will show how heyoka.py’s event detection system can be used to compute the first-order sensitivity of an event’s trigger time. For the sake of simplicity, we will consider here a dynamical system with a single parameter, and we will compute only the sensitivity with respect to that parameter. The approach can easily be generalised to the computation of the sensitivities with respect to multiple parameters and/or the initial conditions.

In order to illustrate the method, we will be focusing on an analytically-solvable system of ODEs, so that we will be able to determine an explicit expression for the sensitivity which we will then use to validate the numerical computation.

The analytical approach#

We consider the ODE system for the harmonic oscillator:

\[\begin{split} \begin{cases} x^\prime &= v \\ v^\prime &= -kx \end{cases}, \end{split}\]

where \(k>0\) is the spring constant. We fix the following initial conditions:

\[\begin{split} \begin{cases} x\left( 0 \right) &= 0 \\ v\left( 0 \right) &= 1 \end{cases}. \end{split}\]

The analytical solution for this simple initial-value problem is:

\[\begin{split} \begin{cases} x\left( t \right) &= \frac{1}{\sqrt{k}}\sin\left(\sqrt{k}t\right) \\ v\left( t \right) &= \cos\left(\sqrt{k}t\right) \end{cases}. \end{split}\]
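As a quick check (a minimal sketch assuming SymPy, which is not otherwise used in this example, is available), we can confirm symbolically that this candidate solution satisfies both the ODEs and the initial conditions:

```python
import sympy as sp

t, k = sp.symbols("t k", positive=True)

# The candidate analytical solution.
x = sp.sin(sp.sqrt(k) * t) / sp.sqrt(k)
v = sp.cos(sp.sqrt(k) * t)

# It must satisfy x' = v and v' = -k*x ...
print(sp.simplify(sp.diff(x, t) - v))      # 0
print(sp.simplify(sp.diff(v, t) + k * x))  # 0

# ... and the initial conditions x(0) = 0, v(0) = 1.
print(x.subs(t, 0), v.subs(t, 0))          # 0 1
```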

Let us now suppose that we want to detect when the oscillation reaches the maximum amplitude. We can do so by defining the simple event equation

\[ g\left( x, v \right) \equiv v = 0. \]

That is, the maximum amplitude in the harmonic oscillator is reached when the velocity is zero. We can substitute the solution \(v\left( t \right)\) into the event equation, yielding the time-dependent equation

\[ \cos\left( \sqrt{k} t \right) = 0. \]

Solving this equation for \(t\) gives us an analytical expression for the trigger time of the event, i.e., the time of maximum amplitude \(t_M\):

\[ t_M = \frac{\pi}{2\sqrt{k}}. \]

The first-order sensitivity of \(t_M\) with respect to \(k\) is easily computed:

\[ \frac{d t_M}{d k} = -\frac{\pi}{4k^\frac{3}{2}}. \]
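As a sanity check on this formula, we can compare it to a central finite difference of \(t_M\) (a minimal sketch; the value \(k=0.456\) matches the one used in the numerical section below):

```python
import numpy as np

# Trigger time from the analytical solution.
def t_M(k):
    return np.pi / (2 * np.sqrt(k))

k = 0.456

# Central finite difference of t_M with respect to k.
h = 1e-6
fd = (t_M(k + h) - t_M(k - h)) / (2 * h)

# Analytical first-order sensitivity.
an = -np.pi / (4 * k**1.5)

print(fd, an)
```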

The numerical approach#

If we do not have the analytical solution of the ODE system, it is not possible to compute an explicit expression for the event equation as a function of \(t\) and \(k\), as we did above. By extension, we cannot compute an explicit expression for \(t_M\) and its sensitivity either. We can, however, compute the sensitivity numerically with the help of the variational equations.

In the absence of an analytical solution, the left-hand side of the event equation can be seen as an unknown function of \(t\) and \(k\):

\[ g\left(t, k\right) = 0. \]

We cannot write an explicit expression for \(t_M\) using this equation, but via the formula for the derivative of an implicit function, we can write the sensitivity \(d t_M / d k\) as

\[ \frac{d t_M}{d k} = -\frac{\frac{\partial g}{\partial k}}{\frac{\partial g}{\partial t_M}}. \]
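For this particular system we actually know the explicit event equation \(g\left(t, k\right) = \cos\left(\sqrt{k}t\right)\), so we can verify the implicit-function formula symbolically (a quick check, again assuming SymPy is available):

```python
import sympy as sp

t, k = sp.symbols("t k", positive=True)

# The explicit event equation for the harmonic oscillator.
g = sp.cos(sp.sqrt(k) * t)

# Sensitivity via the implicit-function formula.
sens = -sp.diff(g, k) / sp.diff(g, t)

# Evaluate at the trigger time t_M = pi/(2*sqrt(k)) and compare
# to the analytical result -pi/(4*k**(3/2)).
sens_at_tM = sp.simplify(sens.subs(t, sp.pi / (2 * sp.sqrt(k))))
expected = -sp.pi / (4 * k**sp.Rational(3, 2))
print(sp.simplify(sens_at_tM - expected))  # 0
```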

\(\frac{\partial g}{\partial t_M}\) is the time derivative of \(g\) calculated at the trigger time \(t_M\), which can be computed directly from the original definition of the event equation in terms of the state variables:

\[ \frac{\partial g}{\partial t_M} = -kx\left(t_M\right). \]

In order to compute \(\frac{\partial g}{\partial k}\), we need to augment the original ODE system with the variational equations for \(x\), \(v\) and \(g\) with respect to \(k\):

\[\begin{split} \begin{cases} x^\prime & = v \\ v^\prime & = -kx \\ \left( \frac{\partial x}{\partial k} \right)^\prime & = \frac{\partial v}{\partial k} \\ \left( \frac{\partial v}{\partial k} \right)^\prime & = -x-k\frac{\partial x}{\partial k} \\ \left( \frac{\partial g}{\partial k} \right)^\prime & = -x-k\frac{\partial x}{\partial k} \end{cases}. \end{split}\]

We can now proceed to the definition of the heyoka.py integrator:

import heyoka as hy
import numpy as np

# The dynamical variables (including the
# variational variables).
x, v, x_k, v_k, g_k = hy.make_vars("x", "v", "x_k", "v_k", "g_k")

# The spring constant.
k = hy.par[0]

# The ODEs.
x_t = v
v_t = -k*x
x_k_t = v_k
v_k_t = -x-k*x_k
g_k_t = -x-k*x_k

# The initial conditions.
ic = [0., 1., 0., 0., 0.]

# Event to detect the maximum amplitude.
ev = hy.t_event(v)

# Definition of the integrator.
ta = hy.taylor_adaptive([(x, x_t),
                         (v, v_t),
                         (x_k, x_k_t),
                         (v_k, v_k_t),
                         (g_k, g_k_t)],
                        ic, t_events = [ev])

# Pick a concrete value for the spring constant.
ta.pars[0] = .456

Let us propagate up to a large time coordinate. The integration will in any case stop almost immediately, because the event triggers as soon as the maximum amplitude is reached:

ta.propagate_until(1e9)
(<taylor_outcome.terminal_event_0: -1>,
 1.2823207799363494,
 1.2823207799363494,
 2,
 None,
 None)

Recall that, analytically, we expect the sensitivity value to be \(-\frac{\pi}{4k^\frac{3}{2}}\), i.e.,

-np.pi/(4*ta.pars[0]**(3./2))
-2.550601538829664

From the numerical integration, we can compute the sensitivity value, as explained above, as \(-\frac{\frac{\partial g}{\partial k}}{\frac{\partial g}{\partial t_M}}\). The value of \(\frac{\partial g}{\partial k}\) can be read directly from the state vector (at index 4), while \(\frac{\partial g}{\partial t_M} = -kx\left(t_M\right)\):

-ta.state[4]/(-ta.pars[0]*ta.state[0])
-2.5506015388296643

Indeed, the numerical value matches the analytical result to machine precision.

Application to optimisation problems#

Suppose that we want to determine what value the spring constant \(k\) must assume in order for the maximum amplitude of the oscillator to be \(A\) (a fixed constant). We can formulate this problem as the minimisation of the function

\[ f\left( k \right) = \left[ x\left(t_M\left(k\right), k\right) - A\right]^2, \]

where \(x\left(t_M\left(k\right), k\right)\) is the value assumed by the coordinate \(x\) at the event trigger time \(t_M\). Local optimisation algorithms can greatly benefit from the availability of the gradient of \(f\) with respect to the optimisation variable \(k\). For the harmonic oscillator system considered here, \(df/dk\) is easily computed analytically as

\[ \frac{df}{dk} = -\left(\frac{1}{\sqrt{k}} -A \right)k^{-\frac{3}{2}}. \]
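Before moving to the variational approach, we can sanity-check this expression against a central finite difference of the analytical objective (reusing the values \(A = 10.123\) and \(k = 7.23\) employed below):

```python
import numpy as np

A = 10.123

# Analytical objective: x(t_M) = 1/sqrt(k), hence f(k) = (1/sqrt(k) - A)**2.
def f(k):
    return (1 / np.sqrt(k) - A) ** 2

k = 7.23

# Central finite difference of f with respect to k.
h = 1e-6
fd = (f(k + h) - f(k - h)) / (2 * h)

# Analytical derivative.
an = -(1 / np.sqrt(k) - A) * k ** (-1.5)

print(fd, an)
```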

If we assume that an analytical solution is not available (as is generally the case), we can compute \(df/dk\) numerically with the help of the variational equations. Specifically, we can write:

\[ \frac{df}{dk} = 2 \left[ x\left(t_M\left(k\right), k\right) - A\right]\left( \frac{\partial x}{\partial t_M}\frac{\partial t_M}{\partial k} + \frac{\partial x}{\partial k} \right), \]

where:

  • \(\partial x/\partial t_M\) is the time derivative of \(x\) (i.e., the velocity \(v\)) at the trigger time \(t_M\),

  • \(\partial t_M/\partial k\) is the sensitivity of \(t_M\) (as computed earlier),

  • \(\partial x/\partial k\) is the sensitivity of \(x\) with respect to \(k\) (which appears as an extra state variable in the augmented ODE system defined earlier).

Let us now define the objective function and implement its gradient using the numerical solution of the augmented ODE:

# Pick a concrete value for the A constant.
A = 10.123

# Objective function.
def fun(x):
    # Reset the integrator state.
    ta.reset_cooldowns()
    ta.time = 0
    ta.state[:] = ic
    ta.pars[0] = x[0]
    
    # Propagate until the event triggers.
    oc, _, _, _, _, _ = ta.propagate_until(1e9)
    if int(oc) != -1:
        raise RuntimeError("the event did not trigger")
    
    return (ta.state[0] - A)**2

# Derivative of 'fun' with respect to k.
def jac(x):
    # Reset the integrator state.
    ta.reset_cooldowns()
    ta.time = 0
    ta.state[:] = ic
    ta.pars[0] = x[0]
    
    # Propagate until the event triggers.
    oc, _, _, _, _, _ = ta.propagate_until(1e9)
    if int(oc) != -1:
        raise RuntimeError("the event did not trigger")

    # Compute the sensitivity of t_M.
    tM_k = -ta.state[4]/(-ta.pars[0]*ta.state[0])

    return [2*(ta.state[0] - A)*(ta.state[1]*tM_k + ta.state[2])]

Let us now compute \(df/dk\) via the analytical formula for a specific value of \(k\):

# Pick a value for k.
k_val = 7.23

-(1/np.sqrt(k_val)-A)*k_val**(-3./2)
0.5015866697490922

And here’s the value computed via the numerical integration of the variational equations:

jac([k_val])
[0.5015866697490922]

Indeed, the two values agree to machine precision.
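To close the loop, here is a sketch of how fun() and jac() could be plugged into a local optimiser. Since the heyoka.py integrator is not available in this self-contained snippet, we use the equivalent analytical objective and gradient as stand-ins (this assumes SciPy is available; in practice one would pass fun and jac directly to scipy.optimize.minimize):

```python
import numpy as np
from scipy.optimize import minimize

A = 10.123

# Analytical stand-ins for fun() and jac(): x(t_M) = 1/sqrt(k),
# hence f(k) = (1/sqrt(k) - A)**2.
def fun_an(x):
    return (1 / np.sqrt(x[0]) - A) ** 2

def jac_an(x):
    return np.array([-(1 / np.sqrt(x[0]) - A) * x[0] ** (-1.5)])

# Minimise f with a gradient-based method, keeping k strictly positive.
res = minimize(fun_an, x0=[0.05], jac=jac_an, method="L-BFGS-B",
               bounds=[(1e-8, None)])

# The analytical minimum is at k = 1/A**2, where the maximum
# amplitude 1/sqrt(k) equals A.
print(res.x[0], 1 / A**2)
```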