Automatic Tracking of Measurability

Introduction

The automatic tracking of measurability has been part of finmath-lib since its earliest version (2004). It adds a method to random variables to inspect their measurability. Its concept and implementation are similar to forward mode automatic differentiation, in the sense that each operator is augmented with a corresponding operation on the measurability, and certain random variables (constants, Brownian increments) are initialized with specific values.

Definition

For a random variable X we define a map T : X ↦ T(X) ∈ [-∞, ∞) such that for t ≥ T(X) it is guaranteed that X is Ft-measurable. It is not guaranteed that T(X) is the smallest such number, although in most cases the implementation can provide the smallest time.
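Stated compactly in LaTeX notation (restating only the guarantee above, where \mathcal{F}_t denotes Ft):

    T \colon X \mapsto T(X) \in [-\infty, \infty), \qquad t \ge T(X) \;\Longrightarrow\; X \text{ is } \mathcal{F}_t\text{-measurable}.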

Applications

Originally the concept was introduced as a safeguard in least-squares estimates of the conditional expectation (American Monte-Carlo), to ensure that the regression basis functions are Ft-measurable if used in an Ft-conditional expectation. However, the concept allows for important optimizations in the context of stochastic automatic differentiation: it may be used to detect cases where the computationally expensive conditional expectation operator can be avoided. We have:

E(X | Ft) = X if t ≥ T(X).
See https://ssrn.com/abstract=3000822 for the interaction with an AD/AAD algorithm.
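A minimal sketch of this optimization, assuming the filtration time is tracked as described below (the Estimator interface and the class shown here are illustrative placeholders, not the finmath-lib API):

import net.finmath.stochastic.RandomVariableInterface;

public class ConditionalExpectationShortcut {

    // Illustrative placeholder for a (computationally expensive) regression-based estimator.
    public interface Estimator {
        RandomVariableInterface getConditionalExpectation(RandomVariableInterface randomVariable);
    }

    // Returns E(X | Ft) for t = time: if t >= T(X), then X is already Ft-measurable,
    // hence E(X | Ft) = X and the expensive estimator can be skipped.
    public static RandomVariableInterface conditionalExpectation(
            RandomVariableInterface randomVariable, double time, Estimator estimator) {
        if(randomVariable.getFiltrationTime() <= time) return randomVariable;

        return estimator.getConditionalExpectation(randomVariable);
    }
}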

Implementation

The random variable interface provides a method getFiltrationTime() implementing T(X): for a random variable X represented by the object X and t calculated by t = X.getFiltrationTime(), it is guaranteed that X is Ft-measurable (note again: it is not guaranteed that t is the smallest such number, although in most cases the implementation can provide the smallest time).

The implementation is similar to a forward mode automatic differentiation, where the operator on random variables is augmented by additional operations on the filtration time:

Let T(X) denote the filtration time of X, i.e., T(X) = X.getFiltrationTime(). We apply the following definitions:

  • For constant (deterministic) random variables C we set T(C) = -∞
  • For Brownian increments W(t+Δt)-W(t) we set T(W(t+Δt)-W(t)) = t+Δt
  • For operators f on random variables X1,…,Xn with Z = f(X1,…,Xn) we set T(Z) = max(T(X1),…,T(Xn))

This definition of T already fulfils the requirement that t = T(Z) is a time guaranteeing that Z is Ft-measurable; however, it may not give the smallest such filtration time. There are a few optimizations or special cases which can be implemented, e.g., for Z = 0 * X, where 0 denotes the deterministic zero, we have T(Z) = T(0) = -∞ (instead of T(Z) = max(T(0),T(X)) = T(X)), since the product with a deterministic zero is itself deterministic.
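A minimal, self-contained sketch of these propagation rules (simplified: a real implementation, like the one shown in the Example below, carries the realizations alongside the time):

public class FiltrationTime {

    // Constants are Ft-measurable for every t; Double.NEGATIVE_INFINITY
    // represents T(C) = -infinity.
    public static double ofConstant() {
        return Double.NEGATIVE_INFINITY;
    }

    // Brownian increments W(t+dt)-W(t) are measurable at t+dt.
    public static double ofBrownianIncrement(double time, double timeStep) {
        return time + timeStep;
    }

    // Operators Z = f(X1,...,Xn): T(Z) = max(T(X1),...,T(Xn)).
    // Special cases like Z = 0 * X (where the result is deterministic)
    // may instead return Double.NEGATIVE_INFINITY.
    public static double ofOperator(double... argumentFiltrationTimes) {
        double result = Double.NEGATIVE_INFINITY;
        for(double t : argumentFiltrationTimes) {
            result = Math.max(result, t);
        }
        return result;
    }
}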

Note: This definition already covers the generation of the correct filtration times for an Euler scheme of an Itô process.
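For illustration, a sketch of a single Euler step X(t+Δt) = X(t) + μ Δt + σ ΔW(t) (the class, method, and parameter names are illustrative, not a specific finmath-lib API; mult is assumed analogous to the add operator shown in the Example below):

import net.finmath.stochastic.RandomVariableInterface;

public class EulerStepExample {

    public static RandomVariableInterface eulerStep(
            RandomVariableInterface previous,          // X(t), with T = t
            RandomVariableInterface drift,             // mu, Ft-measurable, T <= t
            RandomVariableInterface volatility,        // sigma, Ft-measurable, T <= t
            RandomVariableInterface brownianIncrement, // W(t+dt)-W(t), with T = t+dt
            double timeStep) {
        RandomVariableInterface driftTerm     = drift.mult(timeStep);               // T = T(mu), since timeStep is a constant
        RandomVariableInterface diffusionTerm = volatility.mult(brownianIncrement); // T = max(T(sigma), t+dt) = t+dt

        // T(result) = max(t, T(mu), t+dt) = t+dt: the new state is F(t+dt)-measurable.
        return previous.add(driftTerm).add(diffusionTerm);
    }
}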

An implementation may achieve this tracking by augmenting (extending) the type X to pairs (X, t) with t = T(X) and overloading the operators on X.

Example

The method is implemented in the class RandomVariable, which implements the interface RandomVariableInterface, in finmath-lib. The interface is given as (extract)

public interface RandomVariableInterface {
    double getFiltrationTime();    // Implements T(X): a time t such that X is Ft-measurable.
    RandomVariableInterface add(RandomVariableInterface randomVariable);

    // ... (declaration of other methods)

}

and the implementation is given as (extract)

public class RandomVariable implements RandomVariableInterface {

    private final double      time;                    // Filtration time: this random variable is F_time-measurable

    @Override
    public double getFiltrationTime() {
       return time;
    }

    // ... (implementation of other methods)

    @Override
    public RandomVariableInterface add(RandomVariableInterface randomVariable) {
       // Set the time of the result to the maximum of the times with respect to which measurability is known.
       double newTime = Math.max(time, randomVariable.getFiltrationTime());

       // ... (calculate newRealizations as sum of this and randomVariable)

       return new RandomVariable(newTime, newRealizations);
    }
}
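A usage sketch, assuming the constructor RandomVariable(time, realizations) used in the extract above (the sample values, and the representation of a constant via equal realizations with filtration time -∞, are illustrative):

RandomVariableInterface brownianIncrement =
    new RandomVariable(1.0, new double[] { 0.1, -0.2, 0.05 });                      // T = 1.0
RandomVariableInterface constant =
    new RandomVariable(Double.NEGATIVE_INFINITY, new double[] { 2.0, 2.0, 2.0 });   // T = -infinity

RandomVariableInterface sum = brownianIncrement.add(constant);

// T(sum) = max(1.0, -infinity) = 1.0: sum is guaranteed F1-measurable.
double filtrationTime = sum.getFiltrationTime();    // 1.0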

References

For an application to AAD for American Monte-Carlo simulation see

Fries, Christian P., Automatic Backward Differentiation for American Monte-Carlo Algorithms - ADD for Conditional Expectations and Indicator Functions (June 27, 2017). Available at SSRN: https://ssrn.com/abstract=3000822