math (3) - Linux Manuals

NAME

Math tools - The math facilities of the library include:

Pseudo-random number and low-discrepancy sequence generators

Implementations of pseudo-random number and low-discrepancy sequence generators. They are contained in the ql/RandomNumbers directory.
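
As a minimal sketch, one can draw a pseudo-random number from the Mersenne-Twister generator and a point from a Sobol low-discrepancy sequence as follows. The header paths below follow the current QuantLib layout (ql/math/randomnumbers) rather than the ql/RandomNumbers directory named above; adjust them to your version.

    #include <ql/math/randomnumbers/mt19937uniformrng.hpp>
    #include <ql/math/randomnumbers/sobolrsg.hpp>
    #include <iostream>

    int main() {
        using namespace QuantLib;

        // Pseudo-random: Mersenne-Twister uniform generator with an explicit seed.
        MersenneTwisterUniformRng mt(42);
        Real u = mt.next().value;    // one uniform draw in (0,1)

        // Low-discrepancy: 2-dimensional Sobol sequence.
        SobolRsg sobol(2);
        const std::vector<Real>& p = sobol.nextSequence().value;

        std::cout << u << " " << p[0] << " " << p[1] << "\n";
    }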

One-dimensional solvers

The abstract class QuantLib::Solver1D provides the interface for one-dimensional solvers which can find the zeroes of a given function.

A number of such solvers are contained in the ql/Solvers1D directory.

The implementation of the algorithms was inspired by 'Numerical Recipes in C', 2nd edition (Press, Teukolsky, Vetterling, Flannery), Chapter 9.

Some work is needed to resolve the ambiguity in the definition of the root-finding accuracy: for some algorithms it is the accuracy on x, for others the accuracy on f(x).
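
As a brief example, the Brent solver can locate the zero of a user-supplied callable. The header path below follows the current QuantLib layout (ql/math/solvers1d); adjust it for the ql/Solvers1D layout named above.

    #include <ql/math/solvers1d/brent.hpp>
    #include <iostream>

    int main() {
        using namespace QuantLib;

        // Find the zero of f(x) = x^2 - 2, i.e. sqrt(2).
        // Whether the accuracy passed below is interpreted as x-accuracy
        // or f(x)-accuracy depends on the algorithm (see the caveat above).
        Brent solver;
        Real root = solver.solve([](Real x) { return x * x - 2.0; },
                                 1.0e-8,   // accuracy
                                 1.0,      // initial guess
                                 0.1);     // initial step
        std::cout << root << "\n";
    }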

Optimizers

The optimization framework (corresponding to the ql/Optimization directory) implements some multi-dimensional minimization methods. The function to be minimized must be derived from the QuantLib::CostFunction base class (if the gradient is not implemented analytically, it is computed numerically).
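
As a hedged sketch of the interface, a cost function might be derived as follows. The QuadraticCost class is a made-up example, and the code assumes a recent QuantLib version in which CostFunction::values returns an Array (older versions used Disposable<Array>).

    #include <ql/math/optimization/costfunction.hpp>
    #include <ql/math/array.hpp>

    using namespace QuantLib;

    // Hypothetical cost function: f(x, y) = (x - 1)^2 + (y - 2)^2,
    // written as a sum of squared residuals. The gradient is not
    // implemented analytically, so the base class computes it numerically.
    class QuadraticCost : public CostFunction {
      public:
        Real value(const Array& x) const override {
            Array r = values(x);
            return r[0] * r[0] + r[1] * r[1];
        }
        Array values(const Array& x) const override {
            Array r(2);
            r[0] = x[0] - 1.0;   // residual in the first coordinate
            r[1] = x[1] - 2.0;   // residual in the second coordinate
            return r;
        }
    };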

The simplex method

This method, implemented in QuantLib::Simplex, is rather raw and requires quite a lot of computing resources; however, it has the advantage that it does not need any evaluation of the cost function's gradient and that it is quite easily implemented. First, we must choose N+1 starting points, given here by a starting point \( \mathbf{P}_0 \) and N points

\[ \mathbf{P}_i = \mathbf{P}_0 + \lambda \mathbf{e}_i, \]

where \( \lambda \) is the problem's characteristic length scale. These points form a geometrical figure called a simplex. The principle of the downhill simplex method is, at each iteration, to move the worst point (the one with the highest cost function value) through the opposite face to a better point. When the simplex seems to be constrained in a valley, it is contracted downhill, keeping the best point unchanged.
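
Continuing the sketch above (QuadraticCost is the made-up cost function from the previous example; the end-criteria values are arbitrary), a simplex minimization might look like:

    #include <ql/math/optimization/simplex.hpp>
    #include <ql/math/optimization/problem.hpp>
    #include <ql/math/optimization/constraint.hpp>
    #include <iostream>

    int main() {
        using namespace QuantLib;

        QuadraticCost cost;                 // from the sketch above
        NoConstraint constraint;
        Array initial(2, 0.0);              // the starting point P_0
        Problem problem(cost, constraint, initial);

        Simplex solver(0.1);                // lambda, the characteristic length scale
        EndCriteria endCriteria(1000,       // max iterations
                                100,        // max stationary-state iterations
                                1.0e-8,     // root epsilon
                                1.0e-8,     // function epsilon
                                1.0e-8);    // gradient-norm epsilon
        solver.minimize(problem, endCriteria);

        // The minimum found, expected near (1, 2).
        std::cout << problem.currentValue()[0] << " "
                  << problem.currentValue()[1] << "\n";
    }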

The conjugate gradient method

We now continue with a somewhat more sophisticated method, implemented in QuantLib::ConjugateGradient. At each step, we minimize (using Armijo's line search algorithm, implemented in QuantLib::ArmijoLineSearch) the function along a line defined by

\[ \mathbf{d}_i = -\nabla f(\mathbf{x}_i) + \frac{\left\Vert \nabla f(\mathbf{x}_i) \right\Vert^2}{\left\Vert \nabla f(\mathbf{x}_{i-1}) \right\Vert^2} \mathbf{d}_{i-1}, \qquad \mathbf{d}_0 = -\nabla f(\mathbf{x}_0). \]

As the formulas show, this optimization method requires knowledge of the gradient of the cost function. See QuantLib::ConjugateGradient.
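
Usage mirrors the simplex sketch above; only the optimizer changes. The example again reuses the hypothetical QuadraticCost class, whose gradient the CostFunction base class supplies numerically.

    #include <ql/math/optimization/conjugategradient.hpp>
    #include <ql/math/optimization/problem.hpp>
    #include <ql/math/optimization/constraint.hpp>

    int main() {
        using namespace QuantLib;

        QuadraticCost cost;                 // from the earlier sketch
        NoConstraint constraint;
        Array initial(2, 0.0);
        Problem problem(cost, constraint, initial);

        // Default construction uses Armijo's line search internally.
        ConjugateGradient solver;
        EndCriteria endCriteria(1000, 100, 1.0e-8, 1.0e-8, 1.0e-8);
        solver.minimize(problem, endCriteria);
    }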

Author

Generated automatically by Doxygen for QuantLib from the source code.