Well, this is a new one. While I was writing a new post about the new finite-difference framework, I tried to refer to a past post only to find out it wasn’t there. So here it is: a lost post on time-dependent finite-difference operators. It should have been posted after this one, but that can’t be helped now.

Next week, on December 7th and 8th, I’ll be in Düsseldorf for the next QuantLib User Meeting. I haven’t heard from the organizers lately, but there might be places left, so download the flyer and get in touch with them if you’re interested.

Follow me on Twitter if you want to be notified of new posts, or add me to your Google+ circles, or subscribe via RSS: the buttons for that are in the footer. Also, make sure to check my Training page.

Time-dependent operators

So far, I have ignored the possibility that the parameters of the model depend on time. However, the basic classes in the framework need little modification to cover that case; the additional code is shown in the listing below.

    class TridiagonalOperator {
      public:
        // ...
        class TimeSetter {
          public:
            virtual ~TimeSetter() {}
            virtual void setTime(Time t,
                                 TridiagonalOperator& L) const = 0;
        };
        bool isTimeDependent() const { return !!timeSetter_; }
        void setTime(Time t) {
            if (timeSetter_)
                timeSetter_->setTime(t, *this);
        }
      protected:
        boost::shared_ptr<TimeSetter> timeSetter_;
    };

    template <class Operator>
    class BoundaryCondition {
      public:
        // ...
        virtual void setTime(Time t) = 0;
    };

    template <class Operator>
    void MixedScheme<Operator>::step(array_type& a, Time t) {
        for (Size i=0; i<bcs_.size(); i++)
            bcs_[i]->setTime(t);
        if (theta_!=1.0) { // there is an explicit part
            if (L_.isTimeDependent()) {
                L_.setTime(t);
                explicitPart_ = I_-((1.0-theta_) * dt_)*L_;
            }
            // continue as before
        }
        if (theta_!=0.0) {
            // the same for the implicit part
        }
    }

The TridiagonalOperator class gains an inner class, TimeSetter, and a data member holding a (possibly null) instance of it. The interface of the inner class is simple enough: apart from the destructor, it consists of the single method setTime, which takes a reference to the operator and modifies its elements according to the current time (also passed). Of course, the method is declared as pure virtual; the actual logic will be implemented in derived classes. In what can be seen as an implementation of the Strategy pattern, the setTime method of TridiagonalOperator works by delegating the job to the stored setter, if any.

The BoundaryCondition class doesn’t follow the same pattern; instead, it simply declares a virtual setTime method that is supposed to perform any required setup. In the simple Dirichlet and Neumann conditions available from the library, the method does nothing.

Finally, the code of the evolution schemes needs to be modified to take into account the possibility of time dependence. As an example, the listing shows the modified step method from the MixedScheme class template; you can compare it with the version in this post. First, it calls the setTime method on all boundary conditions; then, if the operator is time dependent, it calls its setTime method and recomputes the explicit and implicit operators \( I - (1-\theta) \cdot \Delta t \cdot L \) and \( I + \theta \cdot \Delta t \cdot L \) as required. The remaining calculations are performed as before.

Unfortunately, the available implementations of actual time-dependent operators are not as simple as the modifications required in the basic classes. The next listing shows part of the code used to implement a Black-Scholes-Merton operator with time-dependent coefficients.

    class PdeSecondOrderParabolic {
      public:
        virtual ~PdeSecondOrderParabolic() {}
        virtual Real diffusion(Time t, Real x) const = 0;
        virtual Real drift(Time t, Real x) const = 0;
        virtual Real discount(Time t, Real x) const = 0;
        virtual void generateOperator(Time t,
                                      const TransformedGrid& tg,
                                      TridiagonalOperator& L) const {
            for (Size i=1; i < tg.size()-1; i++) {
                Real sigma = diffusion(t, tg.grid(i));
                Real nu = drift(t, tg.grid(i));
                Real r = discount(t, tg.grid(i));
                Real sigma2 = sigma*sigma;

                Real pd = -(sigma2/tg.dxm(i)-nu)/tg.dx(i);
                Real pu = -(sigma2/tg.dxp(i)+nu)/tg.dx(i);
                Real pm = sigma2/(tg.dxm(i)*tg.dxp(i))+r;
                L.setMidRow(i, pd, pm, pu);
            }
        }
    };

    class PdeBSM : public PdeSecondOrderParabolic {
      public:
        PdeBSM(const boost::shared_ptr<
                         GeneralizedBlackScholesProcess>&);
        // ...
    };

    template <class PdeClass>
    class PdeOperator : public TridiagonalOperator {
      public:
        PdeOperator(const Array& grid, const PdeClass& pde)
        : TridiagonalOperator(grid.size()) {
            timeSetter_ =
                boost::shared_ptr<GenericTimeSetter<PdeClass> >(
                     new GenericTimeSetter<PdeClass>(grid, pde));
        }
    };

    typedef PdeOperator<PdeBSM> BSMTermOperator;

As for the engine I showed in the previous example, several bits of behavior were put in different classes so that they could be reused. For instance, the PdeSecondOrderParabolic class can be used for partial differential equations (PDEs) of the form
\[ \frac{\partial f}{\partial t} = - \frac{\sigma^2(t,x)}{2} \frac{\partial^2 f}{\partial x^2} - \nu(t,x) \frac{\partial f}{\partial x} + r(t,x) \cdot f \]

and defines a generateOperator method that calculates the coefficients of the corresponding tridiagonal operator, given a grid and a time; the \( \nu \), \( \sigma \) and \( r \) coefficients above are provided by inheriting from the class and overriding the drift, diffusion and discount methods, respectively. As you might have noticed, discount is a misnomer; the coefficient actually corresponds to the instantaneous risk-free rate. (Come to think of it, generateOperator is not the correct name, either. The method doesn’t create an operator; it fills, or updates, the coefficients of an existing one.) In our case, the derived class PdeBSM specifies the coefficients for the Black-Scholes-Merton process; the methods above, not shown for brevity, are implemented by fetching the corresponding information from the passed process.

The PdeOperator class template, shown in the same listing, inherits from TridiagonalOperator. It takes as template argument a class like PdeBSM; that is, one that provides a generateOperator method with the correct semantics. The constructor uses an instance of such a class, together with a given grid, to build a time setter. Once the setter is stored, the PdeOperator instance can be safely sliced and passed around as a TridiagonalOperator instance. The setter is an instance of the GenericTimeSetter class template, not shown here, which simply implements the required setTime method by passing the given time, the stored grid and the operator to the generateOperator method of the stored PDE class.

Finally, by instantiating PdeOperator with the PdeBSM class we get a time-dependent Black-Scholes-Merton operator; we can give it a name by means of a simple typedef.

Again, I’m not sure that this is the right balance between readability and reusability (and to be fair, I’m not sure it isn’t, either). As in the previous example, you could simplify the hierarchy by putting the whole thing in a single BSMTimeSetter class. Its constructor would store the grid and a process, and its setTime method would contain the code that is currently split among the several classes in the previous listing. All in all, I think we’re short of actual evidence on which version is better.

There are lots of other things in the framework that I could describe. There’s the OperatorFactory I mentioned earlier, whose methods take a process, dispatch on its type, and return the corresponding operator; and which, this being C++ and not Java, could have been implemented as a set of overloaded functions instead. There’s a PdeShortRate class, which works as PdeBSM for one-factor short-rate models. There’s an OperatorTraits class used to define a few types used around the framework. There are others I don’t even remember.

However, I’ll leave them for you to explore. Next time, we’ll be back to the new finite-differences framework.