Once code is written, you must begin three tasks that should continue for as long as the code is in use:
Debugging has two stages:
Profiling involves the insertion of diagnostics into your code to assess its performance on a given platform. You want to find out:
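As a minimal sketch of what "insertion of diagnostics" can mean in practice (assuming Python; the functions `inner` and `outer` are hypothetical stand-ins for your own routines), the standard-library profiler can report where a run spends its time:

```python
import cProfile
import io
import pstats

def inner(n):
    # Hypothetical hot spot: a simple reduction.
    return sum(i * i for i in range(n))

def outer():
    # Hypothetical driver that calls the hot spot repeatedly.
    return [inner(10000) for _ in range(100)]

# Minimal diagnostic: profile one run and report the top entries.
pr = cProfile.Profile()
pr.enable()
outer()
pr.disable()

s = io.StringIO()
pstats.Stats(pr, stream=s).sort_stats("cumulative").print_stats(5)
print(s.getvalue())
```

The same idea scales down to hand-inserted timers (`time.perf_counter`) around individual loops when a full profile is too coarse.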
Validation of both your model and your algorithms is critical to the development of any piece of scientific software. How one might do this is the focus of this lecture. We will mention several approaches that are geared toward traditional simulation codes, but most of them have analogs that can be applied to any scientific computation project.
For example, a one-dimensional problem with slab symmetry can be set up on a grid first from left to right and then from right to left. If these simulations were carried out with a perfect algorithm on a perfect platform, they would produce the same numbers up to a reflection. In a realistic setting, however, there will be differences, and these should be understood, especially if they are significantly larger than the expected round-off error. Such differences might be caused by a bug, such as a subtle index mistake in a loop or in a boundary value specification. They might also arise from an asymmetry in your algorithm: for example, if your code includes a Gaussian elimination subroutine, it may always back-solve from right to left.
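The mirror test above can be sketched as follows (assuming Python and NumPy; the explicit diffusion scheme is a hypothetical stand-in for your solver). A symmetric algorithm reproduces the reflected run to round-off, and anything much larger flags a bug or an algorithmic asymmetry:

```python
import numpy as np

def diffuse(u, nsteps=100, r=0.25):
    """Explicit diffusion steps on a 1-D grid; boundary values held fixed."""
    u = u.copy()
    for _ in range(nsteps):
        u[1:-1] = u[1:-1] + r * (u[2:] - 2 * u[1:-1] + u[:-2])
    return u

u0 = np.linspace(0.0, 1.0, 101) ** 2     # deliberately asymmetric initial data

left_to_right = diffuse(u0)
right_to_left = diffuse(u0[::-1])        # same problem, set up mirrored

# A symmetric algorithm should reproduce the mirror image up to round-off:
# the evaluation order of the stencil differs, so exact equality is not expected.
diff = np.max(np.abs(left_to_right - right_to_left[::-1]))
print(f"max asymmetry: {diff:.3e}")
```

Note that even this symmetric stencil can give a nonzero (but tiny) difference, because floating-point addition is evaluated in a different order on the mirrored grid; the test is that the difference stays at the round-off level.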
Another example in the same spirit is to compute a problem with periodic boundary conditions, and then to compute the same problem shifted by half a period. Any funny-looking behavior at a computational boundary that does not then shift to the middle of the domain is almost certainly an indication of a bug, most likely in your imposition of the periodic boundary condition.
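The half-period shift test can be sketched like this (assuming Python and NumPy; again the periodic diffusion scheme is only a hypothetical example). With correctly imposed periodic boundaries, shifting the initial state and shifting the result commute:

```python
import numpy as np

def step_periodic(u, nsteps=200, r=0.25):
    """Explicit diffusion on a periodic 1-D grid; np.roll supplies wrap-around."""
    u = u.copy()
    for _ in range(nsteps):
        u = u + r * (np.roll(u, -1) - 2 * u + np.roll(u, 1))
    return u

n = 128
u0 = np.exp(-((np.arange(n) - n / 4) ** 2) / 50.0)   # bump away from the boundary

a = step_periodic(u0)
b = step_periodic(np.roll(u0, n // 2))               # same state, shifted half a period

# With correct periodic boundaries the two runs agree up to the shift;
# a discrepancy pinned to the boundary points to a bug in the wrap-around.
diff = np.max(np.abs(np.roll(a, n // 2) - b))
print(f"max shift discrepancy: {diff:.3e}")
```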
When validating an algorithm, comparisons should be done on the same model. Give yourself the capability to choose from among several algorithms in your code; algorithms that are known to be accurate can then be used to validate faster ones. For example, the direct solution of a linear system can be used to validate an iterative solver, even if the direct solver is far too slow to be used in a full simulation. Similarly, one iterative solver can be used to validate another. Validation at the level of a full simulation can be studied by comparing with a simulation by a mature code that uses completely different algorithms, such as comparing a Monte Carlo simulation with a finite element simulation of the same model.
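The direct-versus-iterative comparison can be sketched as follows (assuming Python and NumPy; the system and the Jacobi implementation are hypothetical illustrations, not part of the lecture's code). The trusted direct solve validates the iterative result on a small test problem:

```python
import numpy as np

def jacobi(A, b, tol=1e-10, max_iter=10000):
    """Jacobi iteration; converges for strictly diagonally dominant A."""
    x = np.zeros_like(b)
    D = np.diag(A)              # diagonal entries
    R = A - np.diag(D)          # off-diagonal remainder
    for _ in range(max_iter):
        x_new = (b - R @ x) / D
        if np.max(np.abs(x_new - x)) < tol:
            return x_new
        x = x_new
    return x

# Small hypothetical test system, made strictly diagonally dominant.
rng = np.random.default_rng(0)
A = rng.random((20, 20)) + 20 * np.eye(20)
b = rng.random(20)

x_direct = np.linalg.solve(A, b)   # slow but trusted direct solve
x_iter = jacobi(A, b)              # faster solver under test

print(f"max difference: {np.max(np.abs(x_direct - x_iter)):.3e}")
```

In a real code the direct solve would be far too expensive for production runs, but on a reduced problem it pins down whether the iterative solver is converging to the right answer.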
When validating a model, comparisons should be done either with the same algorithm or in such a way as to minimize the effect of any algorithm differences. For example, when validating a model of chemical reactions, one should compare it to a more complete model solved by the same algorithm. On the other hand, when comparing a diffusive model of particle transport with a kinetic one, the algorithms will necessarily be very different, and the comparison must be designed so that algorithmic differences do not obscure the model comparison.
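The same-algorithm model comparison can be sketched like this (assuming Python and NumPy; the reaction network, rates, and integrator are hypothetical illustrations). A complete model with a fast intermediate and a reduced model with that intermediate eliminated are integrated by the *same* RK4 routine, so any disagreement reflects the modeling assumption, not the numerics:

```python
import numpy as np

def rk4(f, y0, t0, t1, n):
    """Classical fourth-order Runge-Kutta; the same integrator for both models."""
    y, t, h = np.array(y0, float), t0, (t1 - t0) / n
    for _ in range(n):
        k1 = f(t, y)
        k2 = f(t + h / 2, y + h / 2 * k1)
        k3 = f(t + h / 2, y + h / 2 * k2)
        k4 = f(t + h, y + h * k3)
        y = y + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
        t += h
    return y

ka, kb = 1.0, 100.0   # hypothetical rates: A -> B is slow, B -> C is fast

def full_model(t, y):
    # Complete model: species [A, B, C] with A -> B -> C.
    a, b, c = y
    return np.array([-ka * a, ka * a - kb * b, kb * b])

def reduced_model(t, y):
    # Reduced model: fast intermediate B eliminated, A -> C directly.
    a, c = y
    return np.array([-ka * a, ka * a])

yf = rk4(full_model, [1.0, 0.0, 0.0], 0.0, 5.0, 20000)
yr = rk4(reduced_model, [1.0, 0.0], 0.0, 5.0, 20000)

print(f"C(full) = {yf[2]:.4f}   C(reduced) = {yr[1]:.4f}")
```

Because the integrator is identical, the small difference in the final product concentration measures the quality of the reduced model itself.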