
Analysis Details

MCMC Analysis

MCMC analysis is used for almost all multi-parameter Bayesian analysis performed by OxCal. The exceptions are the functions Combine() and D_Sequence() which can be reduced to problems with only one independent parameter and therefore can be solved analytically.

Bayes theorem

The Bayesian analysis is based on Bayes theorem (Bayes 1763) which can be summarised as:

posterior = likelihood × prior / constant

Here we define the parameters of interest t for which we have observations y. In general Bayes theorem then becomes:

p(t|y) = p(y|t) p(t) / p(y)
p(t|y) ∝ p(y|t) p(t)

The posterior p(t|y) is usually what we are interested in (the probability given all of the observations). The likelihood p(y|t) is defined by the observations and the prior p(t) by the model.
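For a single parameter, Bayes theorem can be applied directly on a grid; the following Python sketch (illustrative only, not OxCal code; the Normal likelihood centred on t = 2.0 and the grid are invented for the example) forms the posterior by multiplying likelihood and prior and then normalising:

```python
import math

# A single-parameter illustration of Bayes theorem on a grid (not OxCal code).
# The likelihood p(y|t) is Normal around t = 2.0; the prior p(t) is uniform.
step = 0.01
ts = [i * step - 10.0 for i in range(2001)]                  # grid from -10 to +10
likelihood = [math.exp(-0.5 * ((t - 2.0) / 1.5) ** 2) for t in ts]  # p(y|t)
prior = [1.0] * len(ts)                                      # p(t)

unnorm = [l * p for l, p in zip(likelihood, prior)]          # p(y|t) p(t)
const = sum(unnorm) * step                                   # approximates p(y)
posterior = [u / const for u in unnorm]                      # p(t|y)

mode = ts[max(range(len(ts)), key=lambda i: posterior[i])]
print(round(sum(posterior) * step, 6), round(mode, 6))       # area 1, mode near 2
```

With one parameter the normalising constant is a simple sum; the point of the sections that follow is that for many parameters this direct normalisation becomes impractical.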

The problem with calculating probabilities of this form is that they are multidimensional, often to a high degree. For this reason Markov chain Monte Carlo (MCMC) is used. The idea is to build up a representative sample of possible solutions for t.


The results from any analysis are generated as marginal posterior densities (probability distributions) for each of the parameters. Ranges are also calculated, incorporating 68.2, 95.4 and 99.7% of the total area of the distributions, with the highest probability density. Such a range is often referred to as a highest posterior density (hpd) range. You can, as an option, select a floruit instead (see options below).

In addition to the reported ranges, summary statistics are calculated for the probability distributions: mean, standard deviation, and median. These are most useful for quasi-Normal distributions and are not a substitute for the hpd ranges, especially in the case of simple radiocarbon calibrations and other multi-modal distributions.
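The hpd selection described above can be sketched as follows: bins of the distribution are taken in order of decreasing probability until the required fraction of the total area is covered. This Python sketch (not OxCal's implementation; the bimodal toy distribution is invented) also illustrates why hpd ranges, unlike the mean or median, remain informative for multi-modal distributions:

```python
import math

# Sketch of highest posterior density (hpd) range selection over binned
# probabilities (illustrative, not OxCal's implementation).
def hpd_bins(bins, level=0.954):
    """Take bins in order of decreasing probability until the required
    fraction of the total area is covered; return the chosen bin positions."""
    total = sum(p for _, p in bins)
    chosen, covered = [], 0.0
    for year, p in sorted(bins, key=lambda b: -b[1]):
        if covered >= level * total:
            break
        chosen.append(year)
        covered += p
    return sorted(chosen)

# An invented bimodal distribution: two Normal-shaped peaks at years 100 and 200.
bins = [(y, math.exp(-0.5 * ((y - 100) / 10.0) ** 2)
           + 0.5 * math.exp(-0.5 * ((y - 200) / 10.0) ** 2))
        for y in range(0, 300, 5)]
region = hpd_bins(bins)
print(100 in region, 200 in region, 0 in region)   # both modes in, far tail out
```

A quantile (floruit) range would instead take a central slice of the cumulative distribution, which can span the low-probability gap between the two modes.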

Maths

The Metropolis-Hastings Algorithm

In this program the particular algorithm used is the Metropolis-Hastings Algorithm.

This works as follows: given the current solution t(j), to find the next solution t(j+1) you choose a trial move to t' and calculate the ratio:

a = |∂t'/∂t(j)| p(t'|y) / p(t(j)|y)

If a > 1 then t(j+1) = t'
else with probability a set t(j+1) = t'
and otherwise set t(j+1) = t(j)

For many simple moves the Jacobian |∂t'/∂t(j)| is 1, but for some multiple-parameter moves it is not.
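The steps above can be sketched for a single parameter with a symmetric proposal (so the Jacobian term is 1). This is illustrative only, not OxCal's kernel; the standard Normal target density is invented for the example:

```python
import math
import random

# Minimal Metropolis-Hastings sketch for a single parameter (illustrative,
# not OxCal's kernel). The target posterior p(t|y) is a standard Normal,
# known only up to its normalising constant.
def target(t):
    return math.exp(-0.5 * t * t)

random.seed(1)
t = 0.0
samples = []
for _ in range(20000):
    trial = t + random.uniform(-1.0, 1.0)  # symmetric proposal, so the Jacobian is 1
    a = target(trial) / target(t)          # the acceptance ratio a
    if a >= 1.0 or random.random() < a:    # accept with probability min(1, a)
        t = trial
    samples.append(t)

mean = sum(samples) / len(samples)
var = sum((s - mean) ** 2 for s in samples) / len(samples)
print(round(mean, 2), round(var, 2))       # close to 0 and 1 for a standard Normal
```

Note that rejected trials still contribute the current state to the sample; this is what makes the accumulated sample representative of the target density.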

MCMC trial moves

There are four types of trial move used in the MCMC analysis. The first of these is where a single parameter is updated (the order of parameter updating is randomised). Such a parameter will be given a trial value which lies somewhere in the range of possibilities allowed by the constraints. To present these moves visually we will use a "." to represent a normal parameter and a "|" to denote a parameter which is a boundary of a group. The following series of representations shows single parameter updating in progress.

    |   .  .  ..  .   .   |  . .  .|  .    .    .  |
    |   ..    ..  .   .   |  . .  .|  .    .    .  |
    |   ..    ..  .   .   |  .   ..|  .    .    .  |
    |   ..    ..  .   .   |  .   ..  |.    .    .  |

The second form of move is where a group of events is moved together as in:

    |   ..    ..  .   .   |  .   ..  |.    .    .  |
    |   ..    ..  .   .|  .   ..  |   .    .    .  |
    |   ..    ..  .   .  |  .   ..  | .    .    .  |
    |   ..    ..  .   . |  .   ..  |  .    .    .  |

A third move is where a group of events is expanded or contracted by moving one of the boundaries as in:

    |   ..    ..  .   . |  .   ..  |  .    .    .  |
    |   ..    ..  .   . |  .   ..  |...|
    |   ..    ..  .   . |  .   ..  | .  .  . |
    |   ..    ..  .   . |  .   ..  |  .   .   .  |

A fourth move is where a group of events is expanded or contracted by moving one of the events within the group and everything in proportion as in:

    |  .  .  .  .  .  .  . |  .   ..  |  .    .    .  |
    | . . . .   .   .   .  |  .   ..  |  .    .    .  |
    |   .   .   .   . . . .|  .   ..  |  .    .    .  |
    |  .  .  .  .  .  .  . |  .   ..  |  .    .    .  |

The latter three moves are designed to improve mixing between states and so speed convergence. These moves are also sometimes combined where one boundary affects more than one group of events.

All but the first of these moves have a Jacobian which is not 1. If ta and tb are the parameters for the two boundaries in question and there are n events being expanded or contracted then:

|∂t'/∂t(j)| = [(tb' - ta') / (tb(j) - ta(j))]^n
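A minimal sketch of such a scaling move, assuming a simple linear rescaling of the events between the two boundaries (illustrative only; the function and values are invented):

```python
# Sketch of the boundary-scaling move and its Jacobian term (illustrative).
def rescale(events, ta, tb, ta_new, tb_new):
    """Map each event linearly from the interval (ta, tb) to (ta_new, tb_new)
    and return the new events plus the Jacobian |dt'/dt| = (span'/span)**n."""
    scale = (tb_new - ta_new) / (tb - ta)
    new_events = [ta_new + (t - ta) * scale for t in events]
    jacobian = scale ** len(events)
    return new_events, jacobian

events = [10.0, 12.0, 18.0]
new_events, j = rescale(events, ta=0.0, tb=20.0, ta_new=0.0, tb_new=10.0)
print(new_events, j)   # span halved, so the Jacobian is (1/2)**3 = 0.125
```

The Jacobian enters the acceptance ratio a given earlier, penalising moves that contract many events into a small span.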

Passes and burn-in

The program is set up to perform at least 10 groups of MCMC trials, each group starting from a different initial state.

The initial default settings are for a total number of 30,000 passes, that is 3,000 per group; each pass includes a trial of each variable and some multiple variable trials as discussed in the previous section. By comparing the results from each group of trials we decide if the trial number is high enough by looking at the convergence integrals (see below). If any of these are lower than 95% of the expected maximum, the number of passes is automatically increased by a factor of two.

Diagnostic tools

There are two main diagnostic measures used in OxCal: the agreement indices and the convergence integral.

Agreement indices

The agreement indices tell us something about how well the prior model agrees with the observations (expressed in terms of the likelihoods). This is important because it is very easy to construct a model which is clearly at variance with the observational data. Four forms of the agreement index are calculated by the program: the individual index A, the combination index Acomb, and the overall indices Amodel and Aoverall.

An alternative approach to using agreement indices is to use outlier analysis. When outlier analysis is used, the agreement index is still calculated (see details below).

Maths

The agreement indices are based on the ratio of the likelihood under two different models.

The simplest model is one where each parameter has a uniform prior, or where the prior is independent of t. In such cases we have:

p(t|y) ∝ p(y|t)

Individual agreement indices

Often an observation yi relates to a single parameter ti. Under this simple model, since there are no relationships between the parameters, we have:

p(ti|yi) ∝ p(yi|ti)

Under our full model ti depends on all of the other parameters and observations through the equation:

p(t|y) ∝ p(y|t) p(t)

From this we can integrate over all other parameters to get the marginal probability density, p(ti|y).

We then define the agreement index to be:

Fi = ∫ p(yi|ti) p(ti|y) dti / ∫ p(yi|ti) p(ti|yi) dti
Ai = 100 Fi

If the posterior and likelihood distributions are the same this ratio is 1 (or 100%), in some instances it can also rise to greater than 1 (> 100%) where the posterior picks up the higher portions of the likelihood distribution.
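The index can be evaluated numerically on a grid. In this sketch (illustrative only; the Normal distributions are invented, and this is not OxCal's implementation) the posterior first reproduces the likelihood, giving 100%, and is then shifted by one standard deviation:

```python
import math

# Numerical sketch of the individual agreement index A_i on a grid
# (illustrative only). lik holds p(y_i|t_i); post holds the marginal p(t_i|y).
def agreement(lik, post, step):
    lik_norm = sum(lik) * step                       # normalises p(t_i|y_i)
    num = sum(l * p for l, p in zip(lik, post)) * step
    den = sum(l * (l / lik_norm) for l in lik) * step
    return 100.0 * num / den

step = 0.01
ts = [i * step - 10.0 for i in range(2001)]
lik = [math.exp(-0.5 * t * t) for t in ts]           # likelihood centred on 0

norm = sum(lik) * step
post_same = [l / norm for l in lik]                  # posterior identical in shape
print(round(agreement(lik, post_same, step)))        # 100

shifted = [math.exp(-0.5 * (t - 1.0) ** 2) for t in ts]
norm_s = sum(shifted) * step
post_shifted = [p / norm_s for p in shifted]         # posterior shifted by 1 sigma
print(round(agreement(lik, post_shifted, step)))     # noticeably below 100
```

A posterior concentrated on the peak of the likelihood would instead push the index above 100%, as described above.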

In outlier analysis we have an associated parameter φi which is 1 when the measurement is an outlier and 0 when it is not. The prior for φi being 1 is qi and for it to be zero is (1-qi). When outlier analysis is in use the following factor is added into the value of Fi:

(qi φi + (1-qi)(1-φi)) / (qi^2 + (1-qi)^2)

This factor is on average one if the posterior probability that φi=1 is qi, but if the posterior outlier probability is higher than the prior the factor is usually smaller (as long as qi < 0.5). This modification of the agreement index (introduced in v4.1.4) results in lower agreement indices where there are more outliers, which helps with model comparison. The factor makes no difference when outlier analysis is not used; in general, if outliers are accepted as part of the model, the agreement indices are not needed (except for parameters which do not have outliers applied).
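The behaviour of this factor can be checked numerically; in this sketch (illustrative only) the prior q and the posterior outlier probabilities are example values:

```python
# Numerical check (illustrative) of the outlier factor added into F_i:
# factor(phi) = (q*phi + (1-q)*(1-phi)) / (q**2 + (1-q)**2)
def expected_factor(q, post_outlier_prob):
    """Average of the factor when the posterior probability of phi=1 is given."""
    num = q * post_outlier_prob + (1 - q) * (1 - post_outlier_prob)
    return num / (q * q + (1 - q) * (1 - q))

q = 0.05                       # example prior outlier probability
print(expected_factor(q, q))   # posterior equals prior: factor is exactly 1
print(expected_factor(q, 0.5)) # a likely outlier: factor well below 1
```

As the text states, with q < 0.5 a posterior outlier probability above the prior pulls the factor, and hence the agreement index, downwards.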

The individual agreement indices are reported in the A column of the results table.

Overall measures of agreement

We can expand on the treatment of single parameters, to look at the model as a whole. One way we can do this is to calculate the quantity:

Fmodel = ∫p(y|t) p(t|y) dt / ∫p(y|t) p(y|t) dt

This is a kind of pseudo Bayes factor. Alternatively we can calculate the product of the individual agreement ratios (which generally gives approximately the same value unless the parameters are very highly correlated):

Foverall = ∏i Fi

These ratios tend to get further and further from 1 as the number of observations increases. For this reason we define two indices based on these ratios which depend on the number of likelihood distributions n:

Amodel = 100 Fmodel^(1/√n)
Aoverall = 100 Foverall^(1/√n)

In principle Fmodel and Amodel are better measures. Aoverall is given for compatibility with previous versions of the program.

Combinations and choice of threshold

In the case of simple combinations generated from the function Combine(), D_Sequence() or the & operator, there is only one independent parameter. In such cases Foverall will be equal to Fmodel. Because in this case there is only one independent parameter in the comparison model, we give the agreement index a special name:

Acomb = 100 Foverall^(1/√n)

If we combine Normal distributions in this way we can also perform a χ² test, and it turns out that the level at which this fails (at the 95% confidence level) is when:

An = 100/√(2n)

We can also calculate the logarithmic average of the individual agreement indices required to just fail the same test by calculating A'n where:

A'n = 100 (An/100)^(1/√n) = 100 Foverall^(1/n)

If we do this for a range of values of n:


    n     An(%)    A'n(%)
    1     70.7     70.7
    2     50.0     61.3 
    3     40.8     59.6
    4     35.4     59.5
    5     31.6     59.8
    6     28.9     60.2
    7     26.7     60.7
    8     25.0     61.3
    9     23.6     61.8
   10     22.4     62.3
   15     18.3     64.5
   20     15.8     66.2
   25     14.1     67.6
   30     12.9     68.8
   40     11.2     70.7
   50     10.0     72.2
   60      9.1     73.4
   80      7.9     75.3
  100      7.1     76.7

the value of A'n is around 60% for a wide range of values of n. This is the justification for taking 60% as the threshold for acceptability of individual agreement indices. The argument for the same threshold for Aoverall and Amodel is that we expect ln(F) to random walk away from 0 as we increase the number of parameters. This threshold is given the symbol:

A'c = 60%

The choice of threshold of acceptability for these indices is somewhat arbitrary - though no more so than the recommendations for Bayes factors. In practice, however, the levels suggested here have been found with experience to be about right in identifying models that are inconsistent with the observations, and individual samples that are clearly aberrant in their context.
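The two columns of the table above follow directly from the formulas for An and A'n; a short sketch reproducing some of the rows:

```python
import math

# Reproduce the A_n and A'_n columns of the table from the formulas above.
def a_n(n):
    """Threshold at which the chi-squared test fails: 100/sqrt(2n)."""
    return 100.0 / math.sqrt(2 * n)

def a_prime_n(n):
    """Logarithmic average of individual indices at that threshold."""
    return 100.0 * (a_n(n) / 100.0) ** (1.0 / math.sqrt(n))

for n in (1, 2, 5, 10, 100):
    print(n, round(a_n(n), 1), round(a_prime_n(n), 1))
```

Running this shows A'n staying near 60% while An falls steadily with n, which is the basis for the fixed 60% threshold.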


Convergence

The extent to which the MCMC analysis provides a truly representative set of posterior probability distributions depends on a number of factors. To some extent, the effectiveness of the algorithm can be tested by seeing how similar different attempts to perform the analysis are. This is measured by calculating an overlap integral (see below).

The program will automatically check the convergence of all of the parameters and increase the number of MCMC passes if any fall below 95%.

If you choose the option to 'include convergence data' you can plot an example segment of the MCMC trial sequence on an individual plot. This can be useful in seeing what is going on in the MCMC algorithm.

Note that a good convergence does not guarantee a representative solution. If mixing is poor (if, for example, the probability distributions are very multi-modal) it is possible that some solutions might never be reached in any of the MCMC trials. In practice, with this kind of application and with the MCMC kernel used in this program, such cases are rare.

Maths

The overlap integral used to test for convergence is calculated in this way. The marginal posterior density distribution is built up in a series of MCMC trial groups, each starting from a different initial state. If pi(ti|y) is the density accumulated in all groups of trials so far and p'i(ti|y) is the density function estimated from a new group of trials, the quantity:

Ci = 100 [∫ p'i(ti|y) pi(ti|y) dti]^2 / [(∫ p'i(ti|y)^2 dti)(∫ pi(ti|y)^2 dti)]

is used as an estimate of convergence. This should be equal to 100% if both distributions are identical.
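The overlap integral can be evaluated numerically; in this sketch (illustrative only, not OxCal's implementation) the two densities are Normal curves on a grid rather than genuine MCMC output:

```python
import math

# Numerical sketch of the convergence overlap integral C_i (illustrative).
# The integrals are approximated by sums over a regular grid; the result is
# scale-invariant, so the densities need not be normalised.
def convergence(p_new, p_acc, step):
    cross = sum(a * b for a, b in zip(p_new, p_acc)) * step
    return 100.0 * cross ** 2 / (
        (sum(a * a for a in p_new) * step) * (sum(b * b for b in p_acc) * step))

step = 0.01
ts = [i * step - 10.0 for i in range(2001)]
p1 = [math.exp(-0.5 * t * t) for t in ts]            # accumulated density
p2 = [math.exp(-0.5 * (t - 0.2) ** 2) for t in ts]   # new group, slightly shifted

print(round(convergence(p1, p1, step), 1))   # identical distributions: 100.0
print(round(convergence(p2, p1, step), 1))   # slightly shifted: just below 100
```

A small shift between the accumulated and new-group densities reduces Ci only slightly, which is why the 95% cut-off is a fairly demanding criterion.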


Analysis options

There are a number of options that can be selected for the analysis. These are normally set using the [Tools > Options] dialogue in the input utility or the [Options > Analysis] dialogue for single calibrations. These add an Options() command at the start of the command file which defines the defaults for the analysis as a whole. The Options() command is typically of this form:

 Options()
 {
  option = value;
  option = value;
 };

The following table gives all of the options, their possible values, the default value, and an explanation of their effect.

Option            Values        Default       Explanation
BCAD              TRUE | FALSE  TRUE          Whether BC/AD are used in the log file output
ConvergenceData   TRUE | FALSE  FALSE         Whether sample convergence data is included in the output data file
Curve             filename      intcal13.14c  The default calibration curve
Cubic             TRUE | FALSE  TRUE          Whether cubic (as opposed to linear) interpolation is used for calibration curves
Ensembles         number        30            The number of age-depth ensembles stored during the analysis
Floruit           TRUE | FALSE  FALSE         Whether quantile ranges are calculated instead of the default highest posterior density (hpd) ranges
Intercept         TRUE | FALSE  FALSE         Whether the intercept method is used for radiocarbon calibration ranges
kIterations       number        30            The default number of MCMC passes, in thousands
PlusMinus         TRUE | FALSE  FALSE         Whether + and - are used in place of BC and AD in log files
RawData           TRUE | FALSE  FALSE         Whether raw calibration curve data is included in the output data file
Resolution        number        5             The default bin size for probability distributions of Date and Interval type
Round             TRUE | FALSE  FALSE         Whether ranges are rounded off
RoundBy           number        0             Resolution of rounding (0 for automatic)
SD1               TRUE | FALSE  TRUE          Whether 68.2% (1 σ) ranges are given in the log and tab-delimited files
SD2               TRUE | FALSE  TRUE          Whether 95.4% (2 σ) ranges are given in the log and tab-delimited files
SD3               TRUE | FALSE  FALSE         Whether 99.7% (3 σ) ranges are given in the log and tab-delimited files
UniformSpanPrior  TRUE | FALSE  TRUE          Whether the two extra prior factors suggested by Nicholls and Jones 2001 are used
UseF14C           TRUE | FALSE  TRUE          Whether all calibrations take place in F14C space (rather than BP space)
Year              number        1950.5        The datum point for ages - the default is mid AD 1950


Options for adjusting details of the prior

The exact form of the prior probabilities applied in trapezium and other phase models can be modified using extensions of the existing options. Previous versions of OxCal had an option UniformSpanPrior that allowed the elements of the prior model (equation 6 of Bronk Ramsey 2001 and equation 30 of Bronk Ramsey 2009a) to be switched on or off. These, and the new elements of the trapezium prior, can now be adjusted if required for sensitivity testing. The following commands can be added to the options block:

Option setting Explanation
UniformSpanLimits=0; No prior added for limits
UniformSpanLimits=1; Limits as in Bronk Ramsey 2001 (equation 7)
UniformSpanLimits=2; Limits prior as in Bronk Ramsey 2009a (equation 30)
UniformSpanGroups=0; No prior added for group models
UniformSpanGroups=1; Prior added as in Bronk Ramsey 2001 (equation 6)
UniformSpanGroups=2; Also adds Lee and Bronk Ramsey 2012 pUTP prior
UniformSpanPrior=0; Removes all additional prior factors
UniformSpanPrior=1; Sets UniformSpanGroups and UniformSpanLimits to the maximum value (this is the default).

Command line options

Options can also be set on the command line when calling the analysis program. Furthermore, default options for an installation of the program are set in the OxCal.dat file found in the same directory as the program files. The possible options are given below:

-afilename   append log to a file*
-b1          BP                     -b0     BC/AD
-cfilename   use calibration data file
-d1          plot distributions     -d0     no plot 
-fn          default iterations for sampling in thousands
-g1          +/-                    -g0     BC/AD/BP 
-h1          whole ranges           -h0     split ranges 
-in          resolution of n
-ln          limit on number of data points in calibration curve (see Resolution)
-m1          macro language         -m0     simplified entry
-n1          round ranges           -n0     no rounding
-o1          include converg info   -o0     do not include 
-p1          probability method     -p0     intercept method 
-q1          cubic interpolation    -q0     linear interpolation 
-rfilename   read input from a file+
-s11         1 sigma  ranges        -s10    range not found 
-s21         2 sigma ranges         -s20    range not found
-s31         3 sigma ranges         -s30    range not found
-t1          terse mode             -t0     full prompts
-u1          uniform span prior     -u0     as in OxCal v2.18 and previous
-v1          reverse sequence order -v0     chronological order
-wfilename   write log to a file*
-yn          round by n years       -y0     automatic rounding


* Note that with either of these options the tabbed results will then be sent to the console output and can therefore be redirected to a file or a pipe; the standard redirection > or >> can be used instead if only the log file needs redirecting.

+ Note that the standard redirection < can also be used.

Automatic range rounding

If automatic range rounding is selected, the rounding is carried out depending on the overall range of the result. The following table shows the degree of rounding selected.


Total range        Round to the nearest

   1 -   50          1 year
  50 -  100          5 years
 100 -  500         10 years
 500 - 1000         50 years
1000 - 5000        100 years
...                ...
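The table can be expressed as a simple lookup; this sketch assumes the pattern continues by factors of ten beyond 5000 years (the rows elided above), which is an assumption rather than a documented rule:

```python
# Sketch of the automatic rounding table (illustrative; the continuation
# beyond 5000 years is an assumed extension of the published pattern).
def rounding_unit(total_range):
    """Return the number of years to round to for a given total range."""
    thresholds = [(50, 1), (100, 5), (500, 10), (1000, 50), (5000, 100)]
    for limit, unit in thresholds:
        if total_range <= limit:
            return unit
    return 500   # assumed continuation of the pattern beyond 5000 years

print(rounding_unit(80), rounding_unit(1200))
```

So, for example, a result spanning 80 years would be rounded to the nearest 5 years, and one spanning 1200 years to the nearest 100 years.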

Pre-processor calculations

Unlike previous versions of OxCal, this version allows many of the normal calculation functions you would expect in a programming language. The main operators ( + - / * ) can be used, as can the functions log, log10, exp, sin, cos, asin, acos, tan, atan, abs and sqrt. In addition, four special functions are provided.

These special functions are provided primarily for testing models. As an example of their use the following expression:


would give a representative possible value for the radiocarbon concentration (expressed in BP) for the year AD1800.

The pre-processor also has three control functions that enable simple macros to be written; these are:

Together these functions allow macros to be written that perform a range of operations. As a simple example the following code will calibrate all radiocarbon dates between 1000 and 2000 in 10 year intervals.

/* Example of a macro for calibrating all of
the radiocarbon dates between 1000 and 2000 in
increments of 10 */

// define and initialise the variable
 // calibrate the date

Finally you should note that comments can be introduced into code, either using // for single lines or enclosed between /* and */ for longer comments.