UNIVERSITY OF OXFORD
RADIOCARBON ACCELERATOR UNIT

OxCal Program v3.10

(c) Copyright Christopher Bronk Ramsey 2005

[Menu and Toolbar Overview]

[Index]

The Manual

The OxCal program is intended to provide radiocarbon calibration and analysis of archaeological stratigraphy. The program runs under Microsoft Windows 95 on IBM PC compatibles.

The program is simple to use for basic radiocarbon calibration for which results are given both in text and graphical form.

Models based on archaeological or geological information can be included in the analysis. The information for such analysis can be entered using the windows interface or in the form of text command files.

09/02/05

WARNING

The Bayesian analysis part of this program provides the ability to perform calculations based on complex models. It is possible to create models which may bias your data in ways you do not intend (see, for example, Steier and Rom 2000 and comments (Bronk Ramsey 2000) on that paper). In particular the use of 'Boundaries' is very important where there are many poorly distinguished data-points. If you are unsure about your models please ask for advice, either from an experienced user of the program or from the author.

Program References

If you use this program, you should quote the reference for the calibration curve used, the version of OxCal (with any non-standard options set) and the references Bronk Ramsey 1995 and Bronk Ramsey 2001.  If you are wiggle-matching tree-ring sequences you should quote Bronk Ramsey, van der Plicht and Weninger 2001.


For further information contact the author:

Dr. C. Bronk Ramsey
Oxford Radiocarbon Accelerator Unit
Research Lab for Archaeology
6 Keble Rd.
Oxford OX1 3QJ
U.K.

christopher.ramsey@rlaha.ox.ac.uk



Installation [Contents][Index]

Installation

Installation of the program is very simple.

You will need a PC running Windows 95 or similar. The program is supplied as a self-extracting file. Copy this (using Windows Explorer) to a suitable directory for the installation such as:

C:\Program Files\OxCal3

Don't try to install this on top of previous versions of OxCal as the new version is sufficiently different that this might cause problems.

Then extract the program by double clicking on the OCDxx.exe file.

A shortcut to OxCal.exe can be created in the normal way.

The new version can read all old input (.14i) files but has been designed to keep these separate from the temporary data files and program files. You should therefore copy any old input files to a suitable working directory such as:

C:\My Documents\OxCal3

All new input files should be saved here too.


Network installation

There should be no problem with running the program over a network. The program itself would normally be stored on the server on a read-only drive - eg:

N:\Program Files\OxCal3

With the input files being saved in a working directory which has read-write access - eg:

H:\OxCal3

When you first run the program in such a situation it is worth immediately saving a blank input file in this directory by pressing the button. This will ensure that any data files produced are stored in the right place.



Getting Started with OxCal [Contents][Index]

Getting Started


Running the Program

Like all Windows programs, OxCal can be started by double clicking on the program icon from Windows Explorer. More conveniently you can create a shortcut and put it on your desktop or start menu.

Once the program has been run for the first time you can also start it by double clicking on any input file (with a .14i extension).


Wizards

The program has input and output wizards to help with the most often used functions of the program. These should be fairly self-explanatory. Everything that can be performed with the Wizards can also be performed using the toolbar and menus.

Calibrating a Single Date

To calibrate a single date simply press the button and then fill in the dialogue box:

with the name of the sample (optional), a radiocarbon date and error term.

A probability distribution will be displayed along with the ranges. If you wish to incorporate a plot into a word processor file it can be copied and pasted as with any other editor.

The display can be printed in the normal way, assuming you have a printer capable of producing graphical output from Windows, by pressing the button.


Producing a Multiple Plot

To produce a series of plots is almost as simple. First press the button to get a new plot input window. You will see that this window has two visible panes. Drag the (radiocarbon date) icon from the right hand pane onto the (plot) icon in the left hand pane. Fill in the details of the radiocarbon date in the dialogue box. Repeat this operation as many times as you wish to build up the plot. The order can be changed, if you wish, by dragging the icons around.

Once you have specified the plot press the button to perform the calculations. You will then be presented with a results window allowing you to select how you would like the results to be presented. Try double clicking on the three icons:

Alternatively, the two log files can also be opened by pressing the button on the toolbar and the plot by pressing the button.

The results in any of these output windows can be saved to a filename of your choice or copied into another program. Close the windows when you have finished with them. When you close the input window you will be asked if you wish to save the entries you have made.

The Output Wizard helps you through these operations.



Acknowledgements [Contents][Index]

Acknowledgements

Many people have helped in the production of this software. I am indebted to the many users of the program who have written in with suggestions and information on bugs. In particular, Alex Bayliss of English Heritage has been instrumental in keeping up the momentum for development. Many others have also sent in suggestions, some of which have been implemented. Others have not - usually because of lack of time - not because the ideas themselves were not good. Andrew Millard has been particularly useful in checking for problems with new versions.

Caitlin Buck and Cliff Litton and their teams must also be mentioned as the originators of the concept of using Bayesian statistics in this context; without them that aspect of the program would have been unlikely to have been produced.

Geoff Nicholls of Auckland University brought many fresh ideas to the subject, in particular the notion that the 'Boundaries' themselves should be Poisson distributed as well as the individual events. He also suggested many improvements to the MCMC algorithms used which have helped to improve the convergence of the calculations.

The VERA group in Vienna are due thanks both for the financial support of the development of version 3.0 and for many useful discussions, especially with Werner Rom and Peter Steier. Stephan Puchegger also very helpfully pointed out an incomplete treatment of the calibration procedure where there are variations in the calibration curve uncertainty (especially important over the transition to the pre-Holocene part of the calibration curve).

Finally I would like to thank Paula Reimer for allowing me to distribute the Intcal98 dataset with this program.

Christopher Ramsey



Archaeological and Environmental Considerations [Contents][Index]

Archaeological and Environmental Considerations

Where the information concerning the age of a sample is limited to a single radiocarbon date a simple calibration is all that is required. However if there is more information available it seems most sensible to incorporate this into the probability distributions calculated. This is particularly important given that the calibration process itself can often lead to chronological error margins which make it difficult to answer the archaeological or environmental questions posed (see for example Manning and Weninger 1992).

This section is not intended to explain how to use the program but rather to explain the range of information which can be dealt with. Throughout the text key words are included such as (R_Date) which refer to the relevant program commands. The use of these will become clear in the subsequent sections of the manual and can be ignored on a first read through.

Types of Chronological Information

Dating Simulation

Combination of Dates

Stratigraphic Information

Information from Analysis



Chronological Information [Up][Contents][Index]

Types of Chronological Information

The types of information which can be used to make chronological inferences can be divided into two broad categories: the first are the age measurements themselves and the second are the stratigraphic relationships (see Harris) between samples. Our knowledge about the actual age of archaeological artifacts can come from a variety of sources. All of these different types of information can be introduced into an archaeological model built with OxCal.

See also [Program Operation]


Historical Information

Some artifacts such as coins can be directly dated by using information from the historical record. Such information can usually be written in terms of a specific year (C_Date) perhaps with an error term associated with it (given in this program as one standard deviation).
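For example, a coin issue known from the historical record to AD 1066 with a five year uncertainty might be entered as follows (the name and values here are purely illustrative):
C_Date "Coin issue" 1066 5;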

See also [Program Operation] [Example] [Mathematical Methods]


Radiocarbon Dates

Radiocarbon dates must be calibrated in order to use them in conjunction with other techniques or if any chronological inferences (such as the length of phases) are to be made (see Bowman 1990 or Aitken 1990). These are entered in this program as radiocarbon dates (R_Date) with an error quoted as one standard deviation (the form used by all radiocarbon laboratories). It is assumed that any laboratory multiplier has already been applied.
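For example, a single determination would be entered as follows (the laboratory number and values are purely illustrative):
R_Date "OxA-1011" 2340 60;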

See also [Program Operation] [Example] [Mathematical Methods]


Luminescence Dates

The methods of thermo-luminescence (TL) and optically stimulated luminescence (OSL) dating are particularly important beyond the range of radiocarbon or in the many sites where preservation of organic material is poor. Both methods measure the radiation dose accumulated since the samples were subjected to heat (TL) or sunlight (OSL). The raw results can be entered into this program by defining the year of measurement (Year), the site dose rate (Dose) and associated error (Error), and then entering the measured doses received by the samples, which yield a calendar age (L_Date). Alternatively, since the laboratories will frequently give the results in the form of calendar ages, they can simply be entered in the same way as historical information.

One feature of luminescence dates is that they sometimes have asymmetric errors associated with them; these can also be entered by either method.

See also [Program Operation] [Mathematical Methods]


Other Dating Methods

Information from other dating methods such as dendrochronology or uranium series can usually be entered in the form of calendar dates (C_Date) possibly with asymmetric errors.
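For example, a uranium series result of 1250+100-60 could be entered as follows (the name and values are purely illustrative):
C_Date "U-series 1" 1250 100 60;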

Other Information

There may also be other forms of information which you wish to include in a study. Many of these it will be possible to write in the form of calendar ages in conjunction with `stratigraphic' information. It is also possible with this program to define your own probability distributions and use them in the analysis (Prior).

Dating Simulation

Because of the variable nature of the radiocarbon calibration curve it is often difficult to predict beforehand how good a set of radiocarbon dates is likely to be in answering a set of archaeological questions. This program incorporates a technique (R_Simulate) for generating a radiocarbon date (with random but realistic errors) given the calendar age expected and the error term that the radiocarbon lab is capable of providing. Using this it is possible for the archaeologist to try out different possible dating programmes and see how much information they are likely to be able to gain from them.

See also [Program Operation] [Example]



Combination of Dates [Up][Contents][Index]

Combination of Dates

The simplest form of analysis which one might wish to perform on a series of samples is the combination of several dates to give one measurement with smaller errors associated with it. Combination of dates should clearly only be carried out if there is good reason to assume that the events being dated all occurred within a short period (`short' here implies small in comparison to the errors associated with the dating methods).

There are various different sorts of combination which can be performed:

See also [Program Operation] [Mathematical Methods]

Radiocarbon Dates

It is very important to combine different radiocarbon dates correctly. If the dates are all from the same sample or object then the radiocarbon dates should be combined before calibration (R_Combine). Such a combination is checked for internal consistency by a chi squared test which is performed automatically by this program (see Shennan 1988 p65 for a description of this method).
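As an illustrative sketch (the name and values here are invented for this example), three measurements on a single sample could be combined before calibration with:
R_Combine "Grain sample"
{
 R_Date 2870 40;
 R_Date 2850 45;
 R_Date 2910 35;
};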

If the radiocarbon dates have been made on samples of different ages (where the age differences are known) the combination can be done after calibration using Combine or D_Sequence.

See also [Program Operation] [Mathematical Methods]


Other Dates

For other dating methods combination is more straightforward: calendar dates (C_Date) can be combined directly, whilst performing a chi squared test, by using a special procedure (C_Combine), or the probability distributions can be combined (Combine). The latter method allows the combination of dates of different types (radiocarbon, OSL, TL etc.).
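For example, two calendar age estimates might be combined, with a chi squared test, as follows (the name and values are purely illustrative):
C_Combine "Combined estimate"
{
 C_Date 1450 30;
 C_Date 1470 25;
};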

In the case of luminescence dates it is important that any combination of dates is performed before the application of the error term for the site dose rate. This will be treated correctly by this program if the raw results are entered rather than results simply in the form of calendar ages.

See also [Program Operation] [Example] [Mathematical Methods]


Summing probability distributions

Combining probability distributions by summing is usually difficult to justify statistically but it will generate a probability distribution which is a best estimate for the chronological distribution of the items dated (Sum). The effect of this form of combination is to average the distributions and not to decrease the error margins as with other forms of combination.
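A fragment such as the following (with purely illustrative values) would produce a summed distribution for the three dated items:
Sum "Dated activity"
{
 R_Date 2450 40;
 R_Date 2520 35;
 R_Date 2380 45;
};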

See also [Program Operation]


Offset Dates

Sometimes the dated event is offset in some way relative to the event of archaeological interest; in such cases we might wish to offset a probability distribution (Offset) by a particular amount. This is possible with any type of distribution and the offset value can include an error term.
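For example, a carved piece of wood thought to be 60+-10 years old at the time of felling might have been radiocarbon dated; its felling date could then be estimated with (the same example is used again in the section on entering information):
R_Date 980 50;
Offset 60 10;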

See also [Program Operation] [Example] [Mathematical Methods]


Wiggle Matching

One form of combination which is of particular relevance to radiocarbon dates is needed if several radiocarbon measurements have been made on a piece of wood (or some other material with annual growth layers). In such a case the difference in calendar age between the samples is known and this can be very useful in obtaining very accurate dates in spite of the calibration procedure. Such measurements are sometimes called `Wiggle Matching' because of the use they make of the form of the calibration curve. Radiocarbon dates of this type can easily be combined using this program (D_Sequence) as long as the calendar age gap between each sample is known (see stratigraphic information).

See also [Program Operation] [Example] [Mathematical Methods]



Stratigraphic Information [Up][Contents][Index]

Stratigraphic Information

Here I wish to use the term stratigraphic information in the broadest sense as referring to anything which defines the relative ages of different samples or objects. Clearly in many situations this will be simple archaeological or geological stratigraphy but in other situations other information might be treated in exactly the same way.

As a trivial example, a sample taken from between two layers securely dated to 1066 and 1087 will have exactly the same chronological constraints as a sample which is simply known to have come from the reign of William the Conqueror. Both cases can be treated as a sequence of: one securely dated event 1066; the item in question (perhaps with a radiocarbon date of 950+-30 BP) and finally another securely dated event 1087. In terms of a stratigraphic diagram we might draw this as:

Sequence
{
 C_Date 1066;
 R_Date 950 30;
 C_Date 1087;
};
The implications of such a simple sequence are fairly obvious in that the original probability distribution for the radiocarbon date will simply be truncated at the two dates 1066 and 1087. The value of analysis only becomes significant in more complicated situations where the implications of the stratigraphic information are not so obvious.

A very important point must be made which is that radiocarbon dates often do not directly date the context itself and so a naive use of stratigraphic information to refine the dating of the objects can be quite wrong. As an example sample A in pit 1 may be older than sample B in pit 2 even if pit 2 is older than pit 1.

The taphonomy of a site must be carefully considered in constructing a chronological stratigraphy from the physical stratigraphy.

In general the relative order of all samples is rarely known but various stratigraphic groupings can be defined.

See also [Program Operation] [Mathematical Methods]

Phases

A phase can be defined as a group of items for which one has no information about the relative ages but all of which share some relationships with items outside. You might for example have two radiocarbon dates:
Phase
{
 R_Date 2700 30;
 R_Date 2800 35;
};
This might then be part of a sequence.

If the samples form a coherent group then they should be enclosed within Boundaries.

See also [Program Operation] [Warning] [Example]


Sequences

A sequence is here defined as a group of events or phases which are known to follow one after another with no possibility of overlap. For example a fragment of a model might include:
Sequence
{
 R_Date "A" 2760 35;
 Phase
 {
  R_Date "B" 2700 30;
  R_Date "C" 2800 35;
 };
 R_Date "D" 2660 35;
};
The stratigraphic information from most sites can in fact be written solely in terms of nestings of phases and sequences. However, to use sequences properly an understanding of Boundaries is needed (see Steier and Rom 2000 and comments on that paper). In this program it is also possible to define a minimum gap between two events in a sequence so that you might have:
Sequence
{
 C_Date 1066;
 Gap 10;
 R_Date 950 30;
 C_Date 1087;
};
(Please note that in this manual and for this program sequences are always written in the order old to young although they can be displayed in reverse order for consistency with physical archaeological stratigraphy).

See also [Program Operation] [Example] [Warning]


Boundaries

The basic assumption underlying the analysis performed by this program is that the dates of the events being analysed are randomly selected from a uniform distribution. Without any other information, the program will assume a priori that the period from which they are selected has no limits. In practice, this is usually not the case (for associated dangers see Steier and Rom 2000 and comments on that paper) and the events are selected from a slice of time with a start and a finish. To tell the program this, the Boundary command is used.

As an example in the case given above we might have:

Sequence
{
 Boundary Start;
 Sequence
 {
  R_Date 800 35;
  Phase
  {
   R_Date 750 30;
   R_Date 800 35;
  };
  R_Date 660 35;
 };
 Boundary End;
};
Any coherent group of events should be contained within boundaries in this way in order to signal that they all belong to one period.

See also [Using boundaries]


Sequences with known age gaps

One specific type of stratigraphic information, used in `Wiggle Matching', is a sequence where the gap between specific events is known precisely in terms of calendar years. Tree rings are the most obvious example of this but some forms of sedimentary deposit also lend themselves to this type of treatment. One might draw a stratigraphic diagram for such a system as:
D_Sequence
{
 R_Date 2760 35;
 Gap 30;
 R_Date 2910 30;
 Gap 30;
 R_Date 2870 35;
};
See also [Program Operation] [Example] [Warning]

Sequences with approximate gaps

Another type of stratigraphic information which is occasionally encountered is a sequence of layers or events with a gap which is known approximately (from sedimentation rates, peat growth etc.). This is in principle very similar to the previous type of sequence:
V_Sequence
{
 R_Date 3000 35;
 Gap 30 20;
 R_Date 2910 30;
 Gap 30 20;
 R_Date 2870 35;
};
See also [Program Operation] [Example] [Warning]

Termini

To return to more common archaeological situations we must also consider events which define a terminus ante quem (TAQ) or terminus post quem (TPQ) within a sequence. If for example a coin dated to 1066 is found between two archaeological samples in a sequence it follows that the later sample in the sequence must have been (deposited) after 1066 but the earlier sample might be before or after:
Sequence
{
 R_Date 980 35;
 TPQ
 {
  C_Date 1066;
 };
 R_Date 930 30;
};
See also [Program Operation] [Example] [Warning]

Cross Linking

Occasionally there is some archaeological information which links two fairly independent sequences. The ability to deal with such relationships is provided in this program by allowing references to items in previously defined sequences. An example might be:
Sequence
{
 R_Date "A" 900 30;
 R_Date "B" 830 35;
};
Sequence
{
 R_Date "C" 940 35;
 TPQ
 {
  XReference "A";
 };
 R_Date "D" 890 70;
};
Such references should be used with caution and combinations of sequences and phases used where possible.

See also [Program Operation] [Warning]


Warning

A very important point must be made which is that radiocarbon dates often do not directly date the context itself and so a naive use of stratigraphic information to refine the dating of the objects can be quite wrong. As an example sample A in pit 1 may be older than sample B in pit 2 even if pit 2 is older than pit 1.

The taphonomy of a site must be carefully considered in constructing a chronological stratigraphy from the physical stratigraphy.



Information from Analysis [Up][Contents][Index]

Information from Analysis

In the first instance this program is designed to take into account stratigraphic information from a site and modify the probability distributions obtained directly from radiocarbon calibration or other dating methods (called `prior' probability distributions) in the light of this additional data (producing so called `posterior' probability distributions). There are however other, equally important, types of information which can be obtained. See also [Program Operation]

First and last dated events

For a given group of dates which may be constrained in some way by stratigraphic information it is useful to be able to obtain a probability distribution for the first and last members of the group. eg:
Sequence
{
 Boundary Start;
 Sequence
 {
  R_Date 800 35;
  Phase
  {
   First;
   R_Date 750 30;
   R_Date 800 35;
   Last;
  };
  R_Date 660 35;
 };
 Boundary End;
};
It should be stressed that using this to estimate the start and end of phases relies on the fact that the distribution of dated samples within the group is representative of the archaeological phase in question. If no objects have been recovered from the first century of a period no amount of statistical analysis can determine when that period began! Furthermore, if there are no dated events prior to the period and a large number of dated events within it, statistical analysis is liable to indicate that the period started earlier than it actually did simply because of the inevitable scatter in the measurements. These caveats are no more or less relevant to non-mathematical methods of analysis and simply imply good archaeological practice in bracketing periods.

See also [Using boundaries] [Program Operation] [Example] [Mathematical Methods]


Duration of phases and sequences

The next type of information which one might wish to glean from the analysis is the span of a group of dates. A probability distribution can be generated which represents the difference in age between the first and last items in a group. eg.:
Sequence
{
 Boundary Start;
 Sequence
 {
  R_Date 800 35;
  Phase
  {
   R_Date 750 30;
   R_Date 800 35;
   Span;
  };
  R_Date 660 35;
 };
 Boundary End;
};
Clearly you should bear in mind the caveats mentioned in the preceding section.

See also [Program Operation] [Example] [Mathematical Methods]


Using Boundaries

The two previous sections outline one way in which a group of dated events can be treated in relation to archaeological phases. This approach assumes that the dated events are both well constrained and cover the archaeological phase from start to finish. An alternative approach is to assume that the deposition of dated artifacts is fairly uniform chronologically and use the distribution to estimate the boundaries of the archaeological phases using this model. This is the other function of the Boundary statement, which is used to mark which samples come from a set period. eg:
Sequence
{
 Boundary Start;
 Phase
 {
  R_Date 750 30;
  R_Date 830 30;
  R_Date 820 30;
  R_Date 760 30;
  R_Date 810 30;
  R_Date 800 30;
 };
 Boundary End;
 Span;
};
Using such a model will give a much more realistic estimate of the phase boundaries than simply assuming that the events are unconstrained (ie not using Boundaries at all) and estimating when the first and last events took place. If the phase is well constrained anyway the results will be very similar.

See also [Program Operation] [Example] [Mathematical Methods]


Interval between two events

It is often useful to be able to find out what the interval between two phases or two events was. A probability distribution can be obtained for such events which follow one after the other in a sequence. For example the fragment:
Sequence
{
 R_Date 800 35;
 Interval;
 R_Date 660 35;
};
It is also possible to calculate a probability distribution for the difference between any two events in an analysis (Difference).

See also [Program Operation] [Example] [Mathematical Methods]


The ordering of events

Sometimes you may wish to estimate the probability of various possible orders of dated events. Assuming that the dating evidence is good enough to provide the necessary discrimination such probabilities can easily be calculated (Order).

See also [Program Operation] [Example] [Mathematical Methods]


Reliability of stratigraphy

Clearly any analysis relies very strongly on the reliability of the information included. The analysis does include the calculation of some overall indicators of how well all of the data incorporated in the analysis agrees and which elements of the data are most suspect. It is frequently the case that there is some uncertainty associated with the stratigraphic evidence for an item (or indeed the date measurement itself). In these cases it is necessary to be able to find out how likely an item is to be in a particular place in a chronological sequence. If the position of an item is questioned (Question) in this way the item will be ignored in the main analysis and a probability calculated. Consider for example the fragment:
Sequence
{
 R_Date 970 35;
 R_Date 1180 30?
 R_Date 930 35;
};
This would give a fairly low probability of being true (in fact 0.7%).

No provision has been made for assigning probabilities to the veracity of dated events as such a practice seems rather arbitrary and virtually impossible to justify.

See also [Program Operation] [Mathematical Methods]


Correlation between two events

The resultant probability distributions after analysis are not in general independent. For example two events in a sequence may have probability distributions which overlap but clearly, given the fact that they are in a sequence, the second one must always follow the first. It is, therefore, occasionally useful to be able to display a plot of one distribution relative to another. This can be achieved (Correlation) although it should be said that the resultant two dimensional map needs some practice in interpretation.

See also [Program Operation] [Example]



Program Operation [Contents][Index]

Program Operation

This section of the manual is intended to provide all of the information needed to make full use of the program.

Overview of Operation

Entering Information

Performing the Analysis

Graphical Display

Batch Processing


Overview of Operation


Model Building

Throughout this manual you will find references to commands such as Sequence {C_Date 1066; R_Date 950 50; C_Date 1087;}; whereas interaction with the program is normally through a purely graphical interface. The sequence is put together by dragging the (Sequence), (C_Date) and (R_Date) icons into the model you are building. This has the advantage that the user does not have to worry about the syntax of entering the information (brackets, semi-colons etc). However, underlying the windows interface the program creates a text file with a list of commands.

As an example of this consider the process of making up a multi-plot which was discussed in the section on producing a multi-plot. In that case an input window was opened and a few radiocarbon dates were added. Unknown to the user a command file was produced which might have looked something like:

Plot "Example 1"
{
 R_Date "OxA-1011" 2340 60;
 R_Date "OxA-1012" 3550 70;
 R_Date "OxA-1013" 3670 50;
};
The user then would use the button (or the [File|Analyse] menu item) to perform the calculation and the button (or the [File|Create Plots] menu item) to actually create and display the plot.

All information is added in essentially the same way. The right hand pane of the input window has a tree organised into the various different types of information you might wish to add.

Let us consider one of the examples given in section on `Archaeological Considerations':

Sequence
{
 Boundary;
 Sequence
 {
  R_Date 800 35;
  Phase
  {
   R_Date 750 30;
   R_Date 800 35;
  };
  R_Date 660 35;
 };
 Boundary;
};
To enter the data for this press the button to get a new window. Then find the (Sequence) icon in the tree in the right hand window pane and drag it onto the icon in the left hand pane. You will be prompted for a name for the sequence which is optional.

The program will then ask you if you wish to put automatic boundaries around this sequence - answer YES.

The first radiocarbon date would then be entered by dragging the (R_Date) icon over (ensuring that the sequence branch of the tree is expanded) and dropping it onto the 'Queries' icon within the sequence. The phase can be added in the same way but this time say NO to the addition of surrounding boundaries. The two radiocarbon dates would then be added to the phase (while it is expanded). The phase branch can then be minimised by pressing on the associated button. The final radiocarbon date is then added after the phase as before. Note that the sequence is in chronological order (oldest first).

Items can be added in any order. Just remember that if you wish to add items to a group (such as a phase) ensure that it is expanded (using if necessary) whereas if you wish to add an item just after the group it should be collapsed (using if necessary). Items can be moved around, copied, pasted and deleted in the normal way. To change the values for an item simply double click on the values. From the Windows interface it is always possible to delete the last item or change the data if some mistake has been made.

The actual text of the input can be seen in the bottom left hand pane of the window by dragging up the bar just above the bottom. This text can be edited directly as long as you press the button first. Clearly such editing does not have the safeguards associated with Windows entry and so care must be taken to keep the syntax of the commands correct (see `CQL Command Summary').

Once the data has been entered save it using the button. Such files should have the file extension .14i. They can then be recalled using the button. You can copy and paste whole tree branches from one model to another if you wish.

Adding a large number of (for example) radiocarbon dates from a database or spreadsheet is easy. The format should be two columns (date and error) or three (name, date and error). Simply copy the data from the database/spreadsheet and paste it into the model tree.
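For example, a block of three columns like the following (separated by tabs when copied from a spreadsheet, and using the illustrative laboratory numbers from the multi-plot example above) could be pasted straight into the model tree:
OxA-1011	2340	60
OxA-1012	3550	70
OxA-1013	3670	50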


Analysis

Having entered the information the analysis can be performed by clicking on the button. Make sure that the left hand pane of the correct input window is selected. In the example we are considering this will first of all perform the calibrations and then perform a statistical method called `Markov Chain Monte-Carlo (MCMC) Sampling' to incorporate the stratigraphic information (see Buck et al 1992 and Gilks et al 1996). When the calculations have finished a plot organiser window will be shown which allows you to select the form in which you want the results.


Plot and Results Organisation

The view of the plot organiser looks something like:

This looks rather like an input file and can be manipulated in a similar way. There are essentially two ways of using this: the toolbar can be used to generate the results in text form (using the button) or in plot form (using the button); alternatively individual elements can be selected by double clicking on them in the plot organiser window.

The results are given in two text formats:

the latter being most useful for entry into databases and spreadsheets.

Plots can be generated in several different forms using the toolbar:

Alternatively, by double clicking on individual items in the tree, plots of selected items can be generated. The plot for the prior distributions in this case looks something like this:

And the posterior distributions which are the result of the full analysis like this:

Here the dark histograms show the posterior distributions and the outlines the priors (no account taken of the constraints). Note that the boundaries mean that it has been assumed that all of the events come from one uniformly represented period. For this reason the last date is more likely to be similar to the others than to be an outlier.

This completes all of the steps needed to perform the calculations.

A large number of data files are produced during the calculations and when you have finished with all of these you can press the button to delete them for this project. Any plot archives which have been saved will not be deleted but all log files, data files and plot organisation files will be deleted.


Batch Processing

It is now fairly easy to use this program for processing large batches of samples from a spreadsheet or database:

Entering Information [Up][Contents][Index]

Entering Information

As has been explained in the Program Operation section, information can either be entered using the Windows interface (which is fairly self-explanatory) or in the form of commands in a run file. The `CQL Command Summary' section gives a complete description of how each of the commands is used and shows the icon used in the Windows interface. In this section only the commands will be referred to for reasons of brevity. If you cannot find the appropriate icon for any operation in the selection tree you should look it up in the `CQL Command Summary'.

In addition to the information here you should now be in a position to make sense of the commands given in the section on `Archaeological Considerations' and you should also be able to call up the example run files either using the button or simply by clicking on the example file icon in this manual.


Chronological Information

This is normally entered using R_Date in the case of radiocarbon dates and C_Date for calendar dates. If the errors on calendar dates are asymmetric (such as 1066+100-60) they can also be entered using C_Date (as in C_Date 1066 100 60;). Radiocarbon dates are always entered in radiocarbon years BP. Calendar dates can be either entered as BP or AD depending on the setting of the options set. If you are using BC/AD, BC dates should be entered as negative numbers thus C_Date -100 10 implies a calendar date of 100BC with a ten year error.

Within multi-plots or other groups dates can be offset using the Offset command. For example a carved piece of wood thought to be 60+-10 years old at the time of felling might have been radiocarbon dated. A probability distribution for its felling date would then be given by the two commands:

R_Date 980 50; 
Offset 60 10;
Note that the offset is positive to produce a later probability distribution.

Luminescence dates are another type of chronological information that can be entered. Assuming you are not simply entering them as calendar ages, the year of measurement, dose rate and error in the dose rate must be entered. Instead of entering the calendar ages you can then enter the sample estimated doses (prefixed by `d'). For example:

Plot
{
 Year 1994;
 Dose 2.0e-3;
 Error 5%;
 C_Date d1.0 d0.2;
 C_Date d1.1 d0.2;
 C_Date d1.3 d0.2;
};
In this case the first date will be calculated from the dose rate to be 500+-100 years before 1994 and then an additional error of 5% added in. The error is always given in terms of a percentage as above or as a proportion (as in Error 0.05;). If the error is defined within a combination (Combine) it will not be applied until after the combination has been performed.

Dating Simulation

To use the radiocarbon dating simulation procedure use R_Simulate giving the calendar age expected and the precision expected from the radiocarbon lab. Thus for a date in the British Iron Age you might try:
R_Simulate -500 60;
In this case you will find that the errors associated with the radiocarbon dates are always large. Every time you recalculate this you will get a different radiocarbon date (with a similar distribution to the measurements you would expect to get).

See also [Archaeological Considerations]


Combinations

Combinations of all kinds can be performed with Combine. If radiocarbon dates are to be combined before calibration R_Combine should be used and if you wish to combine calendar ages with a chi squared test you should use C_Combine. The command Sum can be used if you wish to average distributions (equally weighted) to arrive at a frequency distribution (this does not relate to a single event). An agreement index is produced for combinations of distributions.

See also [Archaeological Considerations]


Stratigraphic Information

This sort of information can usually simply be entered using nested sequences and phases with associated boundaries (Sequence, Phase and Boundary). In a sequence you can also ensure that there is a gap by using a Gap command.

Note that groups of related events (coming from one period) should be enclosed with boundaries. The 'Auto Boundary' feature of the program is designed to help with this. When you add a phase or a model the program asks whether the group is a well defined separate group (rather than being just a part of a larger group). If you answer yes the phase or sequence will be bracketed by boundaries.

Within sequences termini ante quem and termini post quem can be defined using TAQ and TPQ.

The special case of `wiggle-matching' is covered by the defined sequence command (D_Sequence). In such a group each item must be separated by a Gap command giving the separation between the measured samples. The same calculation can also be performed in a slightly different way using Combine (see D_Sequence). The similar case of sequences where the gap is known only approximately is covered by the variable sequence command (V_Sequence) within which each item must be separated by a gap with an error term.

It is also possible to put extra constraints on a date by referring to it in more than one place using the command XReference. Consider the example from `Archaeological Considerations':

Sequence
{
 R_Date "A" 900 30;
 R_Date "B" 830 60;
};
Sequence
{
 R_Date "C" 940 60;
 TPQ
 {
  XReference "A";
 };
 R_Date "D" 890 70;
};
Here: A must be before B and D; B must be after A; C must be before D; D must be after A and C.

NOTE: that cross references can be conveniently entered using the Windows interface by holding down the [Ctrl] key and dragging from the cross reference to the new position.

See also [Archaeological Considerations]


Requesting additional information from analysis

Another aspect of entering the data is deciding what additional information will be required from the analysis.

Three types of information can be requested for any group of dates (in a phase, sequence etc): the probability distribution for the first date (First), the last (Last) and the difference between the two (Span). It is also possible to request the interval between items in a sequence using Interval or the difference between any two dates using Difference.

So as an example where they have all been requested one might have:

Sequence
{
 R_Date "K" 2760 60;
 Interval "I";
 Phase "1"
 {
  First "B";
  R_Date "L" 2700 50;
  R_Date "M" 2800 60;
  Last "E";
  Span "S";
 };
 R_Date "N" 2670 60;
 R_Date "O" 2660 60;
 Difference "D" "N" "K";
};
In this example the distributions B and E will be plotted with the distributions for K, L, M, N and O which are produced by the analysis. Distributions I (which gives the interval between K and the first item in phase 1), S (which gives the span of phase 1) and D (the time between K and N) will all be plotted on a separate page of the analysis output since they represent age differences rather than absolute ages.

NOTE: that to enter the parameters for Difference using the Windows interface you can just hold down the [Ctrl] key and drag the parameters onto the (expanded) icon. See also Shift.

If you wish to question the presence of an item in a sequence this is done by ending the command with a `?' instead of a `;' or by dragging over the icon. This removes the constraints imposed by the position of this sample in the sequence and tells the analysis program to calculate the probability that a sample should be in this position in the sequence (see section on `Probability and agreement indices' and Question).

Correlations between two events can be plotted using Correlate; this gives plots like:


Probabilities of being before and after events

Two functions are also provided for calculating probability distributions for years being before (Before) or after (After) an event or group of events. Consider, for example, a radiocarbon date for an event: we wish to find a probability distribution which covers the period after the event; this will be given by A, calculated by the following command:
After "A" {R_Date 3050 60;};
The main use for such distributions is for use in combinations where you might wish to add into a probability distribution the fact that the event must be after or before another.

Ordering of events

A command Order can be used to estimate the probabilities of all possible orders of a series of events. This grouping can be used in exactly the same way as Phase. As a simple example the following sequence of commands:
Order
{
 R_Date "A" 1100 50;
 R_Date "B" 1000 50;
 R_Date "C" 900 50;
};
gives the resultant probabilities:
 74.2% A B C 
 20.4% A C B 
  5.1% B A C 
  0.1% B C A 
  0.1% C A B 
Note that probabilities below 0.1% are not shown and that a maximum of 50 different orders are reported. There is also a limit of 50 on the number of items that can be ordered in this way.

See also [Archaeological Considerations]


Adding extra plotting instructions

At the data entry stage you can also decide on some aspects of the final plots. Page breaks can be put into multi-plots with Page, horizontal dividing lines with Line and visible comments with Label. The axes can also be defined by using Axis.
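As a rough sketch of how these might be used together (the names and values below are purely illustrative; the exact form of each command is given in the `CQL Command Summary'):
Plot "Site plots"
{
 Label "Area A";
 R_Date "A" 2450 40;
 Line;
 R_Date "B" 2520 35;
 Page;
 Label "Area B";
 R_Date "C" 2380 45;
};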

Removing lines from a command file

Lines can be temporarily removed from a command file by starting the line with an `!'. This allows comments to be added to a run file and lets temporary changes be made without losing the input data (see also Comment).
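For example, in the following fragment (with purely illustrative values) the second date has been temporarily excluded and a comment added:
Sequence
{
 R_Date "A" 900 30;
!The following date is excluded pending further checks
!R_Date "B" 830 35;
 R_Date "C" 790 40;
};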

Performing the Analysis [Up][Contents][Index]

Performing the Analysis

Once the data has been entered the analysis can be performed simply by pressing the button (or using the menu item [File|Analyse...]). You might need to activate the top left hand pane of the relevant input window before doing this. Recalculation can be performed by the same procedure but note that the data-files will be overwritten.


Calibration and Calculation

The first stage is the calibration of radiocarbon dates (the methods used are similar to those used by Stuiver and Reimer 1993 and van der Plicht 1993; the error terms in the calibration curve are taken into account - see also Dehling and van der Plicht 1993) and calculation of other distributions (these are C_Date, R_Date and R_Simulate). The distributions produced by this stage could be referred to as 'prior' distributions (a term from Bayesian statistics - see Bayes 1763 and Doran and Hodgson 1975) because they represent the state of our knowledge before any stratigraphic information has been included. Next all of the calculations which can be done analytically are performed (these are After, Before, C_Combine, Combine, Difference, D_Sequence, First, Last, Offset, R_Combine, Shift and Sum).

See also [Mathematical Methods]


MCMC Sampling

If it is required, the second stage of analysis is automatically started. This stage uses a method called `MCMC Sampling' to incorporate the stratigraphic evidence (see Buck et al 1992 and Gilks et al 1996). This method will be applied only if Correlate, Interval, Order, Span, Sequence, TAQ, TPQ or V_Sequence have been used. This stage of the calculation can take quite a long time and a dialog box is displayed showing progress. It is quite normal for a few error messages to appear in this window during the first few seconds of the analysis as the stratigraphic order is resolved. If the constraints you have entered are impossible to fulfill the message `cannot resolve order' will persist and you should cancel the analysis and try to sort out what is wrong (check first that you have entered any sequences in chronological order - oldest first).

During the sampling information is displayed indicating how it is progressing. A typical message would be something like:

 Done: 43.2%  Ok: 100.0%  C>=98.6%
indicating that the sampling process is 43.2% complete, all of the iterations fit the constraints and the worst convergence value so far has been 98.6%.

Note that if the convergence is poor to begin with the program will continue to lengthen the sampling time until it has risen above 95%.

See also [Mathematical Methods]


Calculation Times

Obviously calculation times are very difficult to predict as they depend both on the nature of the data and the computer being used. With greater than 100MHz Pentium computers even quite complex models only take a few minutes to run. Some models may require many more iterations to converge properly than others. In general it is best to avoid very deeply nested phases and boundaries.

Relationship files

If you are interested in the details of how the MCMC sample is being performed it is possible to view the data file which defines the relationships between the samples. To do this you double click on the icon in the plot organiser after the analysis has finished (see example). See also [File Formats]

Log files

Text versions of the results are always written out to two files which can be viewed by pressing the button or double clicking on the relevant icons in the plot organiser window. See the following examples. The editor is only capable of dealing with log files up to 40kB long. If the log file is longer than this the program will fail to open it and you will have to use an alternative editor.

Probabilities and agreement or likelihood indices

Some indication is clearly needed as to how well the data agree with the stratigraphic constraints.


Chi squared test

In the case of combinations of dates (prior to calibration) a chi squared test is done (see Shennan 1988 p65). An error message will be generated if the confidence limits drop below 5%. The results of the chi squared test are given on the plot at the head of the group to be combined and will look something like:
R_Combine 913+-5 (df=3 T=1.9(5% 7.8))
The value given for T is the chi squared value calculated and the value given in brackets is the level above which T should not rise (the degrees of freedom are given by df).

Agreement index

In the case of other types of analysis each posterior distribution is given an agreement index which is displayed on the plot with the sampled distribution name. The mathematical definition for this is given in the appendix but it indicates the extent to which the final (posterior) distribution overlaps with the original distribution. An unaltered distribution will have an index of 100% but it is possible for the value to rise above this if the final distribution only overlaps with the very highest part of the prior distribution. If the value of this for any individual item is below 60% it may be worth questioning its position in the stratigraphy and an error message is generated (this level of disagreement is very similar to that for the 5% level chi squared test).

See also [Mathematical Methods]


Overall agreement of models

For a group of items (such as a sequence) it is possible to define an overall agreement index which is a function of all of the indices within the group (see the appendix on `Mathematical Methods'). If this falls below 60% it may be worth re-evaluating the assumptions made. This overall agreement is shown on the plot at the top of the sampled group and will be in a form like:
Sequence {A=100.9%(A'c=60.0%)}
where A is the calculated overall agreement index and A'c is the level below which it is not expected to fall.

See also [Mathematical Methods]


Overall agreement for combinations

In the case of combinations (Combine and D_Sequence) an agreement index is calculated which is similar to the overall agreement index. Since all of these dates are correlated the criterion for agreement is slightly different - again the program will indicate if the agreement is poor (again this threshold is similar to the 5% chi squared test). Such agreement indices will be shown in the plot in the form:
Combine test [n=4 A=124.4%(An=35.4%)]
where A is the calculated agreement index and An is the value (dependent on n) below which it should not fall.

Related to this agreement index is a value calculated if you question a value for a combination (Combine) or a wiggle match (D_Sequence). This value is again about 100% if the questioned item combines as well as expected and decreases in proportion to the probability if the combination is not very likely. The value of this can also rise higher than 100% if the agreement is unusually good.

See also [Mathematical Methods]


Probabilities

If you question the position of an item in a sequence a probability is calculated instead of an agreement index. This will always be less than or equal to 100% and gives the probability (given the prior distribution) that the item comes from the particular place in the stratigraphy selected. This value might be fairly low, even where the agreement would be fine, if the constraints are very stringent and the initial distribution is wide.

See also [Mathematical Methods]


Convergence

The convergence is a measure of how quickly the MCMC sampler is able to give a representative and stable solution to the model. Details of the measure used are given in the section on Mathematical Methods.

The number of iterations is automatically increased until the convergence is satisfactory.

The convergence can also be studied in more detail by opting to store convergence data during the sampling process (see Calculation options). If this is done then after the calculation the convergence for individual distributions can be seen in square brackets either in the plot organiser or on the plots themselves.

If convergence data has been included the actual sampling process can be observed by clicking on the button or using [File|Individual plots]. The resultant plot will look something like:

The dots each represent single samples. This is only a small section of the total sampling run but it allows you to see if the model is getting 'stuck' in particular parts of the distribution.


Calculation options

There are several options relevant to the calculation methods and the reporting format. All relevant options can be accessed by using the (or the [File|Analysis Options...] menu item). Options will be automatically saved from session to session.


Calibration Curve

The most obvious option is the data file which is used for the calibration curve. To change this simply use the [Browse] button on the dialog box. Different calibration curves can be used for different samples using Curve.

There is another option relating to the calibration curve: whether or not a cubic function is used in interpolating the calibration curve (see mathematical methods for details) - this produces a smoother looking curve and distributions but makes very little difference to any numerical values. See also [Calibration Data] and [Resolution]


Reporting

The way in which calendar dates are reported and read in is affected by the first two options. Calendar dates can be given as BP (before 1950) instead of BC/AD; the strings `BP', `BC' and `AD' can be omitted (using `-' for BC). The third option relates to the way in which sequences are reported and displayed: the normal order for sequences is oldest first (chronological) but this can be reversed to correspond to archaeological stratigraphy (youngest at the top); the data must still be entered in chronological order; only Sequence, V_Sequence and D_Sequence are affected by this option.

Resolution

This defines the resolution to which the calibration curve (and any calculated distributions) are stored. Obviously calculations become slower and the related files larger for a finer resolution. Assuming the resolution is set at less than 20 years the results will still be given to the nearest year; above this they will be given to the nearest 10 years.
______________________________________________

Storage    Result      
Resolution Resolution
_____________________

   1         1       
   2         1       
   4         1       
   6         1       
   8         1       
  10         1       
  15         1       
  20        10       
 100        10       
 200       100       
1000       100       
_____________________
See also [Calibration Data]

Ranges

Any combination of one, two and three sigma ranges can be selected. The ranges can be calculated by the intercept method (only relevant to radiocarbon dates) or the probability method. The ranges can be forced to be whole (that is not divided up into segments); for the probability method this produces `floruits' and in the case of the intercept method the gaps in the ranges are simply removed giving one single range.

An option for rounding range values is provided. This will always round ranges outwards and the resolution of the rounding is dependent on the total range and the storage resolution.

______________________________________________

Total range        Round to the nearest
______________________________________________

   1 -   50          1 year
  50 -  100          5 years
 100 -  500         10 years
 500 - 1000         50 years
1000 - 5000        100 years
...                ...
______________________________________________
If the storage resolution is 4 years the ranges will be rounded to the nearest 5 years regardless of how short the total range is, if rounding is switched on.

If you prefer, the resolution of rounding can be set by the user.


Advanced settings

These are for advanced manipulation of the MCMC analysis.

The Uniform span prior affects the way sequences of bounded events are treated (see mathematical methods). This option should normally be ON. It can be set to OFF for compatibility with previous (earlier than 3.2) versions of the program.

Inclusion of the convergence data is dealt with above.

The inverse square modelling option allows analysis on an inverse time scale rather than a linear scale. This can be useful at the limit of radiocarbon or when dealing with very long timescales (see Bronk Ramsey 1998).

If the distributions after analysis are not sufficiently smooth, you may wish to change the default number of iterations for the MCMC sampler. This is normally set to 30k. Note that the program will automatically increase the number of iterations if the convergence is poor.


Input

The only option here is the default event type. This can be used for pasting in events of different types from data on a spreadsheet. The command string for this event will also not be shown on plots, log files etc.

Default system options

These are the options set for the program as it is supplied and should be set back to these values if you have problems.
________________________

Option          Setting  
____________________________

Calib curve     intcal04.14c

Cubic interpolation       on

Use BC/AD (not BP)        on
Use -/+ for BC/AD        off
Reverse plot order       off

Resolution                 5

1 Sigma ranges            on   
2 Sigma ranges            on
3 Sigma ranges           off
Probability method        on
Round off ranges          on
Round by		auto
Whole ranges             off

Uniform span prior        on
Include conv data        off
Inverse square modelling off

Default iterations       30k

Default event type    R_Date
____________________________

Command line equivalents

The resolution and (BP/AD/BC) options are also stored with each CQL command file. The form these options take is a string beginning with a `-'. The forms of this string are shown below and can be entered in this form in the command line version of the program.
-afilename   append log to a file*
-b1          BP                     -b0     BC/AD
-cfilename   use calibration data file
-d1          plot distributions     -d0     no plot 
-fn          default iterations for sampling in thousands
-g1          +/-                    -g0     BC/AD/BP 
-h1          whole ranges           -h0     split ranges 
-in          resolution of n
-ln	     limit on number of data points in calibration curve (see Resolution)
-m1          macro language         -m0     simplified entry
-n1          round ranges           -n0     no rounding
-o1          include converg info   -o0     do not include 
-p1          probability method     -p0     intercept method 
-q1          cubic interpolation    -q0     linear interpolation 
-rfilename   read input from a file+
-s11         1 sigma ranges         -s10    no 1 sigma ranges
-s21         2 sigma ranges         -s20    no 2 sigma ranges
-s31         3 sigma ranges         -s30    no 3 sigma ranges
-t1          terse mode             -t0     full prompts
-u1          uniform span prior     -u0     as in OxCal v2.18 and previous
-v1          reverse sequence order -v0     chronological order
-wfilename   write log to a file*
-yn          round by n years       -y0     automatic rounding

* Note that with either of these options the tabbed results will then be sent to the console output and can therefore be redirected to a file or a pipe;  the standard DOS redirection > or >> can be used instead if only the log file needs redirecting.

+ Note that the standard DOS redirection < can also be used.
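
As an illustration (the executable and file names here are hypothetical), a batch run which reads a model from a file, uses the probability method with a resolution of 5 years and writes the log to a file might be invoked as:

OxCal -rmodel.14i -i5 -p1 -wresults.log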



Graphical Display [Up][Contents][Index]

Graphical Display


Overview of Plots

The graphical display of the results from the calculations and analysis is normally in the form of a multiple plot which is generated from the plot organiser document by pressing the button.

The form of the plots is generally determined by the stratigraphic relationships and the type of calculations performed. The plots are divided into up to four pages or groups of pages.

As well as the actual distributions, details of the stratigraphic structure imposed on the data are also displayed down the left hand axis. This should allow anyone else wishing to repeat the calculations to see at a glance how the data have been set up. A reference string is also printed at the top of the graph which indicates which calibration curve has been used and the important system options from the OxCal program.


Plot Options

There are a number of options which can be selected before a plot has been created. These are all accessed via the [View|Plot Options] menu item.

Options

These are altered through the dialog box:

Calibration data: whether or not to view the plots on the calibration curve and whether radiocarbon ages are given as percent modern or BP.

Show: you can decide whether to display the ranges (if they have been calculated); if the distributions are to be shown these can be solid black or in outline and there is an option to normalise all distributions to the same area; the prior distributions can be shown in outline on posterior plots.

X-Axis: the default is BC/AD - Calibrated BP or Radiocarbon BP can also be selected; the label can be omitted.

Multiple Plots: plots can be forced to be individual (this also displays convergence data if it has been included at the time of calculation); the analysis structure and agreement indices can be shown or not as required; the number of plots per chart can also be altered.

Once a plot has been created these aspects can be changed by reloading the plot using the button or [File|Options] on the plot viewer.


Style

These options allow you to control some aspects of the plot style. They can either be determined before plot creation by using the [View|Plot Options|Style] menu item or, after creation of a plot, by using the [View|Style] menu item.

General: the relative size of the text can be altered; references and page numbers are optional as is the use of colour and italic labels for posterior distributions; the alignment grid is also optional.

Plotting can either be as smooth polygons (default) or as rectangular histograms.

Single Plots: whether or not to show the gaussian distribution and the calibration curve.

Correlation plots: solid fill and contour plots can be chosen.


Control over the Plotting Procedure

The form of the plots can be altered in four ways - each of which have their advantages:

Commands embedded in the model

The plot can to some extent be altered at the data input stage by inserting page breaks (Page), horizontal lines (Line) and labels (Label). The range of the axes can also be defined (Axis).
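
For example (the dates and labels here are purely illustrative), a plot might be laid out as:

Plot "Layout Example"
{
 Label "Trench A";
 R_Date "A1" 1100 50;
 R_Date "A2" 1050 50;
 Line;
 Label "Trench B";
 R_Date "B1" 950 50;
 R_Date "B2" 900 50;
 Page;
 Label "Trench C";
 R_Date "C1" 850 50;
 R_Date "C2" 800 50;
 Axis 600 1300;
};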

The advantage of this method is that if the model is changed slightly and recalculated the plots will still be properly formatted.


Modification in the plot organiser

The plot organiser can be used to re-order items, delete those that are not required, and so on. Axis specifiers, page breaks, labels, lines and groupings can also be added at this stage by dragging the relevant icons from the selection in the right hand pane of the window:

The organisation you create can be saved with a different file name to ensure that it is not over-written by a repeat calculation. This gives considerable flexibility in arranging plots.

By raising the bar at the bottom of the window the text version of this plot organisation file can be viewed and changed. The format is given in the section on File Formats.

To access the raw data of the individual calibrations, right mouse click on the relevant icon in the plot organiser - this will bring up an editor window. This will not work if convergence data has been included as this makes the files too large to edit. See section on file formats.


Alteration of created plots

Once a plot has actually been created the scope for change is more limited but the axes can still be adjusted, the fonts changed, labels reworded and so on. Most of these changes are performed using the [View] menu of the plot viewer. More fundamental changes can also be made by using the button or [File|Options] but this will undo any changes made through the [View] menu.

Using Plots

The plots can be printed out directly using the [File|Print] menu item as with most Windows programs. You can also copy the plots onto the clipboard for pasting into word-processor files or other drawing packages using [Edit|Copy graphics].

Viewing the Calibration Curve

You can easily view the calibration curve by pressing the button. You can scroll around the calibration curve by clicking on the button: this will create a scroll control which allows you to look through the curve.

It is also possible to plot results on the calibration curve while viewing a plot after calculation in a subsidiary window. This is most easily achieved by using the button.



Tutorial Examples [Up][Contents][Index]

Tutorial Examples

In order to help users learn how to use the system there are a number of example input files supplied with the program. If you followed the standard installation they will have been installed on your computer in the ./Manual/eg subdirectory. They can either be opened in the usual way for Windows programs (using [File|Open] or the button) or called directly from the manual by clicking on the relevant icon.
You may need to configure your Browser to use OxCal to view the model definition (.14i) files.

These examples are all constructed to illustrate how the program can be used - they are not genuine archaeological examples. For real examples see Bronk Ramsey and Allen 1995, Bayliss et al 1997 and Needham et al 1998.


Plot Example 1

eg_plot1.14i

This example is simply a series of dates for calibration. Use is also made of the Line command to get a horizontal divide in the plot. Call the file up by clicking on the above icon. This should give you a window which looks something like this:

This window has four panes of which only the top two are visible. The right hand pane contains all of the items that you might wish to add to a plot and the left hand pane contains the plot as it has been constituted. To add extra items you would simply have to drag items from the right hand pane to the left. By dragging the frame up you should be able to see the bottom two panes too; in the left hand one of these is the text of the CQL command file which looks like this:

Plot "Test Plot 1"
{
 R_Date "OxA-1000" 3860 60;
 R_Date "OxA-1001" 3956 60;
 R_Date "OxA-1002" 3890 50;
 Line;
 R_Date "OxA-1010" 3640 60;
 R_Date "OxA-1011" 3530 60;
 R_Date "OxA-1012" 3450 60;
 R_Date "OxA-1013" 3560 50;
};
To perform the calibration simply press the button (or use [File|Analyse...]). You should be able to see the plot organiser window which looks like this:

This window allows you to organise the plot as you wish (add labels, change orders ..etc). To actually generate the plot press the button or double click on the relevant icon within the plot organiser window.

To look at a single calibration plot simply double click on the relevant icon within the plot organiser window. To see all of the individual calibration plots press the button.

To get the results in text form press the button. This will open two text files: Log.14l contains a detailed description of the results, ranges etc; Tabbed.14l contains tabulated results which can be copied into spreadsheets, databases etc. Either of these can also be opened by double clicking on the or icon in the plot organiser window. The text results can be printed directly but there are no formatting options.

Any of the plots can be printed directly from the program or copied and pasted into word processor documents. Axes, labels and fonts can be changed at any stage through the [View] menu.


Plot Example 2

eg_plot2.14i

This example is also a plot but this time showing more types of chronological evidence including calendar dates, offset dates and an integrated probability distribution. The offsets will take some time to calculate. Also shown in this example are simulated radiocarbon dates which use the function R_Simulate. In these a calendar date is specified and the program generates the sort of radiocarbon date you might expect to get for a sample of this age with an error of the size specified. Much use has been made of these in the subsequent examples - they will give different values every time they are run.

Plot "Example Plot 2"
{
 R_Date 1000 60;
 R_Date 900 60;
 R_Date 800 60;
 C_Date 1000 50;
 C_Date 1100 50;
 C_Date 1200 50;
 R_Simulate 1000 50;
 R_Simulate 1100 50;
 R_Simulate 1200 50;
 R_Date 1000 60; Offset 0 10;
 R_Date 1000 60; Offset 100 10;
 R_Date 1000 60; Offset 200 10;
 Before "before"
 {
  R_Date 1000 60;
 };
};
This example can be treated in the same way as the previous one. An additional operation you might try is to double click with the right mouse button on any of the individual icons. This will bring up the actual numerical data for the plots. This can be copied to a spreadsheet for further analysis or plotting should you wish.

Combine Example 1

eg_comb1.14i

This is a simple example of a combination of radiocarbon dates.

Plot "Combine Example 1"
{
 R_Combine "Conquest"
 {
  R_Date 925 30;
  R_Date 875 30;
  R_Date 927 30;
  R_Date 924 30;
  R_Date 868 30;
  R_Date 936 30;
 };
 Axis 900 1200;
};
Once you have calculated this you might try recalculating it with some different values. To do this return to the input window and double click on the values you wish to alter - a dialog box will allow you to alter the values.

The Axis command has been used to define the limits of the x-axis in the final plot.


Combine Example 2

eg_comb2.14i

This example of a combination shows how gaps can be used to combine distributions with different relationships to the dated event. This is in effect a wiggle match.

Combine "Combine Example 2"
{
 R_Date 1066 30;
 R_Date 1016 30; Gap 50;
 R_Date 966 30; Gap 100;
 R_Date 916 30; Gap 150;
 R_Date 866 30; Gap 200;
 R_Date 816 30; Gap 250;
};
When you have calculated the combination try viewing the results on the calibration curve using the button: the first page will probably have a large number of overlapping boxes but the second page should show how the combination has fitted the results to the calibration curve.

Sequence Example

eg_seq.14i

This example shows a typical application involving a sequence of items which contains a phase (ie. items with no known relative age differences). Analysing this will need MCMC sampling and will therefore take longer. The example is also set up to calculate the beginning, end and span of the phase as well as the span of the whole sequence: in the input file you should find folders which contain these queries.

The enclosed sequence contains the obvious constraint information. The outer sequence with the two boundaries is needed to ensure that the prior is for events Poisson distributed within a limited period of time.

Sequence "Sequence Example"
{
 Boundary;
 Sequence
 {
  R_Simulate 0 30;
  R_Simulate 50 30;
  R_Simulate 100 30;
  R_Simulate 150 30;
  R_Simulate 200 30;
  R_Simulate 250 30;
 };
 Boundary;
 Span "span seq";
};

Phase Example

eg_phase.14i

This example shows how Boundaries can be used to give estimates for the boundaries of phases. One of the dates generated here is fairly close to the modern end of the calibration curve and an information message may be displayed: use the [Retry] button to continue. Phases can be treated as sequential (with a possible gap) or abutting. In this case phases 1 and 2 are allowed to have a gap whereas phases 2 and 3 are assumed to be abutting.

Sequence "Phase Example"
{
 Boundary "Start 1";
 Phase "1"
 {
  R_Simulate 950 50;
  R_Simulate 1000 50;
  R_Simulate 1050 50;
  Interval "Span 1";
  !calculates interval between
  !Start 1 and End 1
 };
 Boundary "End 1";
 Interval "Interval 1 to 2";
 Boundary "Start 2";
 Phase "2"
 {
  R_Simulate 1150 50;
  R_Simulate 1200 50;
  R_Simulate 1250 50;
  Interval "Span 2";
 };
 Boundary "2 to 3";
 !Phase 2 abuts phase 3
 Phase "3"
 {
  R_Simulate 1300 50;
  R_Simulate 1350 50;
  R_Simulate 1400 50;
  Interval "Span 3";
 };
 Boundary "End 3";
};
Note that, using this model, spans of phases should be calculated by using Boundary and Interval (or Difference) rather than Span which gives only the span of the dated events. For example Interval "Span 1" in this case gives the interval between Boundary "Start 1" and Boundary "End 1". The same distribution could have been obtained using the command:
Difference "End 1" "Start 1"

Order Example

eg_order.14i

This shows a simple use of the facility for determining the order of events. It is also possible to put further constraints on these items to be ordered by using cross referencing or by placing the group as a whole within a sequence.

Order "Order Example"
{
 R_Date "A" 1100 50;
 R_Date "B" 1000 50;
 R_Date "C" 900 50;
};
Note: As this is a fragment of code (which might form part of a larger model), no boundaries have been used; the program will warn you that there are no boundaries defined.

Terminus Ante Quem

eg_taq.14i

This example shows how a terminus ante quem can be used as a constraint. In this case the first event `OxA-1000' is known to be pre-conquest but we have no knowledge about the second event in the sequence.

Sequence "Terminus Ante Quem Example"
{
 R_Date "OxA-1000" 970 40;
 TAQ
 {
  C_Date "Hastings" 1066;
 };
 R_Date "OxA-1001" 980 60;
 Correlation "correlation" "OxA-1000" "OxA-1001";
};
The example has also been set up to show how a correlation plot can be used. You can see from this plot the relationship between the two dated events and their relationship to the date of the conquest.

Note: As this is a fragment of code (which might form part of a larger model), no boundaries have been used; the program will warn you that there are no boundaries defined.


Difference and Interval Example

eg_diff.14i

This shows how the difference between two dates can be evaluated using Difference. In this case the same result has also been obtained by using the Interval command between the two dates in question to find the interval.

Sequence "Difference Example"
{
 Boundary;
 R_Simulate 0 30;
 R_Simulate 50 30;
 R_Simulate "test1" 100 30;
 Interval "testi";
 R_Simulate "test2" 150 30;
 R_Simulate 200 30;
 R_Simulate 250 30;
 Boundary;
 Difference "testd" "test2" "test1";
};

Wiggle Matching Example 1

eg_dseq1.14i

This example takes a piece of long-lived wood from the Iron Age to show how such material could in principle be used to overcome the calibration problems of that period.

D_Sequence "Wiggle Matching Example 1"
{
 First "first";
 R_Simulate -550 60; Gap 50;
 R_Simulate -500 60; Gap 50;
 R_Simulate -450 60; Gap 50;
 R_Simulate -400 60; Gap 50;
 R_Simulate -350 60; Gap 50;
 R_Simulate -300 60;
};
When you have performed the calibration try viewing the distributions on the calibration curve using the button. This will show how the data has been fitted to the calibration curve.

To get this to cover the range of the curve that you want double click on the plot (which will activate in-place editing) and then use either [View|Explore Curve] or [View|Adjust Axes].


Wiggle Matching Example 2

eg_dseq2.14i

This example is similar to the previous one except that it is set up to show that in principle other types of information can be included in such an analysis.

D_Sequence "Wiggle Matching Example 2"
{
 First "first";
 R_Simulate 0 30; Gap 50;
 R_Simulate 50 30; Gap 50;
 C_Combine
 {
  C_Date 90 60;
  C_Date 100 60;
  C_Date 110 60;
 }; Gap 50;
 R_Simulate 150 30; Gap 50;
 R_Simulate 200 30; Gap 50;
 R_Simulate 250 30;
};

Wiggle Matching Example 3

eg_dseq3.14i

In this example the wiggle matching includes some unmeasured rings. The program then calculates ages for these rings using the defined gaps.

D_Sequence "Wiggle Matching Example 3"
{
 First "first";
 Event "Zero"; Gap 50;
 R_Simulate -550 60; Gap 50;
 R_Simulate -500 60; Gap 50;
 Event "UnK"; Gap 50;
 R_Simulate -400 60; Gap 50;
 R_Simulate -350 60; Gap 50;
 R_Simulate -300 60; Gap 50;
 Event "Death";
};

Variable Sequence Example

eg_vseq.14i

This indicates the way in which approximately known age differences can be used for a sort of wiggle match.

V_Sequence "Variable Sequence Example"
{
 R_Simulate 0 30; Gap 50 10;
 R_Simulate 50 30; Gap 50 10;
 R_Simulate 100 30; Gap 50 10;
 R_Simulate 150 30; Gap 50 10;
 R_Simulate 200 30; Gap 50 10;
 R_Simulate 250 30;
};
Note: This model does not contain any Boundaries; in this case they really are not necessary as long as the Gap constraints are fairly tight, since this is more like a wiggle match; the program will warn that there are no boundaries.

Multiple Example

eg_mult.14i

This contains various examples which show how the program might be used for more complicated analysis. In particular, use is made of the fact that the main elements of a multi-plot are calculated one after another so that it is possible to use the results from the first main group in subsequent ones.

Plot "Multiple Example"
{
 Phase
 {
  First "first";
  R_Simulate 140 30;
  R_Simulate 120 30;
  R_Simulate 130 30;
  R_Simulate 110 30;
  Last "last";
  Span "span phase";
 };
 After "after"
 {
  Prior "@first";
 };
 Before "before"
 {
  Prior "@last";
 };
 Combine
 {
  Prior "after";
  Prior "before";
 };
};


CQL Command Summary [Contents][Index]

CQL Command Summary

This section contains a description of how to use the CQL (Chronological Query Language) commands available in OxCal.

Index guide

Each entry in the alphabetical list consists of the keyword in bold followed by the icon used to access the command from the Windows interface followed by a definition of the syntax of the command. For example the entry for the command R_Date is:
R_Date
syntax = R_Date [name] date [error];
indicating that the command can be accessed by dragging the icon from the selection tree. In the syntax the values in italics are descriptive rather than verbatim and items in square brackets [item] are optional. Thus possible forms of this command are:
R_Date OxA-1000 3000 30;
R_Date OxA-1001 3000;
R_Date 3000 30;
In general in the syntax the term command implies that any command can be placed here but there are frequently restrictions. For example the function C_Combine can only be used to combine calendar dates and so although the syntax is given as:
C_Combine
syntax = C_Combine [name] { command; command; ...;};
the following will give rise to an error message indicating incorrect nesting:
C_Combine test {R_Date 3000 30; R_Date 3010 30;};
whereas what is expected is:
C_Combine test {C_Date 1000 30; C_Date 1010 30;};

Entry of Values


Dates

Dates can be entered as integers or floating point numbers. Radiocarbon dates are always assumed to be `Radiocarbon BP'. Calendar dates are usually given as BC/AD by use of the minus sign so that -100 indicates 100BC whereas 100 indicates 100AD. By setting the relevant system option it is possible to write all calendar dates as `Calendar BP' where 100 would indicate 1850AD.
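
For example (the label here is purely illustrative), a calendar date of 100BC with an uncertainty of 10 years might be entered as:

C_Date "Event A" -100 10;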

Doses

For the verb C_Date it is also possible to enter the dates in terms of luminescence accumulated doses. To do this the dose rate must first be defined using the verb Dose. Full scientific notation may be used:
Dose 1.5e-3;
If dose rates are to be used with C_Date they must be prefixed by the letter d so that you might have:
C_Date d1.23 d0.13;
Again scientific notation may be used.

Strings

Strings such as names or labels can simply be typed as they are if they contain no gaps:
R_Date OxA-3000 3030 50;
but normally they should be surrounded with quotation marks:
R_Date "Bone needle A" 3030 50;


Nesting of Commands

Most of the rules covering nesting are fairly obvious but the following points should be borne in mind:

Multi-Plots

A multi-plot may contain any other command except that a plot cannot be nested within a plot.

Functions returning values

These are After, Before, Combine, First and Last. They can contain another one of their own type but cannot contain sequences and phases. For technical reasons D_Sequence also falls into this category.

The special functions C_Combine and R_Combine can only contain C_Date and R_Date respectively (plus the display orientated commands).

Sequences and Phases

The commands Sequence, Phase, TAQ and TPQ can be freely nested inside each other. The special case V_Sequence may contain Sequence or functions but not any of the other sequence and phase commands.
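
As an illustration of these rules, the following hypothetical fragment nests phases and a TPQ within a bounded sequence:

Sequence
{
 Boundary;
 Phase {R_Date 3050 40; R_Date 3020 40;};
 TPQ {C_Date -1200;};
 Phase {R_Date 2800 40; R_Date 2780 40;};
 Boundary;
};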

CQL Command Listing [Up][Contents][Index]

CQL Command Listing


After
syntax = After [name] [{ command; command; ...;}];
calculates the probability of any given year following a group of events; as an example After {C_Date 1000;}; will yield 1 for all dates after 1000 and 0 for all dates before.
See [Program Operation] [Mathematical Methods]
Axis
syntax = Axis min max;
defines the x-axis limits for the plot produced - remember that the labels on the left of the plot obscure some of the area so make min a bit lower than you actually need.
See [Program Operation]
Before
syntax = Before [name] { command; command; ...;};
calculates the probability of any given year preceding a group of events; similar to After.
See [Program Operation] [Mathematical Methods]
Boundary
syntax = Boundary [name];
used to define which events in a model are from well defined periods and to estimate the boundaries of these periods using a model of uniform distribution; must always be used in conjunction with Sequence as in:
Sequence {Boundary; Phase {R_Date 750 50; R_Date 800 60;}; Boundary;};
it can be used between phases to estimate the boundary between abutting phases.
See [Archaeological Considerations] [Program Operation] [Mathematical Methods]
C_Combine
syntax = C_Combine [name] { command; command; ...;};
used to combine calendar dates; a chi squared test is performed.
See [Archaeological Considerations] [Program Operation] [Mathematical Methods]
C_Date
syntax = C_Date [name] [date [error [error]]];
used to generate a gaussian probability distribution about a calendar age with a given error term (1 sigma); if no error is given a single spike is produced; if two errors are given an asymmetric distribution is generated.
See [Archaeological Considerations] [Program Operation] [Mathematical Methods]
Calculate
syntax = Calculate [name];
used within a function group to ensure that only the result of the function is plotted in any outer group: thus the main plot for
Phase {R_Date "A" 900 60; Combine "B" {R_Date 950 50; C_Date 1000 50; Calculate;};};
will only display distributions A and B and not the details of the combination.
See [Program Operation]
Combine
syntax = Combine [name] { command; command; ...;};
used to combine probability distributions of all types.
See [Archaeological Considerations] [Program Operation] [Mathematical Methods]
Comment
syntax = Comment [name];
this command has no effect and is equivalent to starting a line with an exclamation mark; thus Comment "test comment"; is equivalent to !test comment which is more normally used.
See [Program Operation]
Correlate
syntax = Correlate name1 name2;
used to produce a correlation plot between two events which are otherwise related by some stratigraphic relationships; for example the commands
Sequence {R_Date "A" 1000 100; R_Date "B" 990 50; Correlate "R" "A" "B";};
will produce a correlation plot R for the two dates A and B.
See [Archaeological Considerations] [Program Operation]
Curve
syntax = Curve name [filename];
used to change the calibration curve used within the present group; note that the curve will be reset at the end of a phase, sequence etc; this command allows the user to calibrate a mixture of marine and terrestrial samples; the filename should always be specified the first time a calibration curve is used within one calculation.
See [Calibration Data]
D_Sequence
syntax = D_Sequence [name] { command; command; ...;};
used to combine dates when the age separation between them is known; this is most likely to be used for `wiggle matching' of radiocarbon dates made on tree ring sequences; for example three tree rings each separated by 100 years could be combined using
D_Sequence {R_Date 1100 50; Gap 100; R_Date 1020 50; Gap 100; R_Date 930 50;};
This is in fact similar to
Combine {R_Date 1100 50; Gap 200; R_Date 1020 50; Gap 100; R_Date 930 50;};
although the latter would behave differently if nested in a sequence since it will generate a resultant distribution.
IMPORTANT: dates are entered in chronological order (oldest first) although they can be displayed in reverse order (youngest at the top) by selecting the
[Options|System options|reverse order] option.

See [Archaeological Considerations] [Program Operation] [Mathematical Methods]
Delta_R
syntax = Delta_R Delta_R [error];
Used in association with Curve to generate a local marine calibration curve using the Delta_R offsets as defined in Stuiver and Braziunas 1993; reservoir values are available online from Queen's University Belfast; the basic marine curve is supplied with this program; to generate a marine curve for Iceland, for example, the commands:
Curve "marine98.14c"; Delta_R 49 19;
could be used; this function offsets the calibration curve in radiocarbon years.
See [Calibration Data] [Mathematical Methods]
Difference
syntax = Difference name name1 name2;
used for calculating the time difference between two dates; this function will only work within a Phase, a Sequence or a V_Sequence; to calculate a difference distribution between two unconstrained dates use the commands
Phase { R_Date "A" 900 50; R_Date "B" 800 50; Difference "R" "B" "A";};
where the resultant distribution R = B-A.
See [Archaeological Considerations] [Program Operation]
Dose
syntax = Dose dose_rate;
defines the site dose rate for luminescence type dating methods within a group; see also Year and Error; dose rates can be given in scientific notation for example Dose 1.3E-6;.
See [Archaeological Considerations] [Program Operation]
Error
syntax = Error error_term;
used for defining errors proportional to the age within a group; intended for use with Dose; if a series of events is being combined with Combine the error will be applied after combination.
See [Archaeological Considerations] [Program Operation] [Mathematical Methods]
Event
syntax = Event name;
used to determine the distribution of an event which is constrained in some way by the model but which has no direct dating information. A model consisting almost entirely of events can be constructed to check the effective prior distributions for the model.
See [Example]
Factor
syntax = Factor factor;
used to multiply dates within a group by a set factor (as measured from the present which is defined by Year); this command is not expected to be used much except possibly in combination with Dose.
See [Mathematical Methods]
First
syntax = First [name] [{ command; command; ...;}];
calculates a probability distribution for the first event in a group; it can be used either with its own group as in
First "f" {R_Date 1000 100; R_Date 1100 100};
or to calculate the start of another group as in
Phase {First "f"; R_Date 1000 100; R_Date 1100 100;};
See [Archaeological Considerations] [Program Operation] [Mathematical Methods]
Gap
syntax = Gap gap [error];
this command is intended primarily for use with D_Sequence (no gap error) and V_Sequence (with gap error); it can also be used in Sequence (no gap error) to ensure a gap between events in a sequence and in Combine (no gap error) where it functions rather like Offset.
See [Program Operation]
Interval
syntax = Interval [name];
used to calculate the interval between events in a sequence; for example
Sequence {R_Date "A" 900 50; Interval "R"; R_Date "B" 800 50;};
will find the expected interval between A and B; the same thing can be achieved with the more general command Difference.
See [Archaeological Considerations] [Program Operation] [Explanatory notes]
Last
syntax = Last [name] [{ command; command; ...;}];
used to calculate the probability distribution for the last event in a group; similar in operation to First.
See [Archaeological Considerations] [Program Operation]
Line
syntax = Line;
used to draw a horizontal line in multiple plots.
See [Program Operation]
Label
syntax = Label label;
used to insert a label in a multiple plot.
See [Program Operation]
Mix_Curves
syntax = Mix_Curves name name1 name2 proportion2 prop2err;
used to mix radiocarbon calibration curves; name1 and name2 must already have been defined using Curve statements; proportion2 and prop2err are the proportion and error in the proportion of the second curve in the mixture.
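For example (the file names and values here are illustrative, and the proportion is assumed to be entered as a fraction) a curve with a 30+-10% marine contribution might be defined using:
Curve "terrestrial" "intcal04.14c"; Curve "marine" "marine98.14c"; Mix_Curves "mixed" "terrestrial" "marine" 0.3 0.1;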
See [Calibration Data] [Mathematical Methods]
Offset
syntax = Offset offset [error];
used to offset distributions (a positive offset makes the distribution younger); for example an event dated by wood which had an age of 30+-10 years would have a probability distribution given by R_Date 3000 60; Offset 30 10;.
Offset should not be used for Delta_R corrections of marine samples as the offset is performed after calibration: in these cases Delta_R should be used.
See [Archaeological Considerations] [Program Operation] [Mathematical Methods]
Order
syntax = Order [name] { command; command; ...;};
used in exactly the same way as Phase except that the relative order of the events will be determined by the program.
See [Archaeological Considerations] [Program Operation] [Mathematical Methods]
Page
syntax = Page;
produces a page break in a multiple plot.
See [Program Operation]
Phase
syntax = Phase [name] { command; command; ...;};
used to group events between which there are no known relationships but which may all share some relationship.
See [Archaeological Considerations] [Program Operation]
Plot
syntax = Plot [name] { command; command; ...;};
used to group dates together for plotting purposes only.
See [Producing a multiple plot]
Prior
syntax = Prior name [filename];
used to access stored probability distributions (which could be provided by the user or saved from previous calculations); thus Prior "OxA-3000"; will retrieve the distribution from the file OXA3000.14D; to refer to a file already defined within a previously calculated sequence or phase use the command in the form Prior "@OxA-3000"; which will retrieve the file OXA3000.14S; the filename can be used to specify a file which has not been generated by an earlier part of this calculation.
See [Archaeological Considerations] [File Formats]
Question
syntax = Question;
used to question the position of an event, for example in a sequence; it is exactly equivalent to ending the previous command with a question mark instead of a semicolon; thus C_Date 1000 50; Question; is equivalent to the more normally used C_Date 1000 50? For example the commands
Sequence {R_Date "A" 900 50; R_Date "B" 800 50? R_Date "C" 700 50;};
will not use B in calculating the sequence but will give the probability that it occupies this position in the sequence.
See [Archaeological Considerations] [Program Operation] [Mathematical Methods]
R_Combine
syntax = R_Combine [name] { command; command; ...;};
used to combine radiocarbon dates before calibration; a chi squared test is performed.
See [Archaeological Considerations] [Program Operation] [Mathematical Methods]
R_Date
syntax = R_Date [name] date [error];
used for radiocarbon dates which are to be calibrated.
See [Archaeological Considerations] [Program Operation] [Mathematical Methods]
R_Simulate
syntax = R_Simulate [name] date [error];
used for seeing what kind of radiocarbon measurement would be expected for a sample with a given calendar age; thus R_Simulate 1066 50; will give a radiocarbon date that you might expect to get for the battle of Hastings assuming the error you expect from the radiocarbon lab is +-50; each time the command is called a different radiocarbon date will be produced.
See [Archaeological Considerations] [Program Operation]
Reservoir
syntax = Reservoir Reservoir_Age [error];
Used in association with Curve to generate a calibration curve for some sort of reservoir with a known time constant; a fresh water lake might have a mean reservoir age (in calendar years) of 80+/-20 years; a suitable smoothed curve might be generated by the commands:
Curve "CAL10.DTA"; Reservoir 80 20;
the reservoir is assumed to be small compared to the atmosphere; mixing within the reservoir is assumed to be good; a simple box diffusion model is used.
See [Calibration Data] [Mathematical Methods]
Sequence
syntax = Sequence [name] { command; command; ...;};
allows the information that one event precedes another to be incorporated into the resultant probability distributions; the sequence can contain phases and functions as well as simple dated events; TAQ and TPQ functions can also be used to allow for termini ante quem and termini post quem. IMPORTANT: dates are entered in chronological order (oldest first) although they can be displayed in reverse order (youngest at the top) by selecting the
[Options|System options|reverse order] option.

See [Archaeological Considerations] [Program Operation]
Shift
syntax = Shift name name1 name2;
used for shifting one probability by another; this function will only work within a Phase, a Sequence or a V_Sequence; as an example of its use consider D which lies as long after C as B is after A where we have dates for A, B and C:
Phase
{
 R_Date "A" 1200 60;
 R_Date "B" 1100 60;
 R_Date "C" 1000 60;
 Difference "R" "B" "A";
 Shift "D" "C" "R";
};
where the resultant distribution R = B-A and so D = C+R = C+B-A as required.
Span
syntax = Span [name];
used to calculate the span of a phase, sequence or other group which is defined as the probability distribution for the difference between the first and last events of a group; thus to find the span of a phase the necessary commands might be:
Phase {R_Date 1000 100; R_Date 900 60; Span "R";};
See [Archaeological Considerations] [Program Operation]
Sum
syntax = Sum [name] { command; command; ...;};
used for adding probability distributions to arrive at the best estimate for the chronological distribution of the events; differs from Combine in that ranges are expanded rather than reduced with additional information; the resultant distribution does not relate to a single event and so cannot be used as the input to other functions; the elements within the sum are treated as a phase and can be constrained in a similar way; note that, for example, the 95% range for a Sum distribution gives an estimate for the period in which 95% of the events took place, not the period in which one can be 95% sure all of the events took place.
See [Archaeological Considerations] [Program Operation]
TAQ
syntax = TAQ [name] { command; command; ...;};
similar to TPQ but for a terminus ante quem.
See [Archaeological Considerations] [Program Operation]
TPQ
syntax = TPQ [name] { command; command; ...;};
(terminus post quem) used within a sequence to force all items later in the sequence to follow the items in the TPQ group; all items earlier in the sequence are not directly affected; thus if a coin with a date of 1066 is found between two samples in a sequence the second sample B in the sequence must be later than 1066 but that is the only direct constraint:
Sequence {R_Date "A" 1050 60; TPQ {C_Date 1066;}; R_Date "B" 1030 60;};
B will be forced after 1066 but A will not have to be before 1066; note that A will be indirectly affected because of the constraint that A is before B.
See [Archaeological Considerations] [Program Operation] [Mathematical Methods]
V_Sequence
syntax = V_Sequence [name] { command; command; ...;};
similar to D_Sequence except that the gaps can be defined with errors; the calculations involved are actually rather different and might fail to give a result if the error is very low; the calculations may also become slow if the agreement between the results is poor. IMPORTANT: dates are entered in chronological order (oldest first) although they can be displayed in reverse order (youngest at the top) by selecting the
[Options|System options|reverse order] option.

See [Archaeological Considerations] [Program Operation] [Mathematical Methods]
XReference
syntax = XReference name;
used to refer to an event already defined somewhere else in the stratigraphic sequence.
See [Archaeological Considerations] [Program Operation]
Year
syntax = Year year;
used to define the year of measurement for luminescence dates or anything for which an age factor or proportional error term is required; if this is not defined the year is assumed to be 1950.
See [Archaeological Considerations] [Program Operation]


Mathematical Methods [Up][Contents][Index]

Mathematical Methods

It is the nature of archaeological material and of age measurements that all methods are necessarily approximate and the errors assigned are never statistically precise. This package is not intended to be mathematically rigorous but rather to provide distributions and numbers which are useful in archaeological investigation. Mathematicians and statisticians might realise that this has been written by a physicist and some of the methods used (such as agreement indices) are demonstrated to be useful indicators but no formal proof of their validity is given. In apologising for this in advance I would however say that I feel that this is probably the most valuable approach to this type of problem where undue rigour would probably lead to paralysis.

Calculations

MCMC Sampling

Agreement



Calculations [Up][Contents][Index]

Calculations


Interpolation

All distributions and calibration curves are stored at the resolution set in the system options, rs. There is also a calculation resolution defined which is rc=1 for rs=1...19, rc=10 for rs=20...199 and so on. All dates are rounded to this value (input and output). Interpolation between the stored points is linear.
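
With the default storage resolution of 5 years, for example, rs = 5 and rc = 1, so dates and ranges are still rounded to the nearest year.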

When integrations or differentiations are carried out they are at the resolution rc. The details of the interpolation methods (such as the methods of rounding used) have been carefully chosen to give the expected results and variations from the analytical values are rarely more than a single year with the standard options.

The files for the calibration curve usually have a different resolution to the internal storage resolution and so some form of interpolation is needed. This can either be linear or a cubic function depending on the setting in the system options. The cubic interpolation does not fit a spline function as this is very time consuming to calculate and can have some undesirable features such as large excursions between points. The cubic function used here gives a smooth curve with a continuous first differential but gives very little overall difference from the linear interpolation. The form of the function between two points is simply defined by the four surrounding points. If fj defines the function at tj the interpolation between tj and tj+1 is given by f(t) where:
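
One cubic with these properties (given here only as an illustration and not necessarily the exact form used by the program) is the Catmull-Rom interpolant; writing u = (t - t_j)/(t_(j+1) - t_j):

f(t) = f_j + u [f_(j+1) - f_(j-1)]/2 + u^2 [2f_(j-1) - 5f_j + 4f_(j+1) - f_(j+2)]/2 + u^3 [3f_j - 3f_(j+1) + f_(j+2) - f_(j-1)]/2

This passes through f_j and f_(j+1) and has a continuous first derivative at the stored points.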

The calibration curve is stored in two arrays one ri defining the radiocarbon age of the tree rings and another sigmai defining the errors associated with these measurements. Both ri and sigmai which are stored at the resolution rs are generated from the supplied calibration curves using the above interpolation method.


Calendar and BP dates

Output and input can be given in terms of calendar years yCAL (AD/BC) or years before 1950 yBP (cal BP). The relationship between these is simply:

yCAL = 1950 - yBP

Thus:

10BP = 1940AD, 11950BP = 10000BC

It should be noted that this does imply a year 0 in the AD/BC sequence which is strictly speaking incorrect. With radiocarbon dates the problem is clearly semantic; with historical evidence it should be borne in mind that age differences calculated across the BC/AD boundary are actually one year larger than they should be (the interval from 1BC to 1AD, for example, is treated as two years rather than one). Alternatively negative numbers (BC) should always be taken as the start of the year and positive numbers (AD) as the end: thus -1 is the start of the first year BC whereas +1 is the end of the year 1AD. The reason for this problem is that, in order to keep the internal representation of the numbers consistent, it is very difficult to deal with a number set which jumps from -1 to 1 with no year 0.


Radiocarbon calibration

The radiocarbon calibration itself is performed (using the verb R_Date) by a comparison of the measured radiocarbon age to the values stored in the curve (the methods used are similar to those used by Stuiver and Reimer 1993 and van der Plicht 1993; also see Dehling and van der Plicht 1993; the error terms in the calibration curve are taken into account as in the latest versions of CALIB). This allows a variance distribution to be calculated: if the radiocarbon date is rm with a measurement error sigmam the variance distribution vi and the probability distribution pi are then given by:
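
In the usual notation (a reconstruction consistent with the definitions above and with the note below, not a verbatim copy of the printed formulae):

vi = (rm - ri)^2 / (sigmam^2 + sigmai^2)

pi proportional to exp(-vi/2) / sqrt(sigmam^2 + sigmai^2)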

In this program the distribution is left normalised to a maximum of 1 rather than the actual probability of any individual year.

NOTE This is different to old versions of OxCal (pre 3.2) where pi was simply set to exp(-vi/2).

See also [Archaeological Considerations]


Reservoir corrections

Reservoir corrections can be made using the command Reservoir. This is suitable for a simple reservoir with a uniform diffusion rate with the atmosphere. Such a diffusion rate can be expressed in terms of a time constant tau. If the reservoir concentration is r(t) and the atmospheric concentration is given by R(t) we can write down the differential equation:

Solution of this differential equation requires a knowledge of the curve R(t) for all times before t. A linear extrapolation is assumed before the start of the curve using a gradient estimated from the first half of the curve R(t). The uncertainties in this are assumed to be ten times larger than those quoted for the first point in the curve; in practice these assumptions are unlikely to be significant unless the time constant is very long or you are considering points close to the start of the calibration curve.

Treatment of the uncertainties is more complicated. If the uncertainties associated with each point on the calibration curve are assumed to be independent the uncertainties in the reservoir curve should be smaller. In practice the errors are almost certainly to some extent systematic. They have therefore been treated in exactly the same way as the concentrations themselves: if sigma(t) and Sigma(t) are the respective uncertainties we assume:

If there are also uncertainties in tau the solution of the equations would involve a double integration which would in practice be very slow. Another algorithm has therefore been adopted which is to increase the sigma in proportion to the difference between R(t) and r(t). Thus if the uncertainty in tau is deltatau:

For the oceans a properly modeled ocean curve should be used (see Stuiver et al 1998 - marine data). Local corrections can then be made using a Delta_R correction term:

See also [Calibration Data]


Mixed calibration curves

Calibration curves can be mixed using the Mix_Curves statement. This defines the two curves to be mixed and the proportion of the second one to be included. The mixing is calculated in terms of radiocarbon concentration rather than radiocarbon age. If the radiocarbon concentrations of the two curves are:

R1 ± E1
R2 ± E2

and the proportion of the second curve is P ± D then the resultant distribution is given by:

Rr = (1 - P) R1 + P R2
Er = Sqrt[((1 - P) E1)^2 + (P E2)^2 + (D (R2 - R1))^2]

See also [Calibration Data]


Calendar dates and Asymmetric dates

Calendar dates are either entered as tc±dtc with the command CAL or tc + d(+)tc - d(-)tc. In some cases such dates may be entered as accumulated doses dm from luminescence measurements in which case these are converted to calendar dates on input using the measurement year tm and dose rate dr and the simple relationships:

tc = tm - (dm/dr)

dtc = -(ddm/dr)

Rounding to the nearest rc will take place at this stage so you may notice a slight change in the entered values especially if rc is 10 or 100.

Calculation of symmetric probability distributions is simple:

The function used for asymmetric dates is rather more complex:


Proportional errors and factors

Proportional errors (Error) are particularly relevant to luminescence dating methods and proportional factors (Factor) can also be used for this type of application. Both of these are related to the year of measurement tm. A proportional factor f can be applied to a distribution with the mapping:

p'(t) = p(t/f)

And a proportional error df by using the mapping:

The distribution is then renormalised.

In the program these error factors are normally calculated before each distribution is reported, except in the case of functions such as Combine which give a resultant distribution: there the factor is only applied to the final result, to prevent the systematic errors being reduced in the combination process.


Range calculation

Ranges are calculated to the resolution rc (that is normally to the nearest year) by linear interpolation of the probability or variance arrays. The boundaries of the age ranges for the intercept method are simply given by the variance distribution: the levels required being 1, 4 and 9 for the 1, 2 and 3 standard deviation ranges (this is the method first used by Stuiver and Reimer 1986; see also Bowman 1990). This method is only used for radiocarbon dates and calendar dates, the probability method always being used for the results of any more complex analysis.

The probability method (selected for all types of distribution in the system options) calculates the ranges in a different way (similar to the method used by van der Plicht 1993). The elements of the probability distribution array pi are sorted by size and the integral normalised to 1.0. Starting from the top the array is then integrated until a certain proportion of the total is achieved (68.2%, 95.4% or 99.7%) and the level pr at this point in the distribution is found. The ranges can then be defined as those parts of the distribution where pi > pr.

If whole ranges are selected from the system options with the probability method a slightly different method is employed in order to generate floruits (see Aitchison et al 1991): the probability distribution is normalised to an integral of 1.0 and then the distribution is integrated from each end until a certain proportion of the curve has been excluded (15.9%, 2.3% or 0.15% from each end); the range defined is then the part of the distribution between these two points.

Integrated distributions (generated by the functions Before and After) define ranges directly from the height of the distribution using the values 0.682, 0.954 and 0.997.


Combinations and Wiggle Matching

Combinations of radiocarbon dates prior to calibration (R_Combine) and direct combinations of calendar dates with gaussian errors (C_Combine) are performed in the normal way with a chi squared test (see for example Shennan 1988 p65).

Combinations of probability distributions (Combine) are simply done by using the Bayesian rules for combinations of probabilities (see Bayes 1763 and Doran and Hodgson 1975): if we have two probability distributions p1(t) and p2(t) these are combined as:

r(t) = p1(t) p2(t)

or more generally:
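
(Reconstructed rather than quoted: the general form is simply the product over all of the distributions being combined, r(t) = p1(t) p2(t) ... pn(t).)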

For the purposes of display the maximum of the resultant distributions is always normalised to 1.

If within a group defined for the function Combine the distributions are given a Gap gi then the combination is performed as:

We can then define a new set of original distributions p'i using

p'i = r(t+gi)

A very similar method to this is used for wiggle matching using the command D_Sequence, the only difference being the way in which the gap is defined (between each successive distribution). A probability distribution r(t) is always calculated for the start of the sequence. This is given by:

and the resultant distributions then calculated using:

In the case of Bayesian wiggle matches and combinations, the program also calculates the chi-squared value for the best fit (ie the highest point on the probability distribution). This is reported in the text log file. For wiggle matching tree ring sequences, where the overall precision can be very high, you should use a resolution of one year.

See also [Archaeological Considerations]


First and Last Dated Events in a Group

The probability of being after a single event is given by:

and so if the events in a group are independent the probability of being after all of them is given by:
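
In outline (a reconstruction from the description rather than the printed formulae): for a single event with distribution p(t), the probability of any year t being after it is the cumulative integral of p up to t; for a group of n independent events these cumulative integrals are simply multiplied together:

r(t) = r1(t) r2(t) ... rn(t),   where   ri(t) = integral of pi(t') over t' from -infinity to t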

This is the distribution (normalised to a maximum of 1) returned by After. From this a distribution r'(t) can be calculated which gives a probability distribution for the last of the group of events:

r'(t) = dr(t)/dt

This is the distribution returned by the request Last within a phase if MCMC sampling is not needed.

The probabilities of being before a group of events and a distribution for the first event of a phase can be similarly defined.

WARNING: These methods assume that the events are entirely independent; in most cases a much better estimate will be arrived at using MCMC sampling from a phase which is enclosed within Boundary events.

See also [Archaeological Considerations]


Offset dates and Age Differences

If a distribution is offset by an amount dt the probability distribution is simply given by:

r(t) = p(t-dt)

If the offset has an error associated with it, dt +- sigma, the distribution is given by:
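
A reconstruction of the expected form (a convolution of the distribution with a gaussian in the offset, rather than the printed formula) is:

r(t) = integral over x of p(t - x) exp(-(x - dt)^2 / (2 sigma^2)) dx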

A similar method is used to calculate a probability distribution for the age difference between two independent distributions (only employed when MCMC sampling is not necessary) using Difference.

And to shift one distribution by another using Shift:

See also [Archaeological Considerations]



MCMC Sampling [Up][Contents][Index]

MCMC Sampling

This method is used for estimating constrained distributions. In principle it is fairly simple and gives distributions very close to analytically calculated distributions with very much less computation time and complexity in the calculations (see Buck et al 1992, Gilks et al 1996 and Gelfand and Smith 1990).

The Sampling Process

The sampling process used is a form of Markov Chain Monte Carlo (see Gilks et al 1996 for an overview of the techniques). This technique allows samples to be taken which properly reflect the probability distributions and constraints, thereby building up a histogram which can then be used as an estimate of the posterior probability densities. If the initial probability distribution is unconstrained an approximation to the initial distribution is produced.

This program uses a mixture of Metropolis-Hastings algorithm and the more specific Gibbs sampler.

The Metropolis-Hastings algorithm uses a set of proposal moves which can both result in changes to single elements of the model or changes to the duration and timing of whole groups. This provides much faster convergence for complex models than the use of the Gibbs sampler on its own.

There are several different methods of implementing the Gibbs sampler; the one employed by this program is that a value t is selected and then p(t) compared to another randomly chosen value r which lies between 0 and max(p(t)); if p(t) > r the value is accepted as a sample; if p(t) < r the process is repeated until it is successful. These sampled values are then collected and a sample distribution generated.

This program is initially set up to do 30,000 iterations which gives fairly smooth distributions for most purposes but reasonable results are achieved much more quickly than this and the process can be stopped after 3,000 iterations. The first 100 iterations are discarded to allow the sampling process to converge.

Every 3000 iterations (called a `pass') the sampled distributions are saved (and can be plotted by redrawing the window) and checked for convergence. The results of the convergence tests are saved in a file Converg.14L. Every 6000 iterations any boundary conditions are relaxed to allow the system to find a new starting point (this is followed by a new burn-in period of 100 iterations from which the results are discarded). A full run therefore consists of five sub-runs each with a new starting point. The convergence tests will indicate if convergence is slow or if different starting points have a significant effect on the result. The convergence test consists of comparing the distribution from the preceding pass, p(t), with the accumulated distribution, P(t) - thus no real indication of the convergence is given until after two passes. The function used is an overlap integral of the form:

If the convergence is poor (less than 95%) the pass interval (initially 3000) will be increased by a factor of two. This is repeated until the convergence is satisfactory. The sampling can however be abandoned if necessary but in this case the results should not be used as the model is clearly not stable.

See also [Program Operation] and the section on [Convergence].


Constraints

The reason for using the MCMC method is that it enables constraints to be imposed: a constraint distribution c(t) is defined and the sampling is then performed from the combined distribution p(t)c(t) (an application of Bayes' theorem). The c(t) term is usually defined by the current samples taken from the other distributions in the model.

The whole operation consists of finding samples from each distribution which are consistent with the constraints (sometimes this is not possible in which case the message `cannot resolve order' is displayed). Each distribution is then sampled in turn, always calculating the constraints from the latest samples of the other values. In this way, once the constraints have been satisfied they will remain satisfied for all subsequent sampling iterations.

Since the initial sample may be unrepresentative it is usual to ignore the first few iterations; in this program the first 100 are discarded.

Sequences

For a group of items in a sequence (Sequence) the constraints are fairly simple. If the sampled times are written as t_i for the ith member of the group the constraints are:

t_i < t_(i+1), for all i < n

If a phase (Phase) is contained within a sequence this becomes a little more complicated. If we treat each member of the sequence as a phase of one or more elements t_ij, the constraints for any two successive elements of the sequence will be:

t_ij < t_(i+1)k, for all j and k

These constraints provide a constraint function c(t) which is just an upper and lower limit, and which can be easily incorporated into the sampling method.
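For example (the dates and labels here are purely illustrative), a phase nested within a sequence can be entered as:

Sequence
{
 R_Date "A" 2760 40;
 Phase
 {
  R_Date "B1" 2700 30;
  R_Date "B2" 2800 40;
 };
 R_Date "C" 2660 40;
};

With this model A must be earlier than both B1 and B2, and both of these must be earlier than C, but no order is assumed between B1 and B2 themselves.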

Termini

The presence of termini within a sequence is dealt with by dispensing with one set of constraints. If the ith member of the sequence is a terminus ante quem (TAQ) and, for simplicity, assuming the elements on either side are simple elements the constraints imposed are:

t_(i-1) < t_i, t_(i-1) < t_(i+1)
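As a sketch of such a model (dates illustrative; the TAQ group is given a label here in the same way as the other grouping commands):

Sequence
{
 R_Date "A" 2760 40;
 TAQ "terminus"
 {
  R_Date "B1" 2700 30;
  R_Date "B2" 2800 40;
 };
 R_Date "C" 2660 40;
};

Here A is constrained to be earlier than B1, B2 and C, but B1 and B2 are not themselves constrained to be earlier than C.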

Sequences with approximate gaps

These are treated rather differently (V_Sequence). Here a function is defined for c(t) which is then used in the sampling process. If we have three members of the sequence as:

the constraint distribution for t_i will be given by:

It should be noted that if the error term is too low in this method the samples will always be constrained to the initial selection. Such an initial selection will also become increasingly difficult in these circumstances, so this method should only be used when the error terms are greater than the resolution defined (in fact the program forces this) and may well fail if the sequence has a large number of elements. Such failure will result in very slow progress and the message `improbable value'.
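In an input file such a model might be sketched as follows (all values illustrative, and assuming the Gap command is used, with an error term, to give each estimated gap):

V_Sequence
{
 R_Date "sample 1" 2870 35;
 Gap 20 5;
 R_Date "sample 2" 2850 35;
 Gap 15 5;
 R_Date "sample 3" 2830 35;
};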

See also [Archaeological Considerations] [Program Operation]


Boundaries

Boundaries are important to the creation of large models as they allow one to correct for the greater statistical weight of groups of events with larger spans. The theory behind the notion of uniform phases is outlined elsewhere (Buck et al 1992) but further insights have been provided by Geoff Nicholls (private communication).

A justification for the technique can be outlined thus: the geological or archaeological events under study in any dating research are assumed to be Poisson distributed through a period of time. The dated events which we put into our model are also assumed to be Poisson distributed within the intervals between the archaeological or geological events. In this program the archaeological or geological events are denoted by the term Boundary. Ordinary events are put into the model using any of the date specification commands or the generic term Event which has no dating information associated with it.

The boundaries will then divide the events up into a series of phases (in the most general sense - the events could be ordered within such a phase as a sequence). We would like our prior density to be independent of the number of dated events within each phase and, ideally, the overall start and finish to be independent of the number of postulated internal Boundaries. All that follows is a result of these criteria.

The statistical weight of a single phase with a starting boundary b_i, a final boundary b_(i+1) and n_i events within it, is proportional to:

(b_(i+1) - b_i)^n_i
For this reason a prior probability is applied which is proportional to:

1 / (b_(i+1) - b_i)^n_i
Looking at this in another way, for any boundary b_i there is a preceding phase with n_(i-1) items (starting with the boundary b_(i-1)) and one following with n_i items (ended by another boundary b_(i+1)). A prior probability function f(t) can then be calculated for the position of b_i:

f(t) proportional to 1 / [ (t - b_(i-1))^n_(i-1) (b_(i+1) - t)^n_i ]
This is the method outlined by Buck et al 1992.

Additional factors for uniform overall span

However this is not the end of the story. In version 3.2 or later of this program, if the 'Uniform span prior' option is set, two further functions are added to the prior in these cases.

The first of these is needed because we do not wish the prior for the length of the overall sequence of events to depend on the number of boundaries in the model. For this reason the prior is made proportional to:

1 / (b_m - b_1)^(m-2)
where b_1 is the first boundary and b_m is the last (ie there are m boundaries in total).

The second addition takes account of the fact that if an upper and a lower limit are independently applied to the boundaries, this will in general 'favour' shorter spans. This effect can be reversed by applying a prior factor of:

1 / (b_ulim - b_llim - (b_m - b_1))
where b_llim is the lower limit for the boundaries and b_ulim is the upper limit.

Together all of these factors give a uniform prior density for the span of the entire sequence of boundaries.
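In an input file (dates illustrative) a single uniform phase would therefore normally be written with a boundary at each end:

Sequence
{
 Boundary "start";
 Phase
 {
  R_Date "A" 2760 40;
  R_Date "B" 2700 30;
  R_Date "C" 2800 40;
 };
 Boundary "end";
};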

Nesting of structures

The program takes account of the depth of nesting of boundaries. All boundaries to be treated as comparable should be at the same nesting depth. This enables events of different classes to be treated in a way which makes sense. A boundary at an outer level is treated as an upper or lower limit as applicable. A sequence of boundaries at an inner level is treated like two events (the start and end of this sequence).

Having boundaries at too many different levels is liable to make the convergence very slow.

TAQ and TPQ

With these termini it can be ambiguous whether the events are within a particular phase or not. The decision as to whether they should be included is made dynamically. This method is not entirely rigorous but it does achieve the desired effect of ensuring that the number of events specified within a TAQ or TPQ will not in itself greatly affect the prior probabilities for the boundaries.

See also [Archaeological Considerations] See also [Information from analysis] [Program Operation]


Additional Information

One benefit of the MCMC sampling method is that it is very easy to find out additional information after each iteration of the system.

First and last events

If requests have been made for First (or Last) a distribution is built up which contains the first (or last) sampled value within a group after each iteration.

Note that this is not the same as the estimate of a phase boundary assuming a model of uniform deposition of dated material.
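A sketch of such a request (dates illustrative, and assuming First and Last are simply placed, with an optional label, within the group of interest):

Phase
{
 First "earliest event";
 R_Date "A" 2760 40;
 R_Date "B" 2700 30;
 R_Date "C" 2800 40;
 Last "latest event";
};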

Spans, intervals, differences and shifts

The requests Span, Interval and Difference all compare two sampled values and then generate a distribution for the difference between them. In the case of Span the first and last samples from the group are found and the difference between these calculated. Interval does the same thing for successive members of the group and Difference for specified items. Shift builds a distribution for the addition of the samples for two items.

Again note that the span calculated in this way does not make any assumptions about the deposition rate and will tend to give results which are too high unless the phase is properly constrained.
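For example (dates illustrative, and assuming Span is placed within the group whose duration is required):

Sequence
{
 Boundary "start";
 Phase
 {
  R_Date "A" 2760 40;
  R_Date "B" 2700 30;
  Span "phase duration";
 };
 Boundary "end";
};

The boundaries are included here because, as noted above, an unconstrained phase will tend to give spans which are too high.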

Correlations

The request for a correlation plot (Correlate) builds a two dimensional histogram of the samples from the two requested items.

Ordering

The order command (Order) simply keeps track of the order of all of the elements after each iteration of the MCMC Sampling process. The probability of these orders is thereby determined (only the 50 most likely orders are stored).

See also [Archaeological Considerations] [Program Operation]


Probabilities

The MCMC sampling method can be used to generate probabilities for certain propositions. For example, if we question whether a distribution should lie at a particular position in a sequence (say between the elements i and i+1), we can perform the sampling process for the questioned item, giving values t_q, and for each iteration ask whether the following is true:

t_i < t_q < t_(i+1)

When all of the iterations have been completed we have a total number of iterations n and the number of times the above constraints were obeyed n_TRUE and so the probability can be calculated:

p = n_TRUE/n
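For example, if the ordering held in 24,600 out of 30,000 iterations the probability reported would be p = 24600/30000 = 0.82 (these figures are purely illustrative).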

See also [Archaeological Considerations] [Program Operation]



Agreement [Up][Contents][Index]

Agreement


Agreement indices

It is necessary to define an index which gives a good measure of how well any posterior distribution agrees with the prior distribution. It seems intuitive that any such index should be unity (100%) if there is no alteration in the distribution and that the index should fall off in proportion to the probability of the prior distribution selected in the posterior distribution. Such behaviour is in fact provided by the function defined here for individual distributions. Let the prior distribution be p(t) and the posterior distribution be p'(t); an agreement index can then be defined:

A = ∫ p'(t) p(t) dt / ∫ p(t) p(t) dt
which is a simple overlap integral between the two distributions. We will come back to the subject of the threshold for accepting the agreement as good - this turns out to be about 60% for most purposes.
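Thus, for example, if the model leaves a distribution essentially unchanged (p'(t) = p(t)) the two integrals are equal and the agreement is 100%; if the posterior is pushed into a region where the prior has little probability, the numerator, and hence the index, falls towards zero.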


Likelihood indices

To see whether a probability distribution is likely to combine well with the group of other distributions we can define an overlap integral similar to that for the agreement. Assuming that the prior distribution of interest is p(t) and the combination of all of the other distributions is r(t), the likelihood index is defined (for this program) as:


Overall agreement

To calculate an overall pseudo-Bayes-factor, B, for the model, one might simply multiply together all of the agreement indices for the individual distributions (B = A1 A2 A3 ... An). However, to make this figure easier to use a modification of this definition has been used here. The rationale is this: since the agreement indices Ai will average about 1, their logarithms will tend to average about zero (this is not strictly true but a reasonable first order assumption); assuming these deviations are all random, ln(B) will tend to deviate from zero as a random walk; the scale of any such deviation will therefore tend to be proportional to the square root of n.

The most useful definition for the overall agreement is therefore found to be:

Aoverall = B^(1/sqrt(n)) = (A1 A2 A3 ... An)^(1/sqrt(n))
Variations from 100% will have the same significance as they do for the individual agreements.

With the exception of the power term, this is then a pseudo Bayes-factor (see for example chapter 9 of Gilks et al 1996) and the agreement indices Ai are factors of this term. The Bayes factor here is being used to compare the constrained model to the entirely unconstrained model. The power term merely provides the convenience of a suitable acceptance cutoff which is independent of the total number of terms (see below).

This overall agreement function has some interesting properties. The first of these can be found by considering the particular case of combinations of probability distributions (here performed with Combine and D_Sequence): in such cases the errors are not independent as all of the comparisons are made with the same posterior distribution which has an error which decreases with the square root of n. The special case of combinations of gaussian distributions (generated with C_Date) gives identical results to the direct combination of gaussians (using C_Combine) and so it seems reasonable that the threshold for acceptance of the combination should be the same as the chi squared test normally performed. It turns out (and this can be verified by trying groups of values) that the threshold for Aoverall which corresponds to the chi squared test at 5% is equal to:

An = 1/sqrt(2n)
At this threshold, we can then calculate the logarithmic average of the individual agreement indices that make this up. This is given by:

A'n = An^(1/sqrt(n)) = (1/sqrt(2n))^(1/sqrt(n))
These results are tabulated here for some values of n:

________________________

  n     An(%)    A'n(%)
________________________
  1     70.7     70.7
  2     50.0     61.3
  3     40.8     59.6
  4     35.4     59.5
  5     31.6     59.8
  6     28.9     60.2
  7     26.7     60.7
  8     25.0     61.3
  9     23.6     61.8
 10     22.4     62.3
 15     18.3     64.5
 20     15.8     66.2
 25     14.1     67.6
 30     12.9     68.8
 40     11.2     70.7
 50     10.0     72.2
 60      9.1     73.4
 80      7.9     75.3
100      7.1     76.7
________________________
From this table it can be seen that for most purposes, where the number of constraints is small, a reasonable value for the agreement index of a single constrained distribution, given by A'n, is approximately:

A'c = 60%

This is then taken as the threshold of acceptance for the individual agreement indices.

Aoverall was defined to be an index based on this whose significance would be independent of n. For this reason, A'c is also always used by OxCal as the threshold for Aoverall when the errors are non-correlated. When the errors are correlated (as for combinations and wiggle matches), An is used instead.

The mathematical formulation here is not entirely rigorous, and given the nature of the problem this is probably inevitable. However, these agreement indices do give a good working indication of when a statistical model is inconsistent with the age measurements used.



Calibration Data [Contents][Index]

Calibration Data


File Formats

The program is supplied with data files for IntCal04 which are identical to those for the CALIB program. The three data files included in this distribution are:

File            Contents                                   Reference
IntCal04.14c    Atmospheric data for the N Hemisphere      Reimer et al 2004
Marine04.14c    Marine data (requires local correction)    Hughen et al 2004
ShCal04.14c     Atmospheric data for the S Hemisphere      McCormac et al 2004

The default curve is the atmospheric N hemisphere curve intcal04.14c.

Previous versions of the calibration curve are also included: cal_86.dta (called cal10.dta in older versions), cal_93.dta (was cal20.dta) and intcal98.14c.

A post-bomb compilation is also included in kueppers04.14c (see Kueppers et al 2004 for details). Because of the very fast rise in radiocarbon over this period a resolution of 0.1 years may work best - or consider whether a reservoir time constant should be applied, even if only of 1-2 years, as this can make a significant difference.

This program will work with any data files intended for the CALIB (*.14c) or the Groningen program (*.dta).

The current IntCal datasets are based on a BP timescale and are comma delimited. This is recognised by the program by the presence of the CAL BP label in the header. Comment lines start with a #; a comment to be included as a short reference starts with ##. The format is:

CAL BP, 14C age,Error,Delta 14C,Sigma
Previous versions of CALIB used a data format with five columns of numbers:
Calendar_date     Delta_14C     error     14C_Age       error
whereas the Groningen program uses a basic file format of:
Calendar_date     14C_Age       error
The program automatically detects the format. In addition to the calibration curve data itself the files can also be modified to provide the reference data at the top of the plots. Lines starting with the character " or the string ## are combined together to form the reference string.

Lines not starting with the reference character or which do not contain data in the right format are ignored.


Changing Calibration Curves

To use different calibration curves within the same plot use the command Curve. This is done by dragging the relevant icon from the right hand window when building a model. Note that multiple plots on the calibration curve always use the default curve.

For environments with a reservoir effect a special curve can be generated using Reservoir. Samples from the oceans should use a specific marine curve and Delta_R corrections.

In the Southern Hemisphere the ShCal04.14c curve should be used.

See [Mathematical Methods]


Marine Curves and Corrections

A calibration curve modeled for the oceans (see Hughen et al 2004) is supplied and is called marine04.14c. This can be offset for local variations using Delta_R corrections (see Stuiver and Braziunas 1993) which are available online from Queen's University Belfast.

See [Mathematical Methods]


Mixed Calibration Curves

These can be included using a combination of the Curve and Mix_Curves statements. A typical application where there was a 20±5% marine component would include the statements:

Plot
{
 Curve "intcal04" "C:\Program Files\OxCal3\intcal04.14c";
 Curve "local_marine" "C:\Program Files\OxCal3\marine04.14c"; Delta_R 100 30;
 Mix_Curves "mixed" "intcal04" "local_marine" 20 5; 
 R_Date 660 35;
};
See [Mathematical Methods]

Default Calibration Curve

If you wish to change the default calibration curve file you should:

File Formats and Directory Structure [Contents][Index]

File Formats and Directory Structure

The directories used by the program are based around both the program_directory (where the program is installed) and the user_directory (where you store your input or model definition files). In a typical installation the program_directory would be C:\Programs\OxCal and the user_directory would be C:\My Documents\OxCal. These are used as follows:
program_directory
contains the programs and calibration data
program_directory\Manual
contains the manual
program_directory\Manual\eg
example files
user_directory
contains your input files
user_directory\Data
has subdirectories grouping results data
user_directory\Data\Untitled
results of 'quick' calculations
user_directory\Data\Eg_plot1
results from Eg_plot1
user_directory\Data\...
...etc.
Files of various different types are used by the program and can be distinguished by their file extensions.

Calibration data files (*.dta or *.14c) are dealt with in the section on calibration data. Input (model definition) files (*.14i) are covered in the CQL command summary and log files (*.14l) are simple text files (with the exception of Relate.14l - see below). The four remaining file types are probability data files (*.14d or *.14s), plot organiser files (*.14p), viewer files (*.14v) and MCMC relationship files (Relate.14l).


File Names

All data file names are made up of three parts. The first part (two letters) is made up on the basis of position within the model. The second part (six letters) is made up in the following way: The file name extension depends on the type of file.
.14d
data files before analysis (simple calibration etc)
.14s
data files after analysis (including stratigraphic information)
.14p
plot organiser files (include references to data files)
.14v
viewer files (actual plots)
.14i
input or model definition files
.14l
log files (including relationship file)
.dta
Groningen data files
.14c
Seattle data files

Configuration File

All of the configuration information (including the strings used by the program) is stored in a file Oxcal3.ini in your Windows directory. If the settings of the program become corrupted for some reason, delete this file and this will set everything back to the installed configuration.

Data Files

The basic data format is very simple and consists of lines of data with the format:
Calendar_age     Probability
or if calibration curve data is included (as it usually is for radiocarbon dates):
Calendar_age     Probability    14C_Age    error
The former is all that is required if you wish to produce a prior probability distribution in some other way. The resolution used internally will be the smallest gap between any two successive points and the distribution should be given in `oldest first' order.

Additional information is also included in files by lines starting with special characters.

" reference
As for calibration data files this gives the reference for any data
$ title
Gives the title for the data plot and the label used in multiple plots
# date error
Gives the date and error of a radiocarbon date (used for the gaussian curve)
! comment
Gives the title and other comment material for the plot
_ sigma from width
Gives range data for a particular sigma (or probability) confidence limit with a starting calendar age and width - if the width is -1 the range is treated as an `older than' range and if the width is zero it is treated as a `younger than' range
@
This data file gives relative ages rather than absolute calendar ages
* nx ny minx miny maxx maxy
This file contains a correlation plot with nx by ny points covering the given range - the data will then be a list of probabilities (one per line) starting at minx, miny given as rows (in x)
. minx maxx
Can be used to enlarge the range of a plot to encompass the range given
^ value
if value is greater than 1 gives the number of events; otherwise gives the maximum of the normalised curve
% n [value]
If no value is given will set an internal register to n; if a value is given it will be printed (as a percentage) in a multiple-plot only if the internal register is equal to n
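As an illustration only (all values invented), a minimal data file for a radiocarbon date might look like this:

$ OxA-1234
# 2760 40
" Atmospheric data from Reimer et al (2004)
-1100    0.0001    2760    40
-1095    0.0004    2760    40
-1090    0.0009    2760    40
-1085    0.0006    2760    40

The $ line provides the plot label, the # line gives the radiocarbon date and error used for the gaussian curve, the quoted line gives the reference, and the remaining lines give the calendar age, probability, radiocarbon age and error as described above.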

To edit the data files produced right mouse click on the relevant icon in the plot organiser (see section on graphical display).


Plot Organiser Files

These are really only differentiated from data files in that they are allowed to contain some extra elements. In normal use the plot file will provide the framework for a multiple plot with references to the data files which actually contain the probability distributions. It is fairly easy to alter the plot files using the plot organiser window in order to change the order of plots, add extra page breaks, and alter labels.

Any of the special lines above might be found in a plot file but in addition the following are used:

/
Forces a page break at this point
| type
The next distribution is of a given type
< filename
Read in a data file (delete when finished with plot)
{ filename
Read in a data file (do not delete it with the plot)
> label
Plot a label at this point
>!_
Draws a solid horizontal line across the page
>!.
Draws a dotted horizontal line across the page
( comment
append this comment to the next label
) comment
append this to the comment below the last label
[ name
start a structure bracket with the appropriate name
]
finish a structure bracket
~ value
define the value for the overall agreement
& value
define the value for the agreement of this group
Looking at a few plot files should allow you to become familiar with the structure and alter them in any way you might want.
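As a schematic fragment only (the file names and labels here are hypothetical), a plot file might contain lines such as:

$ Example site plots
> First group
< AB123456.14d
/
> Second group
< CD123456.14d

Here the $ line gives the title of the multiple plot, each > line plots a label, each < line pulls in the named data file, and the / forces a page break, as described in the list above.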

Viewer Files

These are binary files containing all of the plot information for a plot or many pages of plots. They cannot be manipulated except using the OxCal viewer program. From this, the graphics of the plots can be pasted into other applications, page by page.

Relationship Files

These are normally produced by the program and should not be tampered with. If, however, you are worried about exactly what the program is doing you can look at these files to check that the relationships have been correctly defined. To read the file simply double click on the icon in the plot organiser.

The format of the relationship file is fairly simple. Each distribution is introduced with a header line:

$ refno gap error name
The reference number is used in all of the relationships. The gaps are used for specific purposes (eg sequencing) - they usually represent a period after the event in which nothing else can occur - refer to the command summary for details. The filename is used for the prior distribution (with an extension .14d) and for the sample distribution (with an extension .14s).

Following such a header there are then a number of lines (can be zero) giving the relationship of this event to the others. The relationships allowed are:

>no
greater than
<no
less than
>>no
greater than a boundary
<<no
less than a boundary
=no
equal to
=no1 - no2
equal to no1 - no2
=no1 + no2
equal to no1 + no2
=no1 * no2
gives a correlation plot between two distributions
|no
spans a distribution (for spans of phases)
=>no
equal to or greater than (for finding the ends of phases)
=<no
equal to or less than (for finding the starts of phases)
|>no
spans a distribution (only lower end affected)
|<no
spans a distribution (only upper end affected)
?>no
asks is this greater than?
?<no
asks is this less than?
?|no
asks does this span?
~>no
approximately greater than (used in V_SEQ)
~<no
approximately less than (used in V_SEQ)
~no
approximately equal to
~~no
approximately equal to
?~no
asks is this approximately equal to
??no
asks is this approximately equal to
?=no
asks is this equal to
!no
request for information
:no
order event
Without giving the entire code it is difficult to explain exactly what each of these does. The reason for much of the complexity is that the program is intended to be able to handle even the most obscure of nestings (such as wiggle matched sequences within variable sequences).

If you are in any doubt as to whether the program is working correctly for some complicated configuration, set up a simple example with calendar dates rather than radiocarbon dates and use correlation and difference plots to follow what the program is doing.



Error Messages [Up][Contents][Index]

Error Messages

Error messages come with four levels of severity.

The lowest level is simply information you may wish to know (such as which calibration curve you are using); these messages are prefixed with the label INFORM. The next level up are warning messages which may give rise to misleading or incorrect results; these are given the label WARN. In both of these cases you are presented with a message box (unless the system option `Quiet' has been chosen) and you can continue by clicking on the [Retry] button. Using the [Abort] button will end the operation as soon as possible and the [Ignore] button will send the program into `Quiet' mode in which any errors are printed to the log file but do not generate message boxes.

Error messages at the next level up, FATAL, will result in the program finishing and you should close any windows left open and restart the program. System errors are errors which have not been trapped by the program and may be more generally associated with your system - you may need to restart your computer.

The following is an exhaustive list of the error messages produced by the program in alphabetical order.



Bibliography and References [Contents][Index]

Bibliography and References



Log File Examples [Contents][Index]

Log File Examples


Log.14l

INFORM  : References - M. Stuiver, A. Long and R.S. Kra eds. 
 1993 Radiocarbon 35(1); 
OxCal v3 cub r:4 sd:12 prob[chron]

( Sequence 
R_Date : 2760±40BP
  68.2% confidence
    982BC (12.4%) 960BC
    935BC (31.9%) 890BC
    883BC (23.9%) 844BC
  95.4% confidence
    999BC (95.4%) 830BC
( Phase 
R_Date : 2700±30BP
  68.2% confidence
    897BC (31.4%) 870BC
    854BC (36.8%) 823BC
  95.4% confidence
    906BC (95.4%) 811BC
R_Date : 2800±40BP
  68.2% confidence
    1000BC (68.2%) 912BC
  95.4% confidence
    1048BC (95.4%) 843BC
) Phase 
R_Date : 2660±40BP
  68.2% confidence
    891BC ( 6.4%) 882BC
    845BC (61.8%) 802BC
  95.4% confidence
    900BC (95.4%) 798BC
) Sequence 
( MCMC
Sampled : 2760±40BP
  68.2% confidence
    998BC (68.2%) 922BC
  95.4% confidence
    1032BC (95.4%) 884BC
 Agreement  84.9%
Sampled : 2700±30BP
  68.2% confidence
    900BC (43.3%) 865BC
    855BC (24.9%) 832BC
  95.4% confidence
    909BC (95.4%) 819BC
 Agreement 101.1%
Sampled : 2800±40BP
  68.2% confidence
    968BC (65.7%) 898BC
    862BC ( 2.5%) 857BC
  95.4% confidence
    984BC (95.4%) 843BC
 Agreement  93.5%
Sampled : 2660±40BP
  68.2% confidence
    834BC (68.2%) 803BC
  95.4% confidence
    880BC (95.4%) 794BC
 Agreement 116.9%
Overall agreement  96.9%
) MCMC
29496 iterations used

Tabbed.14l

2760±40BP	-982.4	-844	-999.2	-830.4	
2700±30BP	-897.2	-823.2	-906	-810.8	
2800±40BP	-999.6	-911.6	-1048	-843.2	
2660±40BP	-890.8	-801.6	-900	-798	
@2760±40BP	-998.4	-921.6	-1031.6	-883.6	
@2700±30BP	-899.6	-831.6	-908.8	-818.8	
@2800±40BP	-968.4	-857.2	-984	-843.2	
@2660±40BP	-834.4	-803.2	-880	-794.4

Relate.14l

$ 3 0 0 $24O14
< 5
< 6
$ 5 0 0 $2300U
> 3
< 7
$ 6 0 0 $25S14
> 3
< 7
$ 7 0 0 $21W14
> 5
> 6

Converg.14l

PASS	1
	$24O14	 99.7%
	$2300U	 99.8%
	$25S14	 99.5%
	$21W14	 99.5%
PASS	2
	$24O14	 99.4%
	$2300U	 99.7%
	$25S14	 99.3%
	$21W14	 99.6%
PASS	3
	$24O14	 99.3%
	$2300U	 99.5%
	$25S14	 99.5%
	$21W14	 99.2%
PASS	4
	$24O14	 99.4%
	$2300U	 99.6%
	$25S14	 99.5%
	$21W14	 99.7%
PASS	5
	$24O14	 99.6%
	$2300U	 99.6%
	$25S14	 99.4%
	$21W14	 99.7%
PASS	6
	$24O14	 99.6%
	$2300U	 99.4%
	$25S14	 99.4%
	$21W14	 99.5%
PASS	7
	$24O14	 99.5%
	$2300U	 99.4%
	$25S14	 99.5%
	$21W14	 99.5%
PASS	8
	$24O14	 99.5%
	$2300U	 99.6%
	$25S14	 99.3%
	$21W14	 99.7%
PASS	9
	$24O14	 99.4%
	$2300U	 99.7%
	$25S14	 99.4%
	$21W14	 99.6%


OxCal Bugs [Up][Contents][Index]

Known Bugs



OxCal Development [Up][Contents][Index]

OxCal Development

Version 3.10 (09/02/05)


Version 3.9 (10/06/03)


Version 3.8 (30/07/02)


Version 3.7 (05/12/01) - not for general release


Version 3.6 (27/07/00) - released to those with problems


Version 3.5 (19/07/00)


Version 3.4 (28/04/00)


Version 3.4 (beta1) (30/03/00)


Version 3.3 - release (21/09/99)


Version 3.3(beta1) (17/09/99)


Version 3.2 (06/07/99)


Version 3beta (20/1/98)


Version 2.18 (11/9/95)


Version 2.17 (24/7/95)


Version 2.16 (8/4/95)


Version 2.15 (14/4/95)


Version 2.14 (8/2/95)


Version 2.13 (3/2/95)


Version 2.12 (17/1/95)


Version 2.11 (4/1/95)


Version 2.10 (8/12/94)


Version 2.01 (28/9/94)


Version 2.00 (1/8/94)


Proposed or Requested changes



Menu and Toolbar Overview [Up][Contents][Index]

Menu and Toolbar Overview


Toolbar

The toolbar is divided into a logical sequence relating to the normal order of operations.


Main menu (any child window)


Input or model definition

As for main window and also:

Plot organiser

As for main window and also:

Plot viewer

This is a separate program with its own toolbar and menu. The toolbar can be used for many of the operations.

Toolbar

Open a document
Save current document
Print the active document
Change plot options
Copy picture to clipboard
Increase plot size
Decrease plot size
Go to the first page
Go to the previous page
Go to the next page
Go to the last page
Explore the calibration curve
Move left in the calibration curve
Move right in the calibration curve
Cover a wider area of the calibration curve
Cover a smaller area of the calibration curve
Display program information

Menu Structure



Index

Index

  • Acknowledgements
  • Adding extra plotting instructions [Entering Information]
  • Additional Information [MCMC Sampling]
  • Advanced settings [Performing the Analysis]
  • After [CQL Command Listing]
  • Agreement
  • Agreement indices
  • Likelihood indices
  • Overall agreement
  • Agreement index [Performing the Analysis]
  • Agreement indices [Agreement]
  • Aitchison T., B. Ottaway and A.S. Al-Ruzaiza [Bibliography and References]
  • Alteration of created plots [Graphical Display]
  • Analysis [Program Operation]
  • Archaeological and Environmental Considerations
  • author [OxCal Program Manual]
  • Axis [CQL Command Listing]
  • Batch Processing [Program Operation]
  • Bayes T.R. [Bibliography and References]
  • Bayliss A. [Bibliography and References]
  • Before [CQL Command Listing]
  • Bibliography and References
  • Aitchison T., B. Ottaway and A.S. Al-Ruzaiza
  • Bayes T.R.
  • Bayliss A.
  • Bowman S.
  • Bronk Ramsey C.
  • Bronk Ramsey C.
  • Bronk Ramsey C.
  • Bronk Ramsey C.
  • Bronk Ramsey C.
  • Bronk Ramsey C.
  • Bronk Ramsey C., J. van der Plicht and B. Weninger
  • Buck C.E., C.D. Litton and A.F.M. Smith
  • Buck C.E., C.D. Litton and E.M. Scott
  • Buck C.E., J.B. Kenworthy, C.D. Litton and A.F.M. Smith
  • Christen J.A. and C.D. Litton
  • Dekling H. and J. van der Plicht
  • Doran J.E. and F.R. Hodson eds.
  • FG McCormac, AG Hogg, PG Blackwell, CE Buck, TFG Higham, and PJ Reimer
  • Gelfand A.E. and A.F.M. Smith
  • Gilks W.R., S. Richardson and D.J.Speigelhalter
  • Harris E.C.
  • KA Hughen, MGL Baillie, E Bard, A Bayliss, JW Beck, C Bertrand, PG Blackwell, CE Buck, G Burr, KB Cutler, PE Damon, RL Edwards, RG Fairbanks, M Friedri
  • Kueppers, L. M., J. Southon, P. Baer, and J. Harte. 2004
  • Manning S.W. and B. Weninger
  • Needham S.
  • PJ Reimer, MGL Baillie, E Bard, A Bayliss, JW Beck, C Bertrand, PG Blackwell, CE Buck, G Burr, KB Cutler, PE Damon, RL Edwards, RG Fairbanks, M Friedri
  • Shennan S.
  • Steier P. and W. Rom
  • Stuiver M. and P.J. Reimer
  • Stuiver M. and P.J. Reimer
  • Stuiver M. and R.S. Kra eds.
  • Stuiver M. and T.F. Braziunas
  • Stuiver M., A. Long A., and R.S. Kra eds.
  • Stuiver M., P.J. Reimer and T.F.Braziunas
  • Stuiver M., P.J. Reimer, E. Bard, J.W. Beck, G.S. Burr, K.A. Hughen, B. Kromer, G. McCormac, J. van der Plicht and M. Spurk
  • van der Plicht J.
  • Boundaries
  • MCMC Sampling
  • Stratigraphic Information
  • Boundary [CQL Command Listing]
  • Bowman S. [Bibliography and References]
  • Bronk Ramsey C.
  • Bibliography and References
  • Bibliography and References
  • Bibliography and References
  • Bibliography and References
  • Bibliography and References
  • Bibliography and References
  • Bronk Ramsey C., J. van der Plicht and B. Weninger [Bibliography and References]
  • Buck C.E., C.D. Litton and A.F.M. Smith [Bibliography and References]
  • Buck C.E., C.D. Litton and E.M. Scott [Bibliography and References]
  • Buck C.E., J.B. Kenworthy, C.D. Litton and A.F.M. Smith [Bibliography and References]
  • C_Combine
  • CQL Command Listing
  • CQL Command Summary
  • C_Date [CQL Command Listing]
  • Calculate [CQL Command Listing]
  • Calculation options [Performing the Analysis]
  • Calculation Times [Performing the Analysis]
  • Calculations
  • Calendar and BP dates
  • Calendar dates and Asymmetric dates
  • Combinations and Wiggle Matching
  • First and Last Dated Events in a Group
  • Interpolation
  • Mixed calibration curves
  • Offset dates and Age Differences
  • Proportional errors and factors
  • Radiocarbon calibration
  • Range calculation
  • Reservoir corrections
  • Calendar and BP dates [Calculations]
  • Calendar dates and Asymmetric dates [Calculations]
  • Calibrating a Single Date [Getting Started with OxCal]
  • Calibration and Calculation [Performing the Analysis]
  • Calibration Curve [Performing the Analysis]
  • Calibration Data
  • Changing Calibration Curves
  • Default Calibration Curve
  • File Formats
  • Marine Curves and Corrections
  • Mixed Calibration Curves
  • Changing Calibration Curves [Calibration Data]
  • Chi squared test [Performing the Analysis]
  • Christen J.A. and C.D. Litton [Bibliography and References]
  • Chronological Information
  • Dating Simulation
  • Entering Information
  • Historical Information
  • Luminescence Dates
  • Other Dating Methods
  • Other Information
  • Radiocarbon Dates
  • Combination of Dates
  • Offset Dates
  • Other Dates
  • Radiocarbon Dates
  • Summing probability distributions
  • Wiggle Matching
  • Combinations [Entering Information]
  • Combinations and Wiggle Matching [Calculations]
  • Combine [CQL Command Listing]
  • Combine Example 1 [Tutorial Examples]
  • Combine Example 2 [Tutorial Examples]
  • Command line equivalents [Performing the Analysis]
  • Commands embedded in the model [Graphical Display]
  • Comment [CQL Command Listing]
  • Configuration File [File Formats and Directory Structure]
  • Constraints [MCMC Sampling]
  • Control over the Plotting Procedure [Graphical Display]
  • Converg.14l [Log File Examples]
  • Convergence [Performing the Analysis]
  • Correlate [CQL Command Listing]
  • Correlation between two events [Information from Analysis]
  • CQL Command Listing
  • After
  • Axis
  • Before
  • Boundary
  • C_Combine
  • C_Date
  • Calculate
  • Combine
  • Comment
  • Correlate
  • Curve
  • D_Sequence
  • Delta_R
  • Difference
  • Dose
  • Error
  • Event
  • Factor
  • First
  • Gap
  • Interval
  • Label
  • Last
  • Line
  • Mix_Curves
  • Offset
  • Order
  • Page
  • Phase
  • Plot
  • Prior
  • Question
  • R_Combine
  • R_Date
  • R_Simulate
  • Reservoir
  • Sequence
  • Shift
  • Span
  • Sum
  • TAQ
  • TPQ
  • V_Sequence
  • XReference
  • Year
  • CQL Command Summary
  • C_Combine
  • Dates
  • Doses
  • Entry of Values
  • Index guide
  • Nesting of Commands
  • R_Date
  • Strings
  • Cross Linking [Stratigraphic Information]
  • Curve [CQL Command Listing]
  • D_Sequence [CQL Command Listing]
  • Data Files [File Formats and Directory Structure]
  • Dates [CQL Command Summary]
  • Dating Simulation
  • Chronological Information
  • Entering Information
  • Default Calibration Curve [Calibration Data]
  • Default system options [Performing the Analysis]
  • Dekling H. and J. van der Plicht [Bibliography and References]
  • Delta_R [CQL Command Listing]
  • Difference [CQL Command Listing]
  • Difference and Interval Example [Tutorial Examples]
  • Doran J.E. and F.R. Hodson eds. [Bibliography and References]
  • Dose [CQL Command Listing]
  • Doses [CQL Command Summary]
  • Duration of phases and sequences [Information from Analysis]
  • Entering Information
  • Adding extra plotting instructions
  • Chronological Information
  • Combinations
  • Dating Simulation
  • Ordering of events
  • Probabilities of being before and after events
  • Removing lines from a command file
  • Requesting additional information from analysis
  • Stratigraphic Information
  • Entry of Values [CQL Command Summary]
  • Error [CQL Command Listing]
  • Error Messages
  • Event [CQL Command Listing]
  • Factor [CQL Command Listing]
  • FG McCormac, AG Hogg, PG Blackwell, CE Buck, TFG Higham, and PJ Reimer [Bibliography and References]
  • File Formats [Calibration Data]
  • File Formats and Directory Structure
  • Configuration File
  • Data Files
  • File Names
  • Plot Organiser Files
  • Relationship Files
  • Viewer Files
  • File Names [File Formats and Directory Structure]
  • First [CQL Command Listing]
  • First and last dated events [Information from Analysis]
  • First and Last Dated Events in a Group [Calculations]
  • Gap [CQL Command Listing]
  • Gelfand A.E. and A.F.M. Smith [Bibliography and References]
  • Getting Started with OxCal
  • Calibrating a Single Date
  • Producing a Multiple Plot
  • Running the Program
  • Wizards
  • Gilks W.R., S. Richardson and D.J.Speigelhalter [Bibliography and References]
  • Graphical Display
  • Alteration of created plots
  • Commands embedded in the model
  • Control over the Plotting Procedure
  • Modification in the plot organiser
  • Options
  • Overview of Plots
  • Plot Options
  • Style
  • Using Plots
  • Viewing the Calibration Curve
  • Harris E.C. [Bibliography and References]
  • Historical Information [Chronological Information]
  • Index
  • Index guide [CQL Command Summary]
  • Information from Analysis
  • Correlation between two events
  • Duration of phases and sequences
  • First and last dated events
  • Interval between two events
  • Reliability of stratigraphy
  • The ordering of events
  • Using Boundaries
  • Input [Performing the Analysis]
  • Input or model definition [Menu and Toolbar Overview]
  • Installation
  • Network installation
  • Interpolation [Calculations]
  • Interval [CQL Command Listing]
  • Interval between two events [Information from Analysis]
  • KA Hughen, MGL Baillie, E Bard, A Bayliss, JW Beck, C Bertrand, PG Blackwell, CE Buck, G Burr, KB Cutler, PE Damon, RL Edwards, RG Fairbanks, M Friedri [Bibliography and References]
  • Kueppers, L. M., J. Southon, P. Baer, and J. Harte. 2004 [Bibliography and References]
  • Label [CQL Command Listing]
  • Last [CQL Command Listing]
  • Likelihood indices [Agreement]
  • Line [CQL Command Listing]
  • Log File Examples
  • Converg.14l
  • Log.14l
  • Relate.14l
  • Tabbed.14l
  • Log files [Performing the Analysis]
  • Log.14l [Log File Examples]
  • Luminescence Dates [Chronological Information]
  • Main menu (any child window) [Menu and Toolbar Overview]
  • Manning S.W. and B. Weninger [Bibliography and References]
  • Marine Curves and Corrections [Calibration Data]
  • Mathematical Methods
  • MCMC Sampling
  • Additional Information
  • Boundaries
  • Constraints
  • Performing the Analysis
  • Probabilities
  • The Sampling Process
  • Menu and Toolbar Overview
  • Input or model definition
  • Main menu (any child window)
  • Plot organiser
  • Plot viewer
  • Toolbar
  • Mix_Curves [CQL Command Listing]
  • Mixed calibration curves [Calculations]
  • Mixed Calibration Curves [Calibration Data]
  • Model Building [Program Operation]
  • Modification in the plot organiser [Graphical Display]
  • Multiple Example [Tutorial Examples]
  • Needham S. [Bibliography and References]
  • Nesting of Commands [CQL Command Summary]
  • Network installation [Installation]
  • Offset [CQL Command Listing]
  • Offset Dates [Combination of Dates]
  • Offset dates and Age Differences [Calculations]
  • Options [Graphical Display]
  • Order [CQL Command Listing]
  • Order Example [Tutorial Examples]
  • Ordering of events [Entering Information]
  • Other Dates [Combination of Dates]
  • Other Dating Methods [Chronological Information]
  • Other Information [Chronological Information]
  • Overall agreement [Agreement]
  • Overall agreement for combinations [Performing the Analysis]
  • Overall agreement of models [Performing the Analysis]
  • Overview of Operation [Program Operation]
  • Overview of Plots [Graphical Display]
  • OxCal Bugs
  • OxCal Development
  • OxCal Program Manual
  • author
  • Page [CQL Command Listing]
  • Performing the Analysis
  • Advanced settings
  • Agreement index
  • Calculation options
  • Calculation Times
  • Calibration and Calculation
  • Calibration Curve
  • Chi squared test
  • Command line equivalents
  • Convergence
  • Default system options
  • Input
  • Log files
  • MCMC Sampling
  • Overall agreement for combinations
  • Overall agreement of models
  • Probabilities
  • Probabilities and agreement or likelihood indices
  • Ranges
  • Relationship files
  • Reporting
  • Resolution
  • Phase [CQL Command Listing]
  • Phase Example [Tutorial Examples]
  • Phases [Stratigraphic Information]
  • PJ Reimer, MGL Baillie, E Bard, A Bayliss, JW Beck, C Bertrand, PG Blackwell, CE Buck, G Burr, KB Cutler, PE Damon, RL Edwards, RG Fairbanks, M Friedri [Bibliography and References]
  • Plot [CQL Command Listing]
  • Plot and Results Organisation [Program Operation]
  • Plot Example 1 [Tutorial Examples]
  • Plot Example 2 [Tutorial Examples]
  • Plot Options [Graphical Display]
  • Plot organiser [Menu and Toolbar Overview]
  • Plot Organiser Files [File Formats and Directory Structure]
  • Plot viewer [Menu and Toolbar Overview]
  • Prior [CQL Command Listing]
  • Probabilities
  • MCMC Sampling
  • Performing the Analysis
  • Probabilities and agreement or likelihood indices [Performing the Analysis]
  • Probabilities of being before and after events [Entering Information]
  • Producing a Multiple Plot [Getting Started with OxCal]
  • Program Operation
  • Analysis
  • Batch Processing
  • Model Building
  • Overview of Operation
  • Plot and Results Organisation
  • Proportional errors and factors [Calculations]
  • Question [CQL Command Listing]
  • R_Combine [CQL Command Listing]
  • R_Date
  • CQL Command Listing
  • CQL Command Summary
  • R_Simulate [CQL Command Listing]
  • Radiocarbon calibration [Calculations]
  • Radiocarbon Dates
  • Chronological Information
  • Combination of Dates
  • Range calculation [Calculations]
  • Ranges [Performing the Analysis]
  • Relate.14l [Log File Examples]
  • Relationship Files [File Formats and Directory Structure]
  • Relationship files [Performing the Analysis]
  • Reliability of stratigraphy [Information from Analysis]
  • Removing lines from a command file [Entering Information]
  • Reporting [Performing the Analysis]
  • Requesting additional information from analysis [Entering Information]
  • Reservoir [CQL Command Listing]
  • Reservoir corrections [Calculations]
  • Resolution [Performing the Analysis]
  • Running the Program [Getting Started with OxCal]
  • Sequence [CQL Command Listing]
  • Sequence Example [Tutorial Examples]
  • Sequences [Stratigraphic Information]
  • Sequences with approximate gaps [Stratigraphic Information]
  • Sequences with known age gaps [Stratigraphic Information]
  • Shennan S. [Bibliography and References]
  • Shift [CQL Command Listing]
  • Span [CQL Command Listing]
  • Steier P. and W. Rom [Bibliography and References]
  • Stratigraphic Information
  • Boundaries
  • Cross Linking
  • Entering Information
  • Phases
  • Sequences
  • Sequences with approximate gaps
  • Sequences with known age gaps
  • Termini
  • Warning
  • Strings [CQL Command Summary]
  • Stuiver M. and P.J. Reimer
  • Bibliography and References
  • Bibliography and References
  • Stuiver M. and R.S. Kra eds. [Bibliography and References]
  • Stuiver M. and T.F. Braziunas [Bibliography and References]
  • Stuiver M., A. Long A., and R.S. Kra eds. [Bibliography and References]
  • Stuiver M., P.J. Reimer and T.F.Braziunas [Bibliography and References]
  • Stuiver M., P.J. Reimer, E. Bard, J.W. Beck, G.S. Burr, K.A. Hughen, B. Kromer, G. McCormac, J. van der Plicht and M. Spurk [Bibliography and References]
  • Style [Graphical Display]
  • Sum [CQL Command Listing]
  • Summing probability distributions [Combination of Dates]
  • Tabbed.14l [Log File Examples]
  • TAQ [CQL Command Listing]
  • Termini [Stratigraphic Information]
  • Terminus Ante Quem [Tutorial Examples]
  • The ordering of events [Information from Analysis]
  • The Sampling Process [MCMC Sampling]
  • Toolbar [Menu and Toolbar Overview]
  • TPQ [CQL Command Listing]
  • Tutorial Examples
  • Combine Example 1
  • Combine Example 2
  • Difference and Interval Example
  • Multiple Example
  • Order Example
  • Phase Example
  • Plot Example 1
  • Plot Example 2
  • Sequence Example
  • Terminus Ante Quem
  • Variable Sequence Example
  • Wiggle Matching Example 1
  • Wiggle Matching Example 2
  • Wiggle Matching Example 3
  • Using Boundaries [Information from Analysis]
  • Using Plots [Graphical Display]
  • V_Sequence [CQL Command Listing]
  • van der Plicht J. [Bibliography and References]
  • Variable Sequence Example [Tutorial Examples]
  • Viewer Files [File Formats and Directory Structure]
  • Viewing the Calibration Curve [Graphical Display]
  • Warning [Stratigraphic Information]
  • Wiggle Matching [Combination of Dates]
  • Wiggle Matching Example 1 [Tutorial Examples]
  • Wiggle Matching Example 2 [Tutorial Examples]
  • Wiggle Matching Example 3 [Tutorial Examples]
  • Wizards [Getting Started with OxCal]
  • XReference [CQL Command Listing]
  • Year [CQL Command Listing]