Once the likelihood probability distributions have been specified, OxCal provides tools for manipulating these in various ways. To some extent this has already been seen in the parameterisation of date information section above. More generally the aim of the program is to allow mathematical operations to be performed on parameters that have an uncertainty associated with them (expressed as a likelihood function) in just the same way as you would for exactly known parameters. These arithmetic operations are provided in addition to the original function-based methods.
The allowed operators are & | + - * / ( ) and any of the standard maths functions (abs, exp, ln, sqrt, sin, cos ... etc.). The following table summarises the use of the operators and the equivalent functions.
Original style example | Arithmetic style example | Constraint |
---|---|---|
Combine("C") { R_Date("A",2000,20); C_Date("B",20,30); }; | C = R_Date(2000,20) & C_Date(20,30); | tc=ta=tb |
Sum("C") { R_Date("A",2000,20); C_Date("B",20,30); }; | C = R_Date(2000,20) \| C_Date(20,30); | pc(tc) = pa(tc) + pb(tc) |
R_Date("A",2000,20); N("B",20,30); Shift("C","A","B"); | C = R_Date(2000,20) + N(20,30); | tc=ta+tb |
R_Date("A",2000,20); N("B",20,30); Difference("C","A","B"); | C = R_Date(2000,20) - N(20,30); | tc=ta-tb |
- NA - | C = N(2000,20) * N(2,0.1); | tc=ta*tb |
- NA - | C = N(2000,20) / N(2,0.2); | tc=ta/tb |
- NA - | C = 1 / N(2,0.2); | tc=1/ta |
The function Combine() equates to the & (AND) operator. The use of this defines that both distributions apply to the same parameter - in the case above that the event is both radiocarbon dated AND we know that it is somewhere around AD20. Exactly the same information can actually be expressed in terms of a cross reference:
C = R_Date(2000,20); C &= C_Date(20,30);
or:
R_Date("C",2000,20); C_Date("=C",20,30);
The function Sum() equates to the | (OR) operator. The use of this defines that either one OR the other distribution might apply to the same parameter. This logical operation is sometimes used to provide an estimate for the distribution of different parameters - however, this distribution is folded together with the uncertainty in those parameters and so can give a misleading impression. It certainly cannot be used as a substitute for proper Bayesian analysis as it takes no account per se of the implicit grouping of the parameters.
You can see the effect of all of these functions in the following examples (use [View > Plot parameters] to see the output after you have run the analysis):
And = N(200,20) & N(150,30);
Or = N(200,20) | N(150,30);
Plus = N(100,20) + N(150,30);
Minus = N(200,30) - N(20,40);
Times = N(100,20) * N(2,0.1);
Divide = N(300,20) / N(2,0.2);
This is used to apply more than one probability distribution to the same parameter. The combined probability is just the product of the two other distributions:

pc(t) ∝ pa(t) pb(t)
This is used when a parameter might equally well be sampled from either distribution. The combined probability is just the sum of the other distributions:

pc(t) ∝ pa(t) + pb(t)
In the MCMC coding the histogram for the parameter tc contains all samples of parameters ta and tb. The area of the histogram plotted is in proportion to the number of parameters sampled.
All of these operate in essentially the same way. In MCMC sampling, the independent parameters ta and tb are sampled according to their probabilities. The dependent parameter tc is then calculated and a histogram created for its distribution. Mathematically the probability distributions for tc are given respectively by:

pc(tc) ∝ ∫∫ pa(ta) pb(tb) δ(tc-(ta+tb)) dta dtb
pc(tc) ∝ ∫∫ pa(ta) pb(tb) δ(tc-(ta-tb)) dta dtb
pc(tc) ∝ ∫∫ pa(ta) pb(tb) δ(tc-(ta tb)) dta dtb
pc(tc) ∝ ∫∫ pa(ta) pb(tb) δ(tc-(ta/tb)) dta dtb
However these can more conveniently be worked out in terms of the following single integrals:
Constraint on tc | Integrated against ta | Integrated against tb |
---|---|---|
tc=ta+tb | pc(tc) ∝ ∫ pa(ta) pb(tc-ta) dta | pc(tc) ∝ ∫ pa(tc-tb) pb(tb) dtb |
tc=ta-tb | pc(tc) ∝ ∫ pa(ta) pb(ta-tc) dta | pc(tc) ∝ ∫ pa(tc+tb) pb(tb) dtb |
tc=ta*tb | pc(tc) ∝ ∫ pa(ta) pb(tc/ta) (1/ta) dta | pc(tc) ∝ ∫ pa(tc/tb) pb(tb) (1/tb) dtb |
tc=ta/tb | pc(tc) ∝ ∫ pa(ta) pb(ta/tc) (ta/tc²) dta | pc(tc) ∝ ∫ pa(tc tb) pb(tb) tb dtb |
tc=1/ta | pc(tc) ∝ pa(1/tc) (1/tc²) | |
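As a worked illustration of how these single integrals arise (a standard change-of-variables step, not an equation quoted from the manual), take the case tc=ta+tb: integrating the double integral over tb, the delta function fixes tb = tc-ta, leaving the convolution:

p_c(t_c) \propto \iint p_a(t_a)\, p_b(t_b)\, \delta\big(t_c-(t_a+t_b)\big)\, dt_a\, dt_b = \int p_a(t_a)\, p_b(t_c-t_a)\, dt_a

The other rows of the table follow in the same way, the extra Jacobian factors appearing in the product and quotient cases.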
In general if we have the relationship:

tc = f(ta)

then:

pc(tc) ∝ pa(f⁻¹(tc)) |d f⁻¹(tc)/dtc|
This is used for all functions such as exp, ln, sqrt, sin, cos ... etc. Note that for MCMC analysis, again the independent parameter ta can be sampled and the dependent parameter tc calculated and a resultant histogram built up for the marginal probability density. In all cases the independent parameter will be assumed to have a uniform prior - this means that in most cases the effective prior for the dependent parameter is not uniform.
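As a minimal sketch (the parameter names here are illustrative, not taken from the manual), such functions are applied directly to uncertain parameters:

// illustrative: log and square root of a normally distributed parameter
Logged = ln(N(100,10));
Rooted = sqrt(N(100,10));

The marginal densities for Logged and Rooted can then be viewed with [View > Plot parameters] in the same way as the operator examples above.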
In addition to wishing to do direct operations on parameters, it is frequently useful to be able to put constraints on them. There are different ways of expressing constraints. You might for example know that a particular sample must have a date between AD 870 and AD 1066 and that it is also radiocarbon dated to 1120±27 BP. This can be expressed in terms of a combination of two likelihoods:
R_Date(1120,27) & Date(U(870,1066));
However, more often the constraints are not precisely known and we know only the relative order of events. In order to deal with such situations four main functions are used: Sequence(), Phase(), After() and Before().
However, before we look at how these are used, we need to consider the grouping of events. The fact that events are related in some way almost always means that they are part of some group of events which needs to be treated as a whole. Failure to do this will mean that the events are assumed to be entirely independent apart from the constraints applied. A model with more than two events that makes this assumption always results in a wider spread than is realistic (see Steier and Rom 2000). OxCal will generate a warning if no groups have been defined and yet constraints are imposed.
Supposing we have a single group of events, we need to define a start event and an end event. All we know is that the start S occurs before the events A, B, C and D and that these all occur before the end event E. This can be expressed as:
Sequence() { Boundary("S"); Phase() { R_Date("A",3050,25); R_Date("B",3010,25); R_Date("C",3020,25); R_Date("D",3000,25); }; Boundary("E"); };
The Sequence() term is used to define elements or groups that are in a particular order. For shorter sequences such information can also be introduced using the operators < or >.
Now we can start to include some internal constraints. We might for example know that A < B < C < D or in other words that A is older than B which is older than C which is older than D. This can be expressed directly as:
Sequence() { Boundary("S"); R_Date("A",3050,25); R_Date("B",3010,25); R_Date("C",3020,25); R_Date("D",3000,25); Boundary("E"); }; |
Sequence() { Boundary("S"); Sequence() { R_Date("A",3050,25); R_Date("B",3010,25); R_Date("C",3020,25); R_Date("D",3000,25); }; Boundary("E"); }; |
Sequence() { Boundary("S"); R_Date("A",3050,25) < R_Date("B",3010,25) < R_Date("C",3020,25) < R_Date("D",3000,25); Boundary("E"); }; |
We might instead wish to define that A is older than both B and C (whose relative ages we do not know), and that these are in turn both older than D. This can be expressed as:
Sequence() { Boundary("S"); R_Date("A",3050,25); Phase() { R_Date("B",3010,25); R_Date("C",3020,25); }; R_Date("D",3000,25); Boundary("E"); }; |
Sequence() { Boundary("S"); Sequence() { R_Date("A",3050,25); Phase() { R_Date("B",3010,25); R_Date("C",3020,25); }; R_Date("D",3000,25); }; Boundary("E"); }; |
Sequence() { Boundary("S"); R_Date("A",3050,25) < (R_Date("B",3010,25) | R_Date("C",3020,25)) < R_Date("D",3000,25); Boundary("E"); }; |
Now supposing we introduce a terminus post quem T for event D. This is introduced using the After() statement (equivalent to the old TPQ statement), as in:
Sequence() { Boundary("S"); Sequence() { R_Date("A",3050,25); Phase() { R_Date("B",3010,25); R_Date("C",3020,25); }; After(R_Date("T",3100,30)); R_Date("D",3000,25); }; Boundary("E"); };
Note that T is not assumed to be part of the overall grouping. If you wish it to be then it may be easier to split up the model into a likelihood definition section and a model section as in:
// parameter and likelihood definitions
A=R_Date(3050,25);
B=R_Date(3010,25);
C=R_Date(3020,25);
D=R_Date(3000,25);
T=R_Date(3100,30);
// set up the group
Boundary("S") < (A|B|C|D|T) < Boundary("E");
// define the internal constraints
A < (B|C) < D;
T < D;
This type of definition allows complete freedom in model definition; however, it is consequently easier to make mistakes and the models are harder to follow. The definition of constraints in terms of Sequence() and Phase() can be mixed with those defined using <, > and |.
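For example, a minimal sketch (not from the original manual) mixing the two styles, on the assumption that a < constraint can be placed between elements of a Phase():

Sequence()
{
 Boundary("S");
 Phase()
 {
  // A is constrained to be older than B; C is unconstrained within the phase
  R_Date("A",3050,25) < R_Date("B",3010,25);
  R_Date("C",3020,25);
 };
 Boundary("E");
};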
Each constraint introduces another component in the prior probability function p(t). For example the constraint ta < tb introduces the factor:

H(tb-ta)
where H(x) is the Heaviside function which is zero if x is less than zero, a half if x is equal to zero and one if x is greater than zero.
More conveniently we define a function for multiple variables:

pH(t1, t2, ... tn) = H(t2-t1) H(t3-t2) ... H(tn-t(n-1))

which is one if the arguments are in the correct order:

t1 < t2 < ... < tn
and zero if they are not (strictly this should be half if they are all equal, though in practice as the numbers are real this has an infinitesimally small probability and is ignored).
In the case of the example above where we have the code:
A < (B|C) < D; T < D;
the prior will have a factor:

pH(ta, tb, td) pH(ta, tc, td) pH(tt, td)
The two functions Before() and After() are used in Bayesian models to define constraints as described above. In addition, during the calculation phase of the program these functions calculate a cumulative integration of the probability distribution operated on:

pc(t) ∝ ∫ pa(t') dt'

integrated from -∞ to t in the case of After(), and from t to ∞ in the case of Before().
As well as specific constraints it is important in any model to consider implicit groupings. As a minimum each model that contains constraints should contain at least one grouping. If this is not included in the model definition, you are assuming that all of the events are unrelated and independent. With such an underlying assumption it is very unlikely that all of the events will be clustered together and the output from the model will reflect this assumption.
The most frequently used assumption is that a group of events are randomly sampled from a uniform distribution - that is a random scatter of events between a start boundary and an end boundary (based on the original work of Buck et al. 1992). We have already seen how this is expressed in the previous section with the example of a single phase:
Sequence() { Boundary("S"); Phase() { R_Date("A",3050,25); R_Date("B",3010,25); R_Date("C",3020,25); R_Date("D",3000,25); }; Boundary("E"); };
Here you can see that the Boundary() events define the group within the sequence. In the previous section it is explained how to define whether the members of the group are constrained to be in a particular order.
The uniform prior is not the only one that can be applied, however. Different pairings of the specific Boundary commands:

Boundary()
Sigma_Boundary()
Tau_Boundary()
Zero_Boundary()

allow a range of different priors to be applied to the enclosed groups. The following table shows the distributions and their related simple Boundary functions.
Start boundary | End boundary | Distribution of events within the group |
---|---|---|
Boundary | Boundary | Uniform between the boundaries |
Sigma_Boundary | Sigma_Boundary | Gaussian, centred midway between the boundaries |
Sigma_Boundary | Boundary | Half-Gaussian, rising to a maximum at the end boundary |
Boundary | Sigma_Boundary | Half-Gaussian, falling away from a maximum at the start boundary |
Tau_Boundary | Boundary | Exponential, rising to a maximum at the end boundary |
Boundary | Tau_Boundary | Exponential, falling away from the start boundary |
Zero_Boundary | Boundary | Rising linearly from zero at the start boundary |
Boundary | Zero_Boundary | Falling linearly to zero at the end boundary |
To give a specific example the following gives the model for a group of events which is assumed to be exponentially distributed rising to a maximum event probability at the end event E:
Sequence()
{
 Tau_Boundary("T");
 Phase()
 {
  R_Date("A",3050,25);
  R_Date("B",3010,25);
  R_Date("C",3020,25);
  R_Date("D",3000,25);
 };
 Boundary("E");
};
Tau=(E-T);
Tau &= U(0,200);
In this example the last two lines define a parameter Tau which is the time constant for the exponential distribution, and a prior is assigned to this which is uniform between 0 and 200 years.
All boundary commands can take an additional argument which defines a likelihood for the boundary. For example, to limit the possible values of a boundary you can provide a uniform likelihood as in:
Boundary("E",Date(U(-1300,-1150)));
In addition to these simple Boundary models, which assume that there are specific events controlling the underlying process, OxCal can also deal with more gradual transitions using what are usually called 'trapezium' priors (Karlsberg 2006, Lee and Bronk Ramsey 2012).
For these we use a model that is similar to the uniform phase model but modify it so that the Boundary has an associated Transition period (and optional Start and End queries). The following example shows its application to a single phase:
Sequence() { Boundary("MidStart") { Transition("Duration Start"); Start("Start Start"); End("End Start"); }; Phase() { R_Date("A",3050,25); R_Date("B",3010,25); R_Date("C",3020,25); R_Date("D",3000,25); R_Date("E",3140,25); R_Date("F",3060,25); R_Date("G",3110,25); R_Date("H",3080,25); R_Date("I",3250,25); R_Date("J",3110,25); R_Date("K",3070,25); R_Date("L",3200,25); }; Boundary("Mid End") { Transition("Duration End"); Start("Start End"); End("End End"); }; };
For each boundary in such a trapezium model, the program can return the start of the transition period, the end and the midpoint (which is returned as the boundary value itself). The associated commands are Start(), End() and Transition().
The project manager has facilities to automatically create models covering the main groupings; these can be found under [Tools > Models].
In addition to these groupings a KDE_Model() command can be used as outlined in Bronk Ramsey 2017. Although this is really too few dates for such an analysis, the code should be along the lines of:

KDE_Model()
{
 R_Date("A",3350,25);
 R_Date("B",3310,25);
 R_Date("C",3320,25);
 R_Date("D",3200,25);
 R_Date("E",3340,25);
 R_Date("F",3260,25);
 R_Date("G",3210,25);
 R_Date("H",3180,25);
 R_Date("I",3350,25);
 R_Date("J",3210,25);
 R_Date("K",3070,25);
 R_Date("L",3200,25);
};
The mathematical formulation of all of these groupings is similar. In all cases we have two boundaries which define the group. These are assumed to be independent parameters ta and tb of the model with uniform priors, subject to the constraint ta < tb. The members of the group ti have priors which are dependent on ta and tb. In all cases the prior for the span of the group tb-ta is uniform.
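Restated as a single worked equation (a paraphrase of the text above, not a formula quoted from the manual), the joint prior for a group of n events between the two boundaries factorises as:

p(t_a, t_b, t_1, \ldots, t_n) \propto H(t_b - t_a) \prod_{i=1}^{n} p(t_i \mid t_a, t_b)

where the conditional priors p(ti | ta, tb) are those given in the table below.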
Type of ta | Prior for elements of the group ti | Type of tb |
---|---|---|
Boundary | pH(ta, ti, tb)/(tb-ta) | Boundary |
Sigma_Boundary | [1/((tb-ta)√(π/2))] exp(-(2ti-tb-ta)²/(2(tb-ta)²)) | Sigma_Boundary |
Sigma_Boundary | [pH(ti, tb)/((tb-ta)√(2π))] exp(-(ti-tb)²/(2(tb-ta)²)) | Boundary |
Boundary | [pH(ta, ti)/((tb-ta)√(2π))] exp(-(ti-ta)²/(2(tb-ta)²)) | Sigma_Boundary |
Tau_Boundary | [pH(ti, tb)/(tb-ta)] exp(-(tb-ti)/(tb-ta)) | Boundary |
Boundary | [pH(ta, ti)/(tb-ta)] exp(-(ti-ta)/(tb-ta)) | Tau_Boundary |
Zero_Boundary | 2 pH(ta, ti, tb) (ti-ta)/(tb-ta)² | Boundary |
Boundary | 2 pH(ta, ti, tb) (tb-ti)/(tb-ta)² | Zero_Boundary |
See section on constraints for details of the pH function.
So for example if we have a single phase within two Boundary() elements, the overall prior for this group is given by:

∏i pH(ta, ti, tb)/(tb-ta)
which, subject to the constraints, is just proportional to (tb-ta)⁻ⁿ where there are n elements in the group (cf. Buck et al. 1992).
To take another example, for an exponentially rising distribution of events (up to some terminating event tb), the overall prior is:

∏i [pH(ti, tb)/(tb-ta)] exp(-(tb-ti)/(tb-ta))
In many models there are multiple groups.
Where one group is nested within another, the outer boundaries of the inner group are treated in exactly the same way as any other elements within the outer group.
Where you have a whole series of boundaries, each group segment between each pair of boundaries is treated in the same way, as described in the equations above. However, as it is desirable that the prior for the overall span of such a sequence of groups is independent of the number of sub-groups, we use the following model.
Let the boundaries be described by parameters ta, tb ... tn such that there are n boundaries in total. As they are in a sequence of some sort we have the constraint that ta < tb < ... < tn.
We assume that ta and tn are otherwise independent and have uniform priors. The normalised prior for any of the other boundaries t is then just:

p(t) = pH(ta, t, tn)/(tn-ta)
Overall this gives a factor in the prior (in addition to the constraints) which is:

1/(tn-ta)ⁿ⁻²
This is as recommended by Nicholls and Jones 2001 and is included in the overall prior unless the option UniformSpanPrior is set to off.
In addition, if there are limits on the range of the outer boundaries under consideration (because of constraints or simply because the program has to set limits somewhere) this can have an effect on the prior for the span of the overall group. This is because the number of possible combinations of solutions depends on the overall span. To make the prior for the overall span uniform despite these limits, the following factor is added to the prior unless the UniformSpanPrior option is set to off:
where llima and ulima are the lower and upper limits on boundary ta. This is a further extension of a similar factor suggested by Nicholls and Jones 2001.
OxCal provides a number of models relevant to depositional sequences. For more detail on these models see Bronk Ramsey 2008 (pre-print available).
The models cover the whole range from the defined sequence D_Sequence(), where the exact age gaps between elements of the model are known, to the general Sequence(), where all that we know is the order of the events: D_Sequence(), V_Sequence(), U_Sequence(), P_Sequence() and Sequence().
The D_Sequence() function, which is most often used for the wiggle-match dating of radiocarbon dated tree ring sequences, actually performs a special kind of combination (see Bronk Ramsey et al. 2001). By using the Gap() command, the gaps between elements of the sequence are defined. In fact the Combine() function can be used to achieve the same result, though in this case the Gap() command for each element should define the gap to the final combined event (so in the example below A lies 70 years before the final event H, B 60 years before, and so on). This is illustrated in the following examples which perform the same calculation:
D_Sequence()
{
 R_Date("A",2023,20);
 Gap(10);
 R_Date("B",1961,20);
 Gap(10);
 R_Date("C",1999,20);
 Gap(10);
 R_Date("D",1966,20);
 Gap(10);
 R_Date("E",1954,20);
 Gap(10);
 R_Date("F",1936,20);
 Gap(10);
 R_Date("G",1948,20);
 Gap(10);
 R_Date("H",1925,20);
};

or:

Combine()
{
 R_Date("A",2023,20);
 Gap(70);
 R_Date("B",1961,20);
 Gap(60);
 R_Date("C",1999,20);
 Gap(50);
 R_Date("D",1966,20);
 Gap(40);
 R_Date("E",1954,20);
 Gap(30);
 R_Date("F",1936,20);
 Gap(20);
 R_Date("G",1948,20);
 Gap(10);
 R_Date("H",1925,20);
};
In both of the above methods the overall function (D_Sequence() or Combine()) generates a probability distribution function for the final event (event H).
The V_Sequence() method, carried forward from previous versions of OxCal, extends the D_Sequence() methodology to allow for uncertainty in the gaps between events. The events are also constrained to be in order (so the Normal uncertainty is truncated at zero). In the current version the same can be achieved by applying a prior to the interval between events in a normal sequence. The two following models are equivalent:
V_Sequence()
{
 Boundary("Start");
 Gap(10,5);
 R_Date("A",2023,20);
 Gap(10,5);
 R_Date("B",1961,20);
 Gap(10,5);
 R_Date("C",1999,20);
 Gap(10,5);
 R_Date("D",1966,20);
 Gap(10,5);
 R_Date("E",1954,20);
 Gap(10,5);
 R_Date("F",1936,20);
 Gap(10,5);
 R_Date("G",1948,20);
 Gap(10,5);
 R_Date("H",1925,20);
 Gap(10,5);
 Boundary("End");
};

or:

Sequence()
{
 Boundary("Start");
 Interval(N(10,5));
 R_Date("A",2023,20);
 Interval(N(10,5));
 R_Date("B",1961,20);
 Interval(N(10,5));
 R_Date("C",1999,20);
 Interval(N(10,5));
 R_Date("D",1966,20);
 Interval(N(10,5));
 R_Date("E",1954,20);
 Interval(N(10,5));
 R_Date("F",1936,20);
 Interval(N(10,5));
 R_Date("G",1948,20);
 Interval(N(10,5));
 R_Date("H",1925,20);
 Interval(N(10,5));
 Boundary("End");
};
The latter method is potentially more powerful as any prior can be applied to the interval.
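For example, a minimal sketch (not from the original manual) in which a uniform prior on an interval simply constrains the gap between two of the dates to be at most 50 years:

Sequence()
{
 Boundary("Start");
 R_Date("A",2023,20);
 Interval(U(0,50)); // gap between A and B constrained to lie between 0 and 50 years
 R_Date("B",1961,20);
 Boundary("End");
};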
In many cases the information available does not directly relate to time intervals. Instead information might be available about depth of samples in a sequence. This information tells us about the relative length of intervals. In the D_Sequence case above we defined the time gap between events precisely (as we might be able to do in the case for tree-ring sequences, for example). Supposing instead we knew that the events were equally spaced in time. This is the situation if we assume a uniform deposition rate in a sedimentary sequence and have depth information relating to the dated events. This situation is dealt with using the U_Sequence() function and assigning depth information as in the following example:
U_Sequence()
{
 Boundary();
 R_Date("A",2023,20){ z=70; };
 R_Date("B",1961,20){ z=60; };
 R_Date("C",1999,20){ z=50; };
 R_Date("D",1966,20){ z=40; };
 R_Date("E",1954,20){ z=30; };
 R_Date("F",1936,20){ z=20; };
 R_Date("G",1948,20){ z=10; };
 R_Date("H",1925,20){ z=0; };
 Boundary();
};
If you view the results of this analysis as a plot against depth you will see the age-depth model generated.
In practice, of course, an assumption of absolutely uniform deposition is not realistic. Neither is assuming that all we know is that the dated events occur in a specific order (as we get from the Sequence() algorithm). We need to allow for fluctuations in the deposition rate. Such a model can be generated using the Sequence algorithm by introducing an undated event at regular intervals in the sequence; the more finely spaced these events are, the more rigidly uniform the deposition will be. In practice such an approach is very cumbersome, as a model covering a depth of 1m with events every 1mm would have a thousand parameters. However, OxCal provides a function which emulates this - the P_Sequence() function. This takes as an argument (k) the number of postulated events per unit length. For sedimentary sequences the 'event scale' is typically in the 1mm-1cm range. In order to generate an age-depth model the dated events can be interpolated automatically by specifying a second argument for the function. The following code gives two examples based on depths in centimetres: the first has an event spacing of 0.1cm (k = 10 cm⁻¹) and the second an event spacing of 1cm (k = 1 cm⁻¹); the latter allows more major fluctuations in the deposition rate, though in practice in this case the outputs are indistinguishable:
P_Sequence(10)
{
 Boundary();
 R_Date("A1",2023,20){ z=70; };
 R_Date("B1",1961,20){ z=60; };
 R_Date("C1",1999,20){ z=50; };
 R_Date("D1",1966,20){ z=40; };
 R_Date("E1",1954,20){ z=30; };
 R_Date("F1",1936,20){ z=20; };
 R_Date("G1",1948,20){ z=10; };
 R_Date("H1",1925,20){ z=0; };
 Boundary();
};
P_Sequence(1)
{
 Boundary();
 R_Date("A2",2023,20){ z=70; };
 R_Date("B2",1961,20){ z=60; };
 R_Date("C2",1999,20){ z=50; };
 R_Date("D2",1966,20){ z=40; };
 R_Date("E2",1954,20){ z=30; };
 R_Date("F2",1936,20){ z=20; };
 R_Date("G2",1948,20){ z=10; };
 R_Date("H2",1925,20){ z=0; };
 Boundary();
};
The P_Sequence() function in principle allows you to provide models of any level of rigidity - if the parameter k is made high enough the model will become very similar to the U_Sequence() and if it is made very low it will become similar to the plain Sequence().
It is often useful to get age distributions for points in a deposition sequence that are not directly dated. This can be done in two different ways. First, you can use the Date() command with a specified depth. For example, if you had no dating information at depth z=40 in the U_Sequence() example above, you could replace the radiocarbon date with:
Date("D"){ z=40; };
OxCal also allows you to interpolate automatically at regular intervals using the optional interpolation parameter of U_Sequence() and P_Sequence().
The interpolation parameter is in events per unit length (the same as k for the P_Sequence), so if your depth is in m and you want an output every 5cm you would set the interpolation parameter to 20. If the depth is in cm the same could be achieved with a value of 0.2. So in the U_Sequence() example above you would replace the first command with:
U_Sequence(0.2)
You can also extract the age-depth model from the results of the analysis very easily. Just click on the raw view icon (≡) in the output table, next to the U_Sequence or P_Sequence line. This will present you with a comma-delimited list of all depths with their associated age measures. This can be saved in csv format to be opened in a spreadsheet.
In many cases it is hard to decide what the right value of the parameter k should be for a particular model. This is allowed for by letting the model average over different values of k. Because the possible values vary by many orders of magnitude, the variation is defined by setting a nominal k value k0 (typically 1 for cm or 100 for m) and then defining a prior for log10(k/k0), which might allow variation by two orders of magnitude in either direction. Thus for the example above, allowing for interpolation twice per cm, we could have:
P_Sequence("variable",1,2,U(-2,2)) { Boundary(); R_Date("A",2553,20){ z=70; }; R_Date("B",2541,20){ z=60; }; R_Date("C",2499,20){ z=50; }; R_Date("D",2366,20){ z=40; }; R_Date("E",2254,20){ z=30; }; R_Date("F",2136,20){ z=20; }; R_Date("G",2048,20){ z=10; }; R_Date("H",1925,20){ z=0; }; Boundary(); };
This approach allows the model to find the most appropriate value of k, and so you don't have to make any arbitrary assumptions. Of course there may be cases where there is specific information that defines the most appropriate value for k in which case specifying it, or narrowing the range is the right thing to do. Also keep in mind that you can use cross referencing to ensure that the same value of k is used for more than one P_Sequence.
[Interactive diagram omitted: in the online manual, simulated deposition scenarios can be viewed for D_Sequence, Sequence, P_Sequence(1), P_Sequence(2) and U_Sequence cores by clicking on each core; the key shows sample levels, known age gaps and assumed step sizes.]
The mathematical details for the deposition models are given in Bronk Ramsey 2008 (pre-print available).
So far the commands considered have been primarily for adding information to a model. Other commands are also available for extracting information.
One way to extract information is simply to calculate new dependent parameters. For example in the exponential example above the statement:
Tau=(E-T);
allows a probability distribution for the difference between E and T to be generated. The same thing can be achieved using the Difference() query:
Difference("Tau","E","T");
There are several commands specifically intended for queries: Sum(), KDE_Plot(), Order(), MCMC_Sample(), First(), Last(), Span(), Interval(), Difference(), Shift(), Correlation(), Correl_Matrix(), Covar_Matrix() and Outlier().
Sum() can also be used directly as a function, as described in the section about operations on probability distributions, where you can also see a caveat about its use. However, if applied without any arguments, the sum of the distributions will be found for the enclosing group. The same is true for the KDE_Plot() function, which will provide a kernel density distribution for the samples from the MCMC, and the Order() function, which finds the probabilities of pairs of elements being in a particular order. The MCMC_Sample() function acts like the Order() function but writes a file with all, or a selection, of the MCMC samples for the group. This enables you to study the details of the MCMC analysis.
The three functions First(), Last() and Span() can also be used in two ways. If they are given more than one parameter they operate as functions; otherwise (as in normal use) they interrogate the surrounding group. These three commands can be given an additional argument which provides a prior for the quantity. Thus the command:
First("First in phase",Date(U(AD(1066),AD(1100))));
would constrain the first item in the group to lie somewhere between AD1066 and AD1100. The following piece of code uses all of these first five queries to find out about the phase:
Sequence() { Boundary(); Phase() { R_Date("A",2145,25); R_Date("B",2235,26); R_Date("C",2112,23); R_Date("D",2083,23); Sum(); Order(); First(); Last(); Span(); }; Boundary(); };
The Interval() query finds the gap between events or groups of events in a sequence. A prior probability can also be applied, as can be seen in the deposition example above.
Difference() and Shift(), carried over from previous versions of OxCal, allow you to find the probability distributions for the difference between two parameters or for one parameter added to another. See the section on operations on probability distributions.
The Correlation() function simply allows you to plot one distribution against another to see the extent to which the parameters are correlated. The Correl_Matrix() and Covar_Matrix() functions on the other hand generate quantitative correlation and covariance matrices respectively. The following shows their use in a simple example:
P_Sequence(1) { Boundary("Start"); R_Date("A",2023,20){ z=70; }; R_Date("B",1961,20){ z=60; }; R_Date("C",1999,20){ z=50; }; R_Date("D",1966,20){ z=40; }; R_Date("E",1954,20){ z=30; }; R_Date("F",1936,20){ z=20; }; R_Date("G",1948,20){ z=10; }; R_Date("H",1925,20){ z=0; }; Boundary("End"); Correlation("Correl","D","E"); Correl_Matrix(); Covar_Matrix(); };
The Outlier() command tags an element as an outlier and takes it out of the model; in some cases (such as a sequence) the program will calculate the probability that a sample is at a particular point in the sequence. The following simple code fragment shows the syntax to be used.
Sequence() { Boundary("S"); Phase() { R_Date("A",3050,25); R_Date("B",3010,25){Outlier();}; R_Date("C",3020,25); R_Date("D",3000,25); }; Boundary("E"); };
Using the input utility, individual items can be questioned in this way by selecting them and then inserting the Outlier() command. See the next section for more complex outlier analysis.
Most of the query commands here simply provide marginal densities for dependent parameters. In the following cases we will assume that the queries are applied to a group of events ta, tb, tc,...:
Function | Definition |
---|---|
R=Sum() | See the Sum() function and the \| (OR) operator in operations on probability distributions |
Order() | Uses MCMC to find probability ta<tb, ta<tc... etc. |
R=First() | tr=min(ta, tb, tc,...) |
R=Last() | tr=max(ta, tb, tc,...) |
R=Span() | tr=max(ta, tb, tc,...) - min(ta, tb, tc,...) |
The following require specific arguments | |
Difference("R","A","B") | tr=ta-tb |
Shift("R","A","B") | tr=ta+tb |
Correlation("R","A","B") | Provides a probability density plot of tb against ta |
The Interval() query depends on its place in the sequence. The function returns the difference between the maximum of the preceding elements and the minimum of the elements following the query.
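Formally (a restatement of the sentence above rather than a formula from the manual), for an Interval() query placed between preceding elements ti and following elements tj:

t_r = \min_j(t_j) - \max_i(t_i)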
When there are observations pertaining directly to one of the queried parameters, a likelihood distribution p(yr|tr) can be defined. This will be used in calculation of the global posterior.
When an item has the Outlier() function applied, the prior for that parameter is reduced to the simple uniform prior, and thus the prior and posterior probability densities are the same. In addition, if the parameter would otherwise have been subject to a constraint, the probability of the constraint being true is calculated (during the MCMC analysis) and reported (in column P of the output table). In the case of parameters questioned in this way in a Combine() group or D_Sequence(), the same parameter can be estimated from all of the other information in the model. We will denote all of the information in the model except for yr as y(r). We can then compare the likelihoods of these two estimates; the resulting value is presented in column L of the output table.
The outlier analysis methods included in OxCal are a development of those worked out by Andres Christen (Christen 1994) and are fully explained in Bronk Ramsey 2009.
The two commands which are used to invoke outlier analysis are Outlier_Model() and Outlier().
The Outlier_Model() command sets up the outlier model and the Outlier() command is used to apply it to an individual radiocarbon date or other measurement. In order to set up the model we need to know the expected form of the distribution of the outlier offsets, the scale of those offsets (expressed as a power of ten, which can itself be given a prior), and the type of offset involved (given as "t", "s" or "r" in the examples below).
There are a number of example outlier model definitions built into the Model tool in OxCal; these are:
Outlier_Model("General",T(5),U(0,4),"t"); Outlier_Model("SSimple",N(0,2),0,"s"); Outlier_Model("RSimple",N(0,100),0,"r"); Outlier_Model("TSimple",N(0,100),0,"t"); Outlier_Model("RScaled",T(5),U(0,4),"r"); Outlier_Model("Charcoal",Exp(1,-10,0),U(0,3),"t");
All of these are described in detail in Bronk Ramsey 2009, but two are worth describing here as examples. The "General" model treats outliers as offsets on the timescale ("t"), drawn from a long-tailed Student's t distribution (T(5)) with a scale of anywhere between 1 and 10⁴ years (the U(0,4) prior on the power of ten). The "Charcoal" model also applies on the timescale but uses an exponential distribution defined over the range -10 to 0 (Exp(1,-10,0)), so that samples can only be older than their context, with a scale of up to 10³ years (U(0,3)).
For each sample that is to be included in the outlier analysis we must give a prior probability for the measurement being an outlier - this is included in the Outlier command. The following example shows how this is used in practice.
Plot()
{
 Outlier_Model("General",T(5),U(0,4),"t");
 Sequence()
 {
  Boundary("");
  Sequence("")
  {
   R_Date(3095,25){ Outlier(0.05); };
   R_Date(3028,25){ Outlier(0.05); };
   R_Date(2951,25){ Outlier(0.05); };
   R_Date(2888,25){ Outlier(0.05); };
   R_Date(3134,25){ Outlier(0.05); };
   R_Date(2759,25){ Outlier(0.05); };
   R_Date(2590,25){ Outlier(0.05); };
  };
  Boundary("");
 };
};
The mathematical details for outlier analysis are given in Bronk Ramsey 2009.