
Powder diffraction: least-squares and beyond.



This paper addresses some of the underlying statistical assumptions and issues in the collection and refinement of powder diffraction data. While standard data collection and Rietveld analysis have been extremely successful in providing structural information on a vast range of materials, there is often uncertainty about the true accuracy of the derived structural parameters. In this paper, we discuss a number of topics concerning data collection and the statistics of data analysis. We present a simple new function, the cumulative chi-squared distribution, for assessing regions of misfit in a diffraction pattern and introduce a matrix which relates the impact of individual points in a powder diffraction pattern with improvements in the estimated standard deviation of refined parameters. From an experimental viewpoint, we emphasise the importance of not over-counting at low angles and the routine use of a variable counting scheme for data collection. Data analysis issues are discussed within the framework of maximum likelihood, which incorporates the current least-squares strategies but also enables the impact of systematic uncertainties in both observed and calculated data to be reduced.

Keywords: least squares analysis; powder diffraction; Rietveld analysis.

**********

1. Introduction

We can improve the quality of the structural results obtained from a powder diffraction pattern by a number of means. Firstly and most importantly, sufficient care should be taken in performing a good experiment and the observed diffraction data should be as free from systematic errors as possible. Due attention should be given to all parts of the diffraction pattern. The relative importance of, for example, low- and high-angle regions of a diffraction pattern should be assessed before performing the experiment and consideration paid to the balance of data collection statistics across the diffraction pattern. With structure solution and refinement from x-ray powder diffraction data, we stress the importance of a variable counting scheme that puts substantially increased weight on the high-angle reflections and explain why over-counting low-angle reflections can be deleterious to obtaining accurate structural parameters.

After determining the best data collection protocol, the next consideration for obtaining good quality structural results is ensuring that the calculated diffraction pattern is modelled well. For example, a good understanding of the profile line shape through a fundamental parameters technique pays dividends in obtaining a good fit to the Bragg peak shape.

On first thought, it might be expected that the combination of a careful experiment followed by careful modelling of the diffraction data is all that need be considered to obtain good structural information. However, there is an important third facet that is rarely actively considered and indeed generally taken for granted--the algorithm behind fitting the model to the data. We generally assume that least-squares analysis is sufficient and indeed it is often so. However, least-squares is usually employed "because that's the way it has always been done" rather than because of a positive consideration of its applicability. This mirrors the experimental situation mentioned earlier where constant-time data-collection approaches are still often preferred over variable counting-time strategies despite the fact that it has been known for at least a decade that the latter procedure gives better, more accurate results for x-ray powder diffraction data [1, 2].

The underlying principles of probability theory indicate that least-squares analysis is appropriate only if (i) the data points have an associated Gaussian error distribution and (ii) the proposed model is a complete representation of the observed data. Although these conditions appear to be rather restrictive, they are nevertheless broadly satisfied in most Rietveld analyses. One exception to standard least-squares analysis that was discussed several years ago is the situation where the counts per data point are low (≤ 20) and follow a Poisson rather than a Gaussian distribution. Antoniadis et al. showed that a maximum likelihood refinement with due account given to Poisson counting statistics was the correct approach [3]. Indeed, maximum likelihood and Bayesian probability theory offer the correct formalism for considering all data and model uncertainties; least-squares analysis is just one, albeit relatively general, instance of maximum likelihood. Careful consideration of the physical origins of uncertainties in either data errors or insufficiencies in the structural model leads to probability distribution functions that must be optimised through maximum likelihood methods.

The fundamental statistics approach that looks for a physical understanding of the uncertainties in a powder diffraction pattern is in many ways analogous to the fundamental parameters approach used in peak shape analysis. Both methods of analysis lead to more reliable results. In this paper, several applications of maximum likelihood that go beyond least-squares analysis are discussed. These include dealing with unknown systematic errors in the data, unattributable impurity phases and incomplete structural model descriptions.

2. Assessing the Quality of a Rietveld Refinement

Before considering how we can optimise our chances of success using improved data collection methods or alternative statistical approaches, it is worth benchmarking the statistical quality of the Rietveld fit to a powder diffraction pattern. The conventional goodness-of-fit quantities used in the Rietveld method are the standard R-factors and $\chi^2$ quantities. The following four R-factors are generally quoted in most Rietveld refinement programs:

expected R-factor:

$$R_E = \sqrt{\frac{N - P + C}{\sum_{i=1}^{N} w_i y_i^2}} \quad (1a)$$

weighted profile R-factor:

$$R_{wP} = \sqrt{\frac{\sum_{i=1}^{N} w_i (y_i - M_i)^2}{\sum_{i=1}^{N} w_i y_i^2}} \quad (1b)$$

profile R-factor:

$$R_P = \sqrt{\frac{\sum_{i=1}^{N} (y_i - M_i)^2}{\sum_{i=1}^{N} y_i^2}} \quad (1c)$$

Bragg R-factor:

$$R_B = \frac{\sum_h \left| I_h^{\mathrm{obs}} - I_h^{\mathrm{calc}} \right|}{\sum_h I_h^{\mathrm{obs}}} \quad (1d)$$

The expected R-factor is basically as good as the weighted profile R-factor can get, since the weighted sum of the squares of the differences between observed and calculated profile values, $\sum_{i=1}^{N} w_i (y_i - M_i)^2$, can at best be equal to the number of independent data, $(N - P + C)$, in the diffraction pattern, since each weighted squared profile difference in a best fit to the data should be equal to unity. In a standard x-ray powder diffraction pattern, the weight $w_i$ is equal to $1/y_i$. Since $N$ is generally much larger than either $P$ or $C$, the expected profile R-factor can be rewritten as

$$R_E = \sqrt{\frac{N - P + C}{\sum_{i=1}^{N} w_i y_i^2}} \approx \sqrt{\frac{N}{\sum_{i=1}^{N} y_i^2 / y_i}} \approx \frac{1}{\sqrt{\langle y \rangle}}. \quad (2)$$
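These identities are easy to verify numerically. The sketch below uses a small hypothetical profile (invented numbers, counting-statistics weights $w_i = 1/y_i$) to evaluate Eqs. (1a), (1b) and (4) and to illustrate the approximation of Eq. (2):

```python
import math

# Toy observed profile y_i and model M_i (hypothetical values for illustration).
y = [105.0, 210.0, 420.0, 300.0, 150.0, 95.0]
M = [100.0, 200.0, 430.0, 310.0, 145.0, 90.0]
N, P, C = len(y), 2, 0           # N data points, P parameters, C constraints
w = [1.0 / yi for yi in y]       # counting-statistics weights, w_i = 1/y_i

# Eq. (1a): expected R-factor
R_E = math.sqrt((N - P + C) / sum(wi * yi**2 for wi, yi in zip(w, y)))
# Eq. (1b): weighted profile R-factor
R_wP = math.sqrt(sum(wi * (yi - Mi)**2 for wi, yi, Mi in zip(w, y, M))
                 / sum(wi * yi**2 for wi, yi in zip(w, y)))
# Eq. (4): normalised chi-squared; identical to (R_wP / R_E)^2
chi2 = sum(wi * (yi - Mi)**2 for wi, yi, Mi in zip(w, y, M)) / (N - P + C)

print(R_E, R_wP, chi2)
# Eq. (2): for N >> P, R_E approaches 1/sqrt(<y>)
print(1.0 / math.sqrt(sum(y) / N))
```

The $\chi^2 = (R_{wP}/R_E)^2$ identity is exact whatever the data, which is why the ratio of the two R-factors is a more robust quality measure than either alone.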

The expected profile R-factor is thus equal to the reciprocal of the square root of the average value of the profile points. A small expected profile R-factor is simply a statement about quantity and means that the average number of counts in a profile is large--it bears no relationship to the quality of a profile fit. In particular, if the diffraction pattern consists of weak peaks on top of a high background, then the expected R-factor can be very low. For an average background count of 10 000, for example, the expected R-factor will be 1 % or lower irrespective of the height of the Bragg peaks. This has led to a preference for quoting background-subtracted (b-s) R-factors,

(b-s) expected R-factor:

$$R_{(b\text{-}s)E} = \sqrt{\frac{N - P + C}{\sum_{i=1}^{N} w_i (y_i - b_i)^2}} \quad (3a)$$

(b-s) weighted profile R-factor:

$$R_{(b\text{-}s)wP} = \sqrt{\frac{\sum_{i=1}^{N} w_i (y_i - M_i)^2}{\sum_{i=1}^{N} w_i (y_i - b_i)^2}}. \quad (3b)$$

The (b-s) expected R-factor gives a much more realistic measure of the quality of the data ($R_{(b\text{-}s)E} \approx 1/\sqrt{\langle (y - b)^2 / y \rangle}$) and the (b-s) weighted R-factor to both the quality of the data and the quality of the fit to the data. However, even so there are caveats. Very fine profile steps in a diffraction pattern lead to higher expected R-factors. For a given diffraction pattern, doubling the step size (i.e., grouping points together in pairs) will lead to an expected R-factor that is roughly $\sqrt{2}$ smaller than before. Additionally, R-factors may be quoted for either the full profile or only those profile points that contribute to Bragg peaks. In themselves, therefore, profile R-factors treated individually are at best indicators of the quality of the data and the fit to the data. However, the ratio of weighted profile to expected profile R-factors is a good measure of how well the data are fitted. Indeed, the normalised $\chi^2$ function is simply the square of the ratio of $R_{wP}$ and $R_E$:

$$\chi^2 = \frac{\sum_{i=1}^{N} w_i (y_i - M_i)^2}{N - P + C} = \left(\frac{R_{wP}}{R_E}\right)^2 = \left(\frac{R_{(b\text{-}s)wP}}{R_{(b\text{-}s)E}}\right)^2 \quad (4)$$

(Note that the R-factor ratio holds whether or not the background has been subtracted in the calculation of the R-factor. The $\chi^2$ value will change, however, if only those points that contribute to Bragg peaks are considered instead of the full diffraction pattern.)

Bragg R-factors are quoted as an indicator of the quality of the fit between observed and calculated integrated intensities. It has been shown that the correct integrated intensity R-factor can be obtained from a Pawley or Le Bail analysis [4], where the extracted "clumped" integrated intensities, $\langle J_h \rangle = \sum \langle I_h \rangle$, are matched against the calculated "clumped" intensities, $J_h = \sum I_h$, through the following equations:

expected $R_I$-factor:

$$R_{E,I} = \sqrt{\frac{n - P + C}{\sum_{h,k} \langle J_h \rangle W_{hk} \langle J_k \rangle}} \quad (5a)$$

$R_I$-factor:

$$R_I = \sqrt{\frac{\sum_{h,k} (\langle J_h \rangle - J_h) W_{hk} (\langle J_k \rangle - J_k)}{\sum_{h,k} \langle J_h \rangle W_{hk} \langle J_k \rangle}} \quad (5b)$$

where a "clump" is a group of completely overlapped reflections and the weight matrix $W_{hk}$ is the associated Hessian matrix from the Pawley analysis. It is easily shown that

$$W_{hk} = \sum_i w_i \, p(x_i - x_h) \, p(x_i - x_k)$$

where $p(x_i - x_k)$ is the normalised peak shape for reflection $k$, which is situated at $x_k$. These weights are calculated as part of the Pawley analysis but are easily calculated independently, and therefore the above R-factors may also be derived from a Le Bail analysis. The integrated-intensity $\chi^2$ is again simply the square of the ratio of the weighted and expected R-factors:

$$\chi_I^2 = \frac{\sum_{h,k} (\langle J_h \rangle - J_h) W_{hk} (\langle J_k \rangle - J_k)}{n - P + C} = \left(\frac{R_I}{R_{E,I}}\right)^2. \quad (6)$$
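To make the weight-matrix definition concrete, the following sketch (hypothetical peak positions, a common Gaussian width and unit profile weights, all invented for illustration) evaluates $W_{hk} = \sum_i w_i\,p(x_i - x_h)\,p(x_i - x_k)$ for two strongly overlapped reflections:

```python
import numpy as np

# Sketch: the Pawley weight matrix W_hk = sum_i w_i p(x_i - x_h) p(x_i - x_k)
# for two overlapping Gaussian peaks (hypothetical positions and widths).
x = np.linspace(0.0, 10.0, 2001)         # profile abscissa
sigma = 0.3                              # common Gaussian peak width (assumption)

def p(dx):
    """Normalised Gaussian peak shape."""
    return np.exp(-0.5 * (dx / sigma)**2) / (sigma * np.sqrt(2.0 * np.pi))

x_peaks = [4.8, 5.2]                     # two nearly overlapped reflections
w_i = np.ones_like(x)                    # unit profile weights for simplicity

W = np.array([[np.sum(w_i * p(x - xh) * p(x - xk)) for xk in x_peaks]
              for xh in x_peaks])
print(W)
# Strong overlap gives large off-diagonal terms: the two intensities are
# highly correlated and only their "clumped" sum is well determined.
```

As the peak separation shrinks relative to the width, the off-diagonal element approaches the diagonal ones and $W$ becomes singular, which is precisely why only clumped intensities can be compared meaningfully.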

There is a strong argument that the estimated standard deviations of the structural parameters obtained from a Rietveld analysis should be multiplied by the square root of this $\chi^2$ function rather than, as is conventional, the square root of the Rietveld $\chi^2$. This usually leads to an additional inflation of between a factor of 2 and 4 for the estimate of the standard deviations of the structural parameters [4]. Interestingly, $\chi_I^2$ can be evaluated indirectly from a combination of Rietveld and Pawley analyses on a dataset. Within statistical errors, the numerator of the Rietveld $\chi^2$ function (i.e., the unnormalised Rietveld $\chi^2$ function) is equal to the sum of the unnormalised Pawley and integrated intensity $\chi^2$ functions [4], i.e.,

$$\sum_{i=1}^{N} w_i \left(y_i - M_i^{\mathrm{Rietveld}}\right)^2 \approx \sum_{i=1}^{N} w_i \left(y_i - M_i^{\mathrm{Pawley}}\right)^2 + \sum_{h,k} (\langle J_h \rangle - J_h) W_{hk} (\langle J_k \rangle - J_k). \quad (7)$$

In this section, we have shown that there are a plethora of R-factors and $\chi^2$ functions that may be used to evaluate the quality of, and the quality of fit to, a powder diffraction pattern. Probably the most useful set of quantities to use are the following:

* the background-subtracted, expected profile R-factors evaluated over (a) the full profile and (b) Bragg peaks only (two quantities)

* the background-subtracted, weighted profile Rietveld and Pawley (or Le Bail) R-factors evaluated over (a) the full profile and (b) Bragg peaks only (four quantities)

* the Rietveld and Pawley (or Le Bail) $\chi^2$ functions evaluated over (a) the full profile and (b) Bragg peaks only (two quantities)

* the expected and weighted integrated intensity R-factors and associated $\chi^2$ (three quantities)

These quantities together give an indication of how well the profile data are fitted using (a) only the unit cell, peak shape and other profile parameters (Pawley/Le Bail quantities) and (b) a structural model (Rietveld quantities). The quantities associated with the integrated intensities allow a broad comparison to be made with single crystal results.

As a final point in the discussion of R-factors, it is worth noting that while expected Rietveld R-factors will always improve with additional counting time, $t$ (indeed, it is straightforward to show from Eq. (2) that $R_E \propto 1/\sqrt{t}$), the weighted profile R-factor bottoms out at a constant value that does not improve with time. This happens because the model cannot fit the data any better and it is systematic errors that dominate the misfit. Indeed, David and Ibberson have shown that counting times are often an order of magnitude longer than necessary and that most datasets are probably over-counted--these conclusions corroborate earlier work by Baharie and Pawley [5,6].

3. The Cumulative $\chi^2$ Distribution

In the previous section, we showed that the Rietveld $\chi^2$ function was a good measure of the quality of fit to a powder diffraction pattern. Examining Eq. (4), it is clear that $\chi^2$ is the weighted sum of the squares of the difference between observed and calculated powder diffraction patterns. An auxiliary plot of the "difference/esd" underneath a fitted powder diffraction pattern gives a good idea of where the pattern is fitted well and where it is fitted poorly. Figure 1a shows the fitted diffraction pattern for cimetidine collected on station 2.3 at Daresbury. From the "difference/esd" plot, regions of misfit can clearly be seen around some of the strongest Bragg peaks between 22° and 24°. However, the "difference/esd" plot only gives a qualitative impression of how poor the fit is, even when the plot of the diffraction pattern is expanded (Fig. 1b). To assess the impact of a Bragg peak or a region of the diffraction pattern on the overall fit to the data, we need to assess the cumulative impact over that region. This can be achieved by plotting the cumulative chi-squared function, which is the weighted sum of the squares of the difference between observed and calculated powder diffraction patterns up to that point in the diffraction pattern. The cumulative chi-squared function at the $n$th point in the diffraction pattern is given by

$$\chi_n^2 = \frac{\sum_{i=1}^{n} w_i (y_i - M_i)^2}{N - P + C}. \quad (8)$$
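The cumulative function of Eq. (8) is straightforward to compute. The sketch below (an invented eight-point profile and model) shows how the running sum exposes the point that dominates the misfit:

```python
import math

# Cumulative chi-squared, Eq. (8): running weighted sum of squared residuals.
y = [100.0, 120.0, 400.0, 900.0, 380.0, 110.0, 105.0, 100.0]   # toy profile
M = [102.0, 118.0, 395.0, 840.0, 360.0, 108.0, 104.0, 101.0]   # toy model
N, P, C = len(y), 2, 0
w = [1.0 / yi for yi in y]       # counting-statistics weights

cum_chi2 = []
total = 0.0
for wi, yi, Mi in zip(w, y, M):
    total += wi * (yi - Mi)**2
    cum_chi2.append(total / (N - P + C))

print(cum_chi2)
# The steepest jump in the cumulative curve marks the point (here the
# strong peak at i = 3) that dominates the overall misfit.
```

In a real refinement the same running sum, plotted beneath the fitted pattern, turns a qualitative difference plot into a quantitative map of where the misfit accumulates.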

Examination of Fig. 1c shows that this function gives a clear indication of where the principal areas of misfit are in the powder diffraction pattern of cimetidine. The region from 22° to 24° is indeed the worst area of profile fit in the powder diffraction pattern. Around one third of the total $\chi^2$ value is attributable to this small region. Moreover, the first half of the pattern contributes $\approx 17/19 = 90\,\%$ of the total misfitting. The cumulative chi-squared plot clearly highlights the problems in fitting the cimetidine data and provides pointers to improving the fit to the data and hence obtaining an improved, more accurate structural model. Indeed, there are three directions that we can take to improve the quality of profile fitting:

[FIGURE 1 OMITTED]

(i) redo the experiment to count for shorter times at low two-theta values and for longer at higher two-theta values. This will reduce the cumulative $\chi^2$ contribution in the 22° to 24° region and up-weight the well-fitted high angle data (see Sec. 4.1).

(ii) develop an improved model to describe the diffraction pattern--a good example of this might be the inclusion of anisotropic line broadening.

(iii) downweight the regions of misfit if it proves difficult to obtain a simple model. (In the 22° to 24° region, the misfitting may occur as a consequence of disorder diffuse scattering--most codes do not include this effect.) In such cases, downweighting the misfitting points appropriately will lead to improved, less biased structural parameters (see Sec. 5.1 and Ref. [7]).

4. Assessing the Impact of Specific Regions of a Powder Diffraction Pattern

In the previous section, we discussed global measures of the quality of a Rietveld fit to a powder diffraction pattern. Ideally, we would like to be able to go further and devise an optimal methodology for collecting diffraction data. What parts of a powder diffraction pattern have the maximum impact on improving the quality of a crystal structure refinement? What parts of a diffraction pattern, for example, contribute most to the precise determination of anisotropic displacement parameters? The intuitive answer is that high angle reflections will be the most important but peak overlap will reduce this impact. In fact, both low and high angle regions (but, to a lesser extent, intermediate regions) are generally important. The counterintuitive importance of the low angle reflections results from the correlation of anisotropic displacement parameters with the scale factor. How does one then assess the impact of a single point in a diffraction pattern on the precision of a particular structural parameter? Prince and Nicholson showed for single crystal diffraction that the impact of individual reflections may be assessed statistically using standard least squares analysis [8]. Their procedure is easily extended to powder diffraction data [9].

The covariance matrix, $V$, obtained from Rietveld analysis is the best measure of the precision and correlation of the refined parameters, $p_j$, $j = 1, \ldots, N_{\mathrm{par}}$, from a powder diffraction pattern containing $N_{\mathrm{obs}}$ points; $x_i$, $y_i$ and $\sigma_i$ are, respectively, the position, profile value and estimated standard deviation of the $i$th point in the pattern, which is modelled by a function value, $M_i$. The covariance matrix, $V$, is the inverse of the Hessian matrix, $H$, which may be expressed as $H = A^T w A$, where the elements of $A$ are $A_{ij} = \partial M_i / \partial p_j$ and $w$ is the weight matrix, which is usually diagonal with elements $w_{ii} = 1/\sigma_i^2$. Forming the matrix $Z$ with elements $Z_{ij} = (1/\sigma_i)\, \partial M_i / \partial p_j$ means that the Hessian matrix may be written as $H = Z^T Z$. From this $Z$ matrix, the projection matrix, $P$, may be formed from the equation $P = Z (Z^T Z)^{-1} Z^T$ [8]. This matrix, although not often discussed in least squares analysis, has a number of important properties. Most notably, the on-diagonal element, $P_{ii}$, is the leverage of a data point and has a value between zero and one. A high leverage means that a data point plays an important role in the overall model fitting and vice versa. There is, however, another significant quantity for the analysis of the variance of a particular parameter.

Consider the impact on a particular element $V_{rs}$ of the covariance matrix if the $i$th data point is collected for a fraction $\alpha_i$ longer. The Hessian matrix is modified as follows: $H' = H + \alpha_i z^T z$, where the row vector $z$ has elements $z_j = (1/\sigma_i)\, \partial M_i / \partial p_j$. Since the Hessian and covariance matrices are the inverses of each other, the change in the covariance matrix may be shown to be

$$V' = V - \frac{\alpha_i \, V z^T z V}{1 + \alpha_i \, z V z^T} \quad (9)$$

This equation may be simplified when it is recognised that $z V z^T = P_{ii}$. Putting the vector $t = zV$ implies that $(V z^T z V)_{rs} = (zV)_r (zV)_s = t_r t_s$ and thus, as long as $\alpha_i$ is small, all the elements of the parameter covariance matrix are altered as follows:

$$V'_{rs} = V_{rs} - \frac{\alpha_i \, t_r t_s}{1 + \alpha_i P_{ii}} \cong V_{rs} - \alpha_i \, t_r t_s. \quad (10)$$

The product $t_r t_s$ is thus a measure of the impact of the $i$th point on element $rs$ of the covariance matrix. In particular, $t_j^2$ is a measure of the importance of the $i$th data point on the $j$th parameter; a large value of $t_j^2$ leads to a substantial reduction in the parameter variance and a concomitant improvement in precision. The quantity

$$t_r(i) = (zV)_r = \frac{1}{\sigma_i} \sum_j \frac{\partial M_i}{\partial p_j}\, V_{jr} \quad (11)$$

is perhaps more informative than its square as it provides information about the sense of the $i$th data point contribution to the covariance terms. Its relationship to the covariance matrix is essentially identical to the relationship between the residual (observed − calculated)/(estimated standard deviation) and the overall $\chi^2$ goodness of fit. A specific example (1) of the use of the t-matrix to determine the significance of different parts of a powder diffraction pattern is discussed in Ref. [9].
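Eqs. (9)-(11) can be checked numerically. The sketch below uses a hypothetical straight-line model with unit standard deviations (invented for illustration), forms $t = zV$, applies the exact rank-one update and compares it with the first-order approximation:

```python
import numpy as np

# Numerical check of Eqs. (9)-(10): impact of counting point i a fraction
# alpha longer, for a toy straight-line model M_i = p0 + p1 * x_i.
x = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
sigma = np.ones_like(x)
Z = np.column_stack([1.0 / sigma, x / sigma])

H = Z.T @ Z                   # Hessian, H = Z^T Z
V = np.linalg.inv(H)          # covariance matrix

i, alpha = 4, 1e-4            # point index and (small) extra counting fraction
z = Z[i:i+1, :]               # row vector z for point i
t = (z @ V).ravel()           # t = zV, Eq. (11)
P_ii = float(z @ V @ z.T)     # leverage of point i, z V z^T

# Exact update, Eq. (9), versus first-order approximation, Eq. (10)
V_exact = np.linalg.inv(H + alpha * (z.T @ z))
V_approx = V - alpha * np.outer(t, t)

print(np.max(np.abs(V_exact - V_approx)))   # O(alpha^2): tiny for small alpha
```

The exact form of Eq. (9) is just the Sherman-Morrison identity for a rank-one modification of the Hessian, which is why the update never requires a second matrix inversion.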

4.1 Variable Counting Time Protocols for X-Ray Powder Diffraction Data Collection

The use of $t_r(i)$ as a diagnostic for determining accurate structural parameters depends on whether we believe that the errors in our data are well understood or not. If we are sure that the sources of the errors in our data are all known--the simplest case is the belief that the only sources of uncertainty are from counting statistics--then we will target those points in the diffraction pattern that have the maximum values of $t_r(i)$, since these will be the points that reduce the estimated standard deviations of a parameter by the greatest amount. It is intuitively obvious that we will get the most precise assessment of the area of a peak by counting for longest at the top of the peak, and that we will get the best indication of the peak position by counting at the points of maximum gradient change on the peak. These conclusions, however, do depend on us knowing with complete confidence what the peak shape is. This, in turn, means that we can only use these maximum impact points if we not only know the source of all our experimental errors but also have complete confidence in our model. While this may often be true for neutron powder diffraction data, it is generally not the case for x-ray diffraction, and patterns such as those shown for cimetidine in Fig. 1 are the norm rather than the exception. If we were entirely confident about the sources of misfit in our low-angle diffraction data then we would count for longer at low angles, since this offers the prospect of reducing the terms in the covariance matrix by the largest amount. If we are uncertain about our data errors and the sufficiency of our model then we have to take an alternative approach to the problem that is effectively the opposite of the argument when the errors are known. If we have an intense Bragg peak at low angles and are uncertain about our errors, then $t_r(i)$ tells us that the variance terms will reduce substantially but unfortunately in an incorrect way. We will have a more precise result but a less accurate one. Indeed, as the variance terms reduce, we will be faced with a result that may be increasingly more precise while at the same time decreasingly accurate. To obtain accurate results in the face of uncertain errors, our best approach is to distribute the errors as evenly as possible across all the Bragg peaks. This means counting for substantially longer at higher angles. There are two published methods for deciding how to vary the counting time across the diffraction pattern [1,4,10].
Both approaches lead to essentially identical protocols, and both lead to the important conclusion that higher-angle parts of the diffraction pattern may have to be counted for more than 30 times longer than low-angle regions. To explain the rationale for longer counting times, we follow the approach of David [4] and Shankland, David and Sivia [10], which was developed with a view to improving the chances of structure solution. The rationale is based upon one of the central formulae of Direct methods, the tangent formula, which determines the probable relationship between the phases φ(h), φ(k) and φ(h-k):

\tan[\varphi(\mathbf{h})] \cong \frac{\sum_{\mathbf{k}} (\sigma_3/\sigma_2^{3/2})\, E(\mathbf{h})E(\mathbf{k})E(\mathbf{h}-\mathbf{k}) \sin[\varphi(\mathbf{k}) + \varphi(\mathbf{h}-\mathbf{k})]}{\sum_{\mathbf{k}} (\sigma_3/\sigma_2^{3/2})\, E(\mathbf{h})E(\mathbf{k})E(\mathbf{h}-\mathbf{k}) \cos[\varphi(\mathbf{k}) + \varphi(\mathbf{h}-\mathbf{k})]} \qquad (12)

where \sigma_n = \sum_{i=1}^{N} [f_i(|\mathbf{h}| = 0)]^n and the normalised structure factor, E(h), is related to the integrated intensity, I(\mathbf{h}) = j(\mathbf{h})|F(\mathbf{h})|^2, by the equation |E(\mathbf{h})|^2 = I(\mathbf{h})/\sum_{j=1}^{N} g_j^2(\mathbf{h}). (2)
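
The normalisation of |E|² can be made concrete. The Python sketch below converts an integrated intensity into |E|² using the footnote definition g_j(h) = f_j(h)exp(-B/4d²); the single-Gaussian form factors and their coefficients are illustrative assumptions, not tabulated values:

```python
import math

def g_squared_sum(d_spacing, atoms, b_overall):
    """Sum of g_j^2 at resolution d, with g_j = f_j * exp(-B/(4 d^2)).

    atoms: list of (f0, b) pairs for a crude Gaussian form factor
    f(s) = f0 * exp(-b * s^2), where s = sin(theta)/lambda = 1/(2d).
    """
    s2 = 1.0 / (4.0 * d_spacing ** 2)  # (sin(theta)/lambda)^2 from Bragg's law
    return sum((f0 * math.exp(-b * s2) * math.exp(-b_overall * s2)) ** 2
               for f0, b in atoms)

def e_squared(intensity, d_spacing, atoms, b_overall=3.0):
    """|E(h)|^2 = I(h) / sum_j g_j^2(h)."""
    return intensity / g_squared_sum(d_spacing, atoms, b_overall)
```

Because the sum of g² falls steeply with decreasing d, a weak high-angle peak can still correspond to a large |E|; keeping the fractional error in E constant is therefore what drives the long high-angle counting times.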

We simply require that the fractional error in E(h) should be independent of where the reflection is in the diffraction pattern. This, in turn, means that all components of the summations in the tangent formula will on average be determined with equal precision. When we collect a powder diffraction pattern, the Bragg peak area, A(h), is not the integrated intensity itself but is modified by geometrical, absorption and extinction terms. If we know that absorption and extinction effects are severe, then we should include them in evaluating the variable collection strategy. If, however, we work under the simpler assumption that these effects are small, then A(h) = L_p I(h), where L_p is the Lorentz polarisation correction, and we will count normalised structure factors, E(h), with equal precision across a powder diffraction pattern if we offset the combined effects of L_p, the form-factor fall-off and the Debye-Waller effects of thermal motion, i.e., t(2θ) ∝ 1/[L_p(2θ) Σ_j g_j²(2θ)], where we have explicitly used a 2θ dependence. For the case of Bragg-Brentano geometry on a laboratory-based x-ray powder diffractometer, this becomes

t(\theta) \propto \frac{(\sin\theta \sin 2\theta)\,(1 + \cos^2 2\alpha)}{(1 + \cos^2 2\alpha \cos^2 2\theta)\, f_{\mathrm{av}}^2(\theta)\, \exp(-2B_{\mathrm{av}} \sin^2\theta/\lambda^2)} \qquad (13a)

where f_av is a representative atomic scattering factor (e.g., carbon), B_av is an estimated overall Debye-Waller factor, λ is the incident wavelength and 2α is the monochromator take-off angle. For the case of Debye-Scherrer geometry on a synchrotron x-ray powder diffractometer, this simplifies to

t(\theta) \propto \frac{\sin\theta \sin 2\theta}{f_{\mathrm{av}}^2(\theta)\, \exp(-2B_{\mathrm{av}} \sin^2\theta/\lambda^2)}. \qquad (13b)
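
Equations (13a) and (13b) translate directly into code. In the Python sketch below the carbon form factor is approximated by a single Gaussian with invented coefficients, and the wavelength and overall B value are arbitrary illustrative choices; a real counting scheme would use tabulated form factors:

```python
import math

def f_carbon(s):
    """Crude single-Gaussian stand-in for the carbon form factor; s = sin(theta)/lambda."""
    return 6.0 * math.exp(-20.0 * s * s)

def t_bragg_brentano(theta, two_alpha, lam=1.54, b_av=3.0):
    """Relative counting time of Eq. (13a) for laboratory Bragg-Brentano geometry."""
    s = math.sin(theta) / lam
    cos2a = math.cos(two_alpha) ** 2
    numerator = math.sin(theta) * math.sin(2.0 * theta) * (1.0 + cos2a)
    denominator = ((1.0 + cos2a * math.cos(2.0 * theta) ** 2)
                   * f_carbon(s) ** 2
                   * math.exp(-2.0 * b_av * s * s))
    return numerator / denominator

def t_debye_scherrer(theta, lam=1.0, b_av=3.0):
    """Relative counting time of Eq. (13b) for synchrotron Debye-Scherrer geometry."""
    s = math.sin(theta) / lam
    return (math.sin(theta) * math.sin(2.0 * theta)
            / (f_carbon(s) ** 2 * math.exp(-2.0 * b_av * s * s)))
```

Normalising t(θ) by its low-angle value reproduces the strong rise in counting time with angle shown in Fig. 2.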

The variable counting time schemes for these two typical diffractometer settings are shown in Fig. 2. Both the laboratory and synchrotron variations show that the counting times at intermediate angles should be substantially longer than at low angles and in extreme backscattering. Interestingly, the 2θ variations of the variable counting time schemes are dominated as much by the Lorentz polarisation correction as by the form-factor fall-off and the Debye-Waller variation. Indeed, at low angles the principal effects are associated with the Lorentz polarisation correction. All three effects combine to create a substantial variation in counting time as a function of 2θ. Figure 3 compares the constant counting time pattern (Fig. 3a) with the raw counts obtained using the variable counting time protocol (Fig. 3b) for the drug compound chlorothiazide. The Bragg peaks at high angle appear to be of the same intensity as the low-angle reflections--all the Bragg peaks in this diffraction pattern have been reliably determined. This proved crucial in the successful structure solution of the compound using Direct methods, as large numbers of reliable triplet phase relationships could be formed [10]. A further indication of the importance of using a variable counting time scheme can be seen from the analysis of the cumulative chi-squared distribution for the refinement of the structure of famotidine (Fig. 4). The overall chi-squared is low (~1.6), showing that a good fit has been achieved over the full diffraction pattern. Moreover, the cumulative chi-squared distribution forms an essentially straight line over the full pattern, indicating that all regions are fitted equally well and, as a corollary, that the errors are evenly distributed over all the reflections. This is an important point, as it follows that the effects of systematic errors must be substantially diminished compared with, for example, the case of cimetidine (see Fig. 1c).

[FIGURE 2 OMITTED]

[FIGURE 3 OMITTED]

[FIGURE 4 OMITTED]

5. Beyond Least-Squares Analysis

In the previous sections, we discussed from a statistical point of view how to assess the limitations of a Rietveld analysis and how to overcome these problems through the use of, for example, variable counting time protocols. What happens when areas of the diffraction pattern are still not fitted well despite a careful experiment? If the misfit results from additional scattering from an unattributed impurity phase, then we can formulate this within the context of Bayesian probability theory and develop an appropriate refinement procedure. If we have no real idea what has caused the misfitting--it may, for example, be lineshape effects, imperfect powder statistics or diffuse scattering--then we have to develop a catch-all probabilistic procedure for addressing the problem. If the misfitting involves a small proportion of the data, then we can develop a robust method of improving the accuracy of our results. At the same time, however, our precision decreases because we have allowed for more sources of uncertainty than in a standard least-squares analysis. The approach used in this paper follows that of Sivia, who aptly described the problem as one of "dealing with duff data" [11].

5.1 Dealing With Duff Data

When we observe misfitting in a powder diffraction pattern, our first assumption is that the structural model used to describe the data is not quite optimised. However, we often find that, despite our best attempts, the data never fit well across the full diffraction pattern and we are left with regions of misfit that may well be introducing systematic errors into our results. If we understand the source of this misfit--it may, for example, be an unattributable impurity phase--then we may be able to develop a suitably specific maximum likelihood refinement protocol. However, when we are unable to postulate a suitable explanation for the misfitting, we must develop a very general probabilistic approach, as has been done previously [11,12]. If we take a standard point in our diffraction pattern that has, say, 400 counts, we know from Gaussian counting statistics that our expected standard deviation will be around 20 counts. If we proceed through to the end of our least-squares analysis with this assumption, then we are making a very definite statement about our errors. We are saying categorically that we know all the sources of our errors and that they result only from counting statistics. Put in these terms, this is a bold assertion. Fortunately, in most Rietveld analyses (and particularly in the area of neutron powder diffraction) it is a fair statement to make. However, we will show that even with good refinements, we can improve our accuracy (at the expense of some precision) by using a more robust algorithm.

One thing we can say for sure when we have collected a point in our diffraction pattern with μ = 400 counts is that the uncertainty in our measurement cannot be less than 20 counts--but it could be more. We must generate a probability distribution for our uncertainty--after all, we are no longer certain about our uncertainties. A good distribution, because it has the property of scale invariance, is the Jeffreys distribution, 1/σ, for all values σ ≥ √μ. This probability distribution for our uncertainty is shown in Fig. 5a. The corresponding likelihood for the data is obtained by integrating over this distribution

p(D|\mu, \sigma \ge \sigma_{\min}) \propto \int_{\sigma_{\min}}^{\infty} \frac{1}{\sigma} \cdot \frac{1}{\sigma\sqrt{2\pi}} \exp\!\left[-\frac{(D-\mu)^2}{2\sigma^2}\right] \mathrm{d}\sigma \qquad (14)

which leads not to a Gaussian likelihood but to an error-function distribution

p(D|\mu, \sigma \ge \sigma_{\min}) \propto \frac{1}{2(D-\mu)}\, \mathrm{erf}\!\left[\frac{D-\mu}{\sigma_{\min}\sqrt{2}}\right]. \qquad (15)

This is shown in Fig. 5b. The negative log-likelihood, which gives a direct comparison with the least-squares distribution, is shown in Fig. 5c. For large positive and negative deviations between observed and calculated data, the penalty no longer follows a quadratic form but rather a logarithmic one. Large deviations have less impact on this robust modified χ² function, while small deviations are treated just as in standard least-squares (albeit with a shallower distribution arising from our poorer state of knowledge about our uncertainties).
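Equation (15) is straightforward to evaluate numerically. The Python sketch below (an illustration, not code from the paper) compares the robust negative log-likelihood, shifted so that it vanishes at obs = calc, with the standard least-squares penalty:

```python
import math

SQRT_2_OVER_PI = math.sqrt(2.0 / math.pi)  # limit of erf(x/sqrt(2))/x as x -> 0

def robust_penalty(d, mu, sigma_min):
    """Negative log of Eq. (15), normalised so the penalty is zero at d == mu."""
    x = abs(d - mu) / sigma_min
    if x < 1e-12:
        return 0.0
    likelihood = math.erf(x / math.sqrt(2.0)) / x  # proportional to Eq. (15)
    return -math.log(likelihood / SQRT_2_OVER_PI)

def least_squares_penalty(d, mu, sigma):
    """Standard Gaussian contribution, chi-squared over two."""
    return 0.5 * ((d - mu) / sigma) ** 2
```

For a 10σ outlier the robust penalty is roughly 2, against 50 for least squares, which is precisely how large deviations lose their leverage on the fit.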

We illustrate the use of this robust statistic for the case of a high resolution x-ray powder diffraction pattern of urea collected on BM16 at the ESRF, Grenoble. Standard least-squares analysis leads to a satisfactory weighted profile χ² of ~3.7. However, examination of the cumulative χ² plot (Fig. 6) shows that almost a quarter of the misfit is associated with the strongest Bragg peak. This could result from preferred orientation, detector saturation or particle statistics--we don't know. The cumulative robust χ² distribution, on the other hand, contains no such bias towards this single peak. Indeed, the linear variation of the cumulative robust χ² distribution across the full pattern gives a reassuring degree of confidence in this modified least-squares approach. Moreover, a comparison of the structural parameters from the conventional and robust least-squares approaches with single crystal data (Table 1) convincingly shows the benefits of the robust metric, which automatically downweights bad data. With conventional least-squares, the results are good and the estimated standard deviations are small. However, nine of the fourteen structural parameters are more than four standard deviations from their single crystal counterparts, indicating that the accuracy of the parameters obtained from the least-squares analysis does not measure up to their precision. On the other hand, only one of the structural parameters from the robust analysis is more than 4σ away from the corresponding single crystal value. The parameter changes between the least-squares and robust analyses are modest. However, the differences are real, and the improvements in accuracy, when benchmarked against the single crystal parameters, are significant. While it is dangerous to extrapolate from a single example, the underlying statistical framework is sound and suggests that, when significant jumps are found in a cumulative chi-squared plot, a robust analysis is worthwhile.

[FIGURE 5 OMITTED]

[FIGURE 6 OMITTED]

5.2 Refinement in the Presence of Unattributable Impurity Phases

What do you do when you want to perform a Rietveld analysis of a particular material but have a substantial impurity phase that, despite your best attempts, you can neither remove from your sample nor index from your diffraction pattern? Conventional wisdom states that your chances of obtaining unbiased structural parameters are poor and that the best you can do is manually exclude the offending impurity peaks. Standard Rietveld programs based upon a least-squares refinement algorithm cannot cope in an unbiased manner with an incomplete model description of the data. This is just the situation where Bayesian probability theory can come to the rescue. We can ask the question, "How do I perform a refinement on a powder diffraction pattern when I know that there is an impurity phase present but have no idea what that impurity phase may be?" This question is equivalent to stating that the diffraction pattern contains a component that can be modelled (known phases + background) and an additional positive, unknown contribution. It turns out that enforcing the positivity of the unknown component as an additive contribution is sufficient to produce excellent results [7].

The mathematical development of these ideas has been presented elsewhere and results in a modified χ² goodness-of-fit function that is shown in Fig. 7 [7,13].

For observed data that are less than the model function, the new goodness of fit behaves essentially identically to the standard χ². This is to be expected, since such points are unlikely to be associated with an impurity contribution. On the other hand, when the observed data value is substantially greater than the fitted model value, the new goodness of fit brings a substantially smaller penalty (the function varies logarithmically) than the quadratic behaviour of the standard χ². Again, this is just what is required to minimise the impact of any impurity phase. Note also that the curvature of the new goodness of fit is shallower than that of the standard χ². This means that quoted standard deviations will be higher for refinements using the new goodness of fit. This is to be expected, as the allowance for an impurity phase brings a greater uncertainty into the model parameter values.
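
The qualitative behaviour just described can be reproduced by marginalising a Gaussian likelihood over a positive impurity amplitude A with a cut-off Jeffreys prior. This is only an illustrative numerical sketch -- the cut-offs a_min and a_max are ad hoc assumptions, and the published derivation [7,13] is analytic -- but the asymmetry of the resulting penalty is the same:

```python
import math

def marginal_nll(d, mu, sigma, a_min=0.01, a_max=100.0, n=2000):
    """-log of p(d|mu), marginalised over an impurity amplitude A with prior 1/A.

    Integrates exp(-(d - mu - A)^2 / (2 sigma^2)) dA/A over [a_min, a_max]
    by the trapezoidal rule in ln A (the Jeffreys prior makes dA/A = d ln A).
    """
    lo, hi = math.log(a_min), math.log(a_max)
    h = (hi - lo) / n
    total, prev = 0.0, None
    for i in range(n + 1):
        a = math.exp(lo + i * h)
        val = math.exp(-0.5 * ((d - mu - a) / sigma) ** 2)
        if prev is not None:
            total += 0.5 * (prev + val) * h
        prev = val
    return -math.log(total)
```

A point well below the model is penalised heavily, much as in ordinary χ², while the penalty for points above the model grows only logarithmically with the size of the excess -- exactly the one-sided tolerance an unknown impurity requires.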

[FIGURE 7 OMITTED]

[FIGURE 8 OMITTED]

Diffraction patterns of yttria and rutile were collected on HRPD at ISIS. Results for the 5% yttria : 95% rutile mixture are shown in Fig. 9. (The fitted diffraction pattern of pure yttria is shown in Fig. 8 for comparison.) In order to accentuate the difference between the new goodness-of-fit function and standard least-squares analysis, we have chosen to refine the minority yttria phase, treating the majority rutile phase as the impurity (see Fig. 9a). The excellent fit to the data for the modified χ² is shown in Fig. 9b, where we have graphically down-weighted the observed points that contribute least to the goodness of fit. This emphasises what the algorithm is effectively doing--large positive (obs-calc)/esd values are essentially ignored. In effect, the algorithm is optimally excluding those regions that do not contribute to the model. The relative calculated peak intensities agree very well with the results for pure yttria (Fig. 8). Least-squares analysis (Fig. 9c) produces a completely different result--all points are considered, with no downweighting for possible impurities. The first obvious effect is that the refined background is too high. The reason is clear: the strong impurity peaks lift up the model fit. The relative peak intensities are, however, also very different from the correct values, suggesting that the refined structural parameters are substantially in error. This is indeed the case and is borne out by analysis of the refined yttrium and oxygen coordinates, which are shown graphically in Fig. 10 as a function of yttria content. We briefly consider the other refined parameters (a fuller analysis is given in Ref. [7]). The scale factor is correct to within its estimated standard deviation (esd) for the robust analysis but behaves wildly for the standard least squares, exceeding 1000% at 25% yttria content. The least-squares analysis of the lattice constant also becomes increasingly unreliable as the refinement locks onto peaks associated with rutile as well as yttria. On the other hand, the lattice constant from the robust refinement is satisfyingly stable; the esds increase as the yttria content decreases (the 5% esd is some five times larger than the 100% value) but all results lie within a standard deviation of the correct result.

[FIGURE 9 OMITTED]

[FIGURE 10 OMITTED]


5.3 Summary of Maximum Likelihood Refinement Algorithms

Least-squares Rietveld analysis is the best and least-biased method of structure refinement from a powder diffraction pattern when the data can be fully modelled. However, when there is an unmodelled contribution in the diffraction pattern, least-squares analysis gives biased results. In the impurity phase example discussed in this contribution, significant deviations from the correct parameter values occur when there is as little as a 10% impurity contribution. At higher impurity levels, least-squares analysis is completely unreliable. These problems may, however, be overcome if the existence of an unknown impurity contribution is built into the refinement algorithm. While it might seem a logical inconsistency to build in information about an unknown contribution, Bayesian probability theory provides a framework for doing just this. Only two broad assumptions are necessary to derive an appropriate modified probability distribution function: (i) that the impurity contribution must be intrinsically positive, and (ii) that its magnitude, A, is unknown and thus best modelled by a Jeffreys prior, given by p(A|I) ∝ 1/A for A > 0 and p(A|I) = 0 for A ≤ 0. This produces a modified χ² function (see Fig. 7) that effectively excludes the impact of impurity peaks.

The results discussed briefly in this contribution, and more extensively in Ref. [13], show that the improvement over conventional least-squares analysis is dramatic. Indeed, even in the presence of very substantial impurity contributions (see Fig. 10), the refined structural parameters are within a standard deviation of their correct values.

It must, however, be stated as a final caveat that care should be taken with this approach and the use of an algorithm that can cope with the presence of impurities should be seen as a last resort. Indeed, every effort should be made to determine all the phases in a sample. It is much more desirable to include the impurity phase in a standard Rietveld refinement.
Table 1. Structural parameters obtained for urea from single crystal
results and from high-resolution powder diffraction data. Two separate
analyses were performed on the powder diffraction data. The single
crystal (SXXD) values are listed in column 2. Results from a standard
least-squares analysis are shown in column 3, with their differences
from the single crystal results in column 4. The results from the
robust analysis are listed in column 5, with their differences from the
single crystal results in the final, sixth column. The shaded cells
indicate discrepancies that are beyond 4σ

                  SXXD       Least squares   LS-SXXD       Robust        R-SXXD

C1 z           0.3328(3)    0.3236(9)      -0.0092(10)   0.3319(13)   -0.0009(14)
O1 z           0.5976(4)    0.6013(5)       0.0037(6)    0.5984(7)     0.0008(8)
N1 x           0.1418(2)    0.1405(3)      -0.0013(4)    0.1423(7)     0.0005(7)
   z           0.1830(2)    0.1807(5)      -0.0023(6)    0.1813(7)    -0.0017(7)
C1 U11         0.0353(6)    0.0348(20)     -0.0005(20)   0.0329(40)   -0.0024(40)
   U33         0.0155(5)    0.0396(30)      0.0241(30)   0.0413(40)    0.0258(40)
   U12         0.0006(9)    0.0205(30)      0.0199(30)   0.0128(40)    0.0122(40)
O1 U11         0.0506(9)    0.0749(16)      0.0243(18)   0.0617(30)    0.0111(30)
   U33         0.0160(6)    0.0080(14)     -0.0080(15)   0.0090(20)   -0.0070(20)
   U12         0.0038(18)   0.0052(20)      0.0014(30)  -0.0011(35)   -0.0049(35)
N1 U11         0.0692(6)    0.0627(15)     -0.0065(18)   0.0697(25)    0.0005(25)
   U33         0.0251(4)    0.0460(22)      0.0211(22)   0.0365(30)    0.0114(30)
   U12        -0.0353(7)   -0.0252(18)      0.0101(20)  -0.0361(30)   -0.0008(30)
   U13        -0.0003(3)   -0.0015(11)     -0.0012(12)  -0.0029(15)   -0.0026(15)


Acknowledgments

The author wishes to acknowledge Dr. A. J. Markvardsen, Dr. K. Shankland and Dr. D. S. Sivia for stimulating discussions about probability theory and powder diffraction.

Accepted: April 11, 2003

Available online: http://www.nist.gov/jres

(1) This example concerns the analysis of orientational order in [C.sub.60] from neutron powder diffraction data. The t-matrix is used to show that the deviations from spherical symmetry of the orientation distribution function of [C.sub.60] in the high-temperature phase can be well modelled using neutron powder diffraction data and that powder averaging is quite different from spherical averaging.

(2) F(\mathbf{h}) = \sum_{j=1}^{N} g_j(\mathbf{h}) \exp(2\pi i\, \mathbf{h} \cdot \mathbf{r}_j) and g_j(\mathbf{h}) = f_j(\mathbf{h}) \exp(-B_{\mathrm{av}}/4d^2).

6. References

[1] I. C. Madsen and R. J. Hill, J. Appl. Cryst. 27, 385-392 (1994).

[2] W. I. F. David, Accuracy in Powder Diffraction-II, Abstract P2.6 NIST Special Publication 846, 210, NIST, Gaithersburg, MD, USA (1992).

[3] A. Antoniadis, J. Berruyer, and A. Filhol, Acta Cryst. A46, 692-711 (1990).

[4] W. I. F. David (submitted to J. Appl. Cryst.).

[5] W. I. F. David and R. M. Ibberson, Accuracy in Powder Diffraction-III, Abstract P2.6 (2001).

[6] E. Baharie and G. S. Pawley, J. Appl. Cryst. 16, 404-406 (1983).

[7] W. I. F. David, J. Appl. Cryst, 34, 691-698 (2001).

[8] E. Prince and W. L. Nicholson, Structure and Statistics in Crystallography, A. J. C. Wilson, ed., Adenine Press (1985) pp. 183-195.

[9] W. I. F. David, R. M. Ibberson, and T. Matsuo, Proc. Roy. Soc. London A442 129-146 (1993).

[10] K. Shankland, W. I. F. David, and D. S. Sivia, J. Mater. Chem. 7, 569-572 (1997).

[11] D. S. Sivia, Dealing with Duff Data, in Proceedings of the Maximum Entropy Conference, M. Sears, V. Nedeljkovic, N. E. Pendock, and S. Sibisi, eds., NMB Printers, Port Elizabeth, South Africa (1996) pp. 131-137.

[12] G. E. P. Box and C. G. Tiao, Biometrika 55, 119-129 (1968).

[13] W. I. F. David and D. S. Sivia, J. Appl. Cryst. 34, 318-324 (2001).

W. I. F. David

ISIS Facility, Rutherford Appleton Laboratory, Chilton, Oxon, OX11 0QX, U.K.

Bill.David@rl.ac.uk

About the author: Bill David is currently the Senior Research Fellow at the ISIS spallation neutron source at the Rutherford Appleton Laboratory and is also the Associate Director of Research Networks for CLRC. His research career spans over 25 years, from his early work on ferroelastic materials in the Clarendon Laboratory, Oxford, to his current research in the fields of neutron and x-ray scattering, structural physics, and crystallography.
COPYRIGHT 2004 National Institute of Standards and Technology

Article Details
Author:David, W.I.F.
Publication:Journal of Research of the National Institute of Standards and Technology
Date:Jan 1, 2004
Words:7859