In the design of experiments, optimal designs are a class of experimental designs that are optimal with respect to some statistical criterion. The creation of this field of statistics has been credited to Danish statistician Kirstine Smith.^{[2]}^{[3]}
In the design of experiments for estimating statistical models, optimal designs allow parameters to be estimated without bias and with minimum variance. A non-optimal design requires a greater number of experimental runs to estimate the parameters with the same precision as an optimal design. In practical terms, optimal experiments can reduce the costs of experimentation.
The optimality of a design depends on the statistical model and is assessed with respect to a statistical criterion, which is related to the variance matrix of the estimator. Specifying an appropriate model and specifying a suitable criterion function both require understanding of statistical theory and practical knowledge of designing experiments.
Optimal designs are also called optimum designs.^{[4]}
Contents

1 Advantages
2 Minimizing the variance of estimators
3 Implementation
4 Practical considerations
4.1 Model dependence and robustness
4.2 Choosing an optimality criterion and robustness
4.2.1 Flexible optimality criteria and convex analysis
4.3 Model uncertainty and Bayesian approaches
4.3.1 Model selection
4.3.2 Bayesian experimental design
5 Iterative experimentation
5.1 Sequential analysis
5.2 Response-surface methodology
5.3 System identification and stochastic approximation
6 Specifying the number of experimental runs
6.1 Using a computer to find a good design
6.2 Discretizing probability-measure designs
7 History
8 See also
9 Notes
10 References
11 Further reading
11.1 Textbooks for practitioners and students
11.1.1 Textbooks emphasizing regression and response-surface methodology
11.1.2 Textbooks emphasizing block designs
11.2 Books for professional statisticians and researchers
11.3 Articles and chapters
11.4 Historical
Advantages
Optimal designs offer three advantages over suboptimal experimental designs:^{[5]}

Optimal designs reduce the costs of experimentation by allowing statistical models to be estimated with fewer experimental runs.

Optimal designs can accommodate multiple types of factors, such as process, mixture, and discrete factors.

Designs can be optimized when the design space is constrained, for example, when the mathematical process space contains factor settings that are practically infeasible (e.g. due to safety concerns).
Minimizing the variance of estimators
Experimental designs are evaluated using statistical criteria.^{[6]}
It is known that the least squares estimator minimizes the variance of mean-unbiased estimators (under the conditions of the Gauss–Markov theorem). In the estimation theory for statistical models with one real parameter, the reciprocal of the variance of an ("efficient") estimator is called the "Fisher information" for that estimator.^{[7]} Because of this reciprocity, minimizing the variance corresponds to maximizing the information.
When the statistical model has several parameters, however, the mean of the parameter estimator is a vector and its variance is a matrix. The inverse of the variance matrix is called the "information matrix". Because the variance of the estimator of a parameter vector is a matrix, the problem of "minimizing the variance" is complicated. Using statistical theory, statisticians compress the information matrix using real-valued summary statistics; being real-valued functions, these "information criteria" can be maximized.^{[8]} The traditional optimality criteria are invariants of the information matrix; algebraically, the traditional optimality criteria are functionals of the eigenvalues of the information matrix.
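For a concrete (hypothetical) illustration, consider straight-line regression with two parameters. The information matrix is X'X for the design matrix X, and its inverse is proportional to the covariance matrix of the least-squares estimator; different real-valued summaries of the same matrix lead to the different criteria below. A minimal sketch in Python with NumPy, with an invented four-run design:

```python
import numpy as np

# Hypothetical design for the straight-line model y = b0 + b1*x:
# four runs, two at each endpoint of the interval [-1, 1].
x = np.array([-1.0, -1.0, 1.0, 1.0])
X = np.column_stack([np.ones_like(x), x])  # design (model) matrix

M = X.T @ X           # information matrix (up to the error variance sigma^2)
C = np.linalg.inv(M)  # proportional to the covariance matrix of the estimator

# The "variance" is a matrix, so real-valued summaries are compared instead:
det_M = np.linalg.det(M)   # determinant of the information matrix
avg_var = np.trace(C)      # trace of the inverse (sum of coefficient variances)
```

Here `det_M` is 16 and `avg_var` is 0.5; maximizing the former and minimizing the latter are two different ways of "minimizing the variance matrix".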

A-optimality ("average" or trace)

One criterion is A-optimality, which seeks to minimize the trace of the inverse of the information matrix. This criterion results in minimizing the average variance of the estimates of the regression coefficients.

D-optimality (determinant)

A popular criterion is D-optimality, which seeks to maximize the determinant of the information matrix of the design.

E-optimality (eigenvalue)

Another criterion is E-optimality, which maximizes the minimum eigenvalue of the information matrix.

T-optimality

This criterion maximizes the trace of the information matrix.
Other optimality criteria are concerned with the variance of predictions:

G-optimality

A popular criterion is G-optimality, which seeks to minimize the maximum entry in the diagonal of the hat matrix X(X'X)^{−1}X'. This has the effect of minimizing the maximum variance of the predicted values.

I-optimality (integrated)

A second criterion on prediction variance is I-optimality, which seeks to minimize the average prediction variance over the design space.

V-optimality (variance)

A third criterion on prediction variance is V-optimality, which seeks to minimize the average prediction variance over a set of m specific points.^{[9]}
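To make the criteria above concrete, the following Python sketch (using NumPy; the design settings are invented for illustration, not taken from any reference) evaluates the A-, D-, E-, and G-criteria for two four-run designs for straight-line regression:

```python
import numpy as np

def criteria(x):
    """Evaluate A-, D-, E-, and G-criteria for straight-line regression
    with runs at the settings x (an illustrative sketch, not library code)."""
    X = np.column_stack([np.ones_like(x), x])
    M = X.T @ X                                       # information matrix
    return {
        "A": np.trace(np.linalg.inv(M)),              # minimize
        "D": np.linalg.det(M),                        # maximize
        "E": np.linalg.eigvalsh(M).min(),             # maximize
        "G": np.diag(X @ np.linalg.inv(M) @ X.T).max(),  # minimize (hat matrix)
    }

good = criteria(np.array([-1.0, -1.0, 1.0, 1.0]))  # runs at the endpoints
poor = criteria(np.array([-0.5, -0.5, 0.5, 0.5]))  # runs pulled inward

# Spreading the runs to the endpoints improves the A-, D-, and E-criteria here:
assert good["A"] < poor["A"] and good["D"] > poor["D"] and good["E"] > poor["E"]
```

Note that the criteria need not agree in general: here the two designs tie on the G-criterion at the design points, which is one reason benchmarking a design under several criteria is recommended below.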
Contrasts
In many applications, the statistician is most concerned with a "parameter of interest" rather than with "nuisance parameters". More generally, statisticians consider linear combinations of parameters, which are estimated via linear combinations of treatment means in the design of experiments and in the analysis of variance; such linear combinations are called contrasts. Statisticians can use appropriate optimality criteria for such parameters of interest and, more generally, for contrasts.^{[10]}
Implementation
Catalogs of optimal designs occur in books and in software libraries.
In addition, major statistical systems like SAS and R have procedures for optimizing a design according to a user's specification. The experimenter must specify a model for the design and an optimality criterion before the method can compute an optimal design.^{[11]}
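Exchange-type searches are one common family of methods behind such procedures. The following is a minimal, illustrative sketch (not the actual algorithm of SAS, R, or any particular package) of a greedy exchange search for a D-optimal design over a candidate set:

```python
import numpy as np

def d_optimal_exchange(cand, n, iters=50, seed=0):
    """Greedy single-point exchange: choose n rows of the candidate matrix
    `cand` to maximize det(X'X). A toy sketch of exchange-type algorithms."""
    rng = np.random.default_rng(seed)
    idx = list(rng.choice(len(cand), size=n, replace=False))

    def logdet(ix):
        sign, val = np.linalg.slogdet(cand[ix].T @ cand[ix])
        return val if sign > 0 else -np.inf

    for _ in range(iters):
        improved = False
        for pos in range(n):                 # try swapping each chosen run
            for j in range(len(cand)):       # ...for every candidate run
                trial = idx.copy()
                trial[pos] = j
                if logdet(trial) > logdet(idx) + 1e-12:
                    idx, improved = trial, True
        if not improved:                     # stop when no single swap helps
            break
    return idx

# Candidates: a grid on [-1, 1] for the straight-line model f(x) = (1, x).
xs = np.linspace(-1.0, 1.0, 21)
cand = np.column_stack([np.ones_like(xs), xs])
chosen = sorted(float(xs[i]) for i in d_optimal_exchange(cand, 4))
# D-optimality pushes the four runs to the endpoints: chosen == [-1, -1, 1, 1]
```

Real implementations (for example, Fedorov-type or coordinate-exchange algorithms) use rank-one determinant updates rather than recomputing det(X'X) for every trial swap, but the search structure is similar.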
Practical considerations
Some advanced topics in optimal design require more statistical theory and practical knowledge in designing experiments.
Model dependence and robustness
Since the optimality criterion of most optimal designs is based on some function of the information matrix, the 'optimality' of a given design is model dependent: while an optimal design is best for that model, its performance may deteriorate on other models, where it can be either better or worse than a non-optimal design.^{[12]} Therefore, it is important to benchmark the performance of designs under alternative models.^{[13]}
Choosing an optimality criterion and robustness
The choice of an appropriate optimality criterion requires some thought, and it is useful to benchmark the performance of designs with respect to several optimality criteria. Cornell writes that
since the [traditional optimality] criteria . . . are varianceminimizing criteria, . . . a design that is optimal for a given model using one of the . . . criteria is usually nearoptimal for the same model with respect to the other criteria.
— ^{[14]}
Indeed, there are several classes of designs for which all the traditional optimality criteria agree, according to the theory of "universal optimality" of Kiefer.^{[15]} The experience of practitioners like Cornell and the "universal optimality" theory of Kiefer suggest that robustness with respect to changes in the optimality criterion is much greater than robustness with respect to changes in the model.
Flexible optimality criteria and convex analysis
High-quality statistical software provides a combination of libraries of optimal designs and iterative methods for constructing approximately optimal designs, depending on the model specified and the optimality criterion. Users may choose a standard optimality criterion or may program a custom-made criterion.
All of the traditional optimality criteria are convex (or concave) functions, and therefore optimal designs are amenable to the mathematical theory of convex analysis, and their computation can use specialized methods of convex minimization.^{[16]} The practitioner need not select exactly one traditional optimality criterion, but can specify a custom criterion. In particular, the practitioner can specify a convex criterion using the maxima of convex optimality criteria and nonnegative combinations of optimality criteria (since these operations preserve convex functions). For convex optimality criteria, the Kiefer–Wolfowitz equivalence theorem allows the practitioner to verify that a given design is globally optimal.^{[17]} The Kiefer–Wolfowitz equivalence theorem is related to the Legendre–Fenchel conjugacy for convex functions.^{[18]}
If an optimality criterion lacks convexity, then finding a global optimum and verifying its optimality are often difficult.
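The equivalence theorem gives a concrete, checkable condition. For D-optimality with p parameters, a design is globally optimal exactly when the standardized prediction variance d(x) = f(x)' M⁻¹ f(x) never exceeds p over the design space, with equality at the support points. A sketch for the quadratic model on [−1, 1], whose D-optimal design is classically known to put equal weight at −1, 0, and 1:

```python
import numpy as np

# Quadratic model f(x) = (1, x, x^2), so p = 3 parameters.
f = lambda x: np.array([1.0, x, x * x])

# Candidate design: probability measure with weight 1/3 at each of -1, 0, 1.
support, weight = [-1.0, 0.0, 1.0], 1.0 / 3.0
M = sum(weight * np.outer(f(x), f(x)) for x in support)  # information matrix
Minv = np.linalg.inv(M)

def d(x):
    """Standardized prediction variance at the point x."""
    return f(x) @ Minv @ f(x)

# Kiefer-Wolfowitz check: d(x) <= p everywhere, with equality on the support.
grid = np.linspace(-1.0, 1.0, 2001)
assert max(d(x) for x in grid) <= 3.0 + 1e-9
assert all(abs(d(x) - 3.0) < 1e-9 for x in support)
```

If instead some x had d(x) > p, moving design weight toward that point would improve the D-criterion; iterative design algorithms exploit exactly this observation.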
Model uncertainty and Bayesian approaches
Model selection
When scientists wish to test several theories, a statistician can design an experiment that allows optimal tests between specified models. Such "discrimination experiments" are especially important in the biostatistics supporting pharmacokinetics and pharmacodynamics, following the work of Cox and Atkinson.^{[19]}
Bayesian experimental design
When practitioners need to consider multiple models, they can specify a probability measure on the models and then select any design maximizing the expected value of such an experiment. Such probability-based optimal designs are called optimal Bayesian designs. Such Bayesian designs are used especially for generalized linear models (where the response follows an exponential-family distribution).^{[20]}
The use of a Bayesian design does not force statisticians to use Bayesian methods to analyze the data, however. Indeed, the "Bayesian" label for probability-based experimental designs is disliked by some researchers.^{[21]} Alternative terminology for "Bayesian" optimality includes "on-average" optimality or "population" optimality.
Iterative experimentation
Scientific experimentation is an iterative process, and statisticians have developed several approaches to the optimal design of sequential experiments.
Sequential analysis
Sequential analysis was pioneered by Abraham Wald.^{[22]} In 1972, Herman Chernoff wrote an overview of optimal sequential designs,^{[23]} while adaptive designs were surveyed later by S. Zacks.^{[24]} Of course, much work on the optimal design of experiments is related to the theory of optimal decisions, especially the statistical decision theory of Abraham Wald.^{[25]}
Response-surface methodology
Optimal designs for response-surface models are discussed in the textbook by Atkinson, Donev and Tobias, in the survey of Gaffke and Heiligers, and in the mathematical text of Pukelsheim. The blocking of optimal designs is discussed in the textbook of Atkinson, Donev and Tobias and also in the monograph by Goos.
The earliest optimal designs were developed to estimate the parameters of regression models with continuous variables, for example, by J. D. Gergonne in 1815 (Stigler). In English, two early contributions were made by Charles S. Peirce and Kirstine Smith.
Pioneering designs for multivariate response surfaces, such as the Box–Behnken design, require excessive experimental runs when the number of variables exceeds three.^{[26]} Box's "central-composite" designs require more experimental runs than do the optimal designs of Kôno.^{[27]}
System identification and stochastic approximation
The optimization of sequential experimentation is also studied in the response-surface methodology of G. E. P. Box.^{[29]}
Adaptive designs are used in clinical trials, and optimal adaptive designs are surveyed in the Handbook of Experimental Designs chapter by Shelemyahu Zacks.
Specifying the number of experimental runs
Using a computer to find a good design
There are several methods of finding an optimal design, given an a priori restriction on the number of experimental runs or replications. Some of these methods are discussed by Atkinson, Donev and Tobias and in the paper by Hardin and Sloane. Of course, fixing the number of experimental runs a priori would be impractical. Prudent statisticians also examine the other optimal designs, whose numbers of experimental runs differ.
Discretizing probabilitymeasure designs
In the mathematical theory of optimal experiments, an optimal design can be a probability measure that is supported on an infinite set of observation locations. Such optimal probability-measure designs solve a mathematical problem that neglects the cost of observations and experimental runs. Nonetheless, such optimal probability-measure designs can be discretized to furnish approximately optimal designs.^{[30]}
In some cases, a finite set of observation locations suffices to support an optimal design. Such a result was proved by Kôno and Kiefer in their works on response-surface designs for quadratic models. The Kôno–Kiefer analysis explains why optimal designs for response surfaces can have discrete supports, which are very similar to those of the less efficient designs that have been traditional in response-surface methodology.^{[31]}
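As a sketch of such discretization, the following rounds the weights of a probability-measure design to integer run counts for a fixed total n. It uses a simple largest-remainder rule for illustration, not Pukelsheim's efficient-rounding procedure:

```python
from math import floor

def apportion(weights, n):
    """Round design weights (summing to 1) to integer run counts summing to n,
    using a largest-remainder rule. A sketch; efficient rounding differs."""
    raw = [w * n for w in weights]
    counts = [floor(r) for r in raw]            # integer parts first
    leftover = n - sum(counts)
    # Hand the leftover runs to the points with the largest fractional parts.
    by_fraction = sorted(range(len(raw)),
                         key=lambda i: raw[i] - counts[i], reverse=True)
    for i in by_fraction[:leftover]:
        counts[i] += 1
    return counts

# A measure with weight 1/3 on each of three support points, rounded to 10 runs:
runs = apportion([1/3, 1/3, 1/3], 10)   # the run counts sum to 10
```

Because 10 runs cannot be split exactly three ways, one support point necessarily receives an extra run; how that extra run is assigned is precisely what distinguishes the different rounding procedures.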
History
The prophet of scientific experimentation, Francis Bacon, foresaw that experimental designs should be improved. Researchers who improved experiments were praised in Bacon's utopian novel New Atlantis:
Then after divers meetings and consults of our whole number, to consider of the former labors and collections, we have three that take care out of them to direct new experiments, of a higher light, more penetrating into nature than the former. These we call lamps.
In 1815, an article on optimal designs for polynomial regression was published by Joseph Diaz Gergonne, according to Stigler.
Charles S. Peirce proposed an economic theory of scientific experimentation in 1876, which sought to maximize the precision of the estimates. Peirce's optimal allocation immediately improved the accuracy of gravitational experiments and was used for decades by Peirce and his colleagues. In his lecture at Johns Hopkins University, published in 1882, Peirce introduced experimental design with these words:
Logic will not undertake to inform you what kind of experiments you ought to make in order best to determine the acceleration of gravity, or the value of the Ohm; but it will tell you how to proceed to form a plan of experimentation.
[....] Unfortunately practice generally precedes theory, and it is the usual fate of mankind to get things done in some boggling way first, and find out afterward how they could have been done much more easily and perfectly.^{[32]}
Like Bacon, Peirce was aware that experimental methods should strive for substantial improvement (even optimality).
Kirstine Smith proposed optimal designs for polynomial models in 1918. (Kirstine Smith had been a student of the Danish statistician Thorvald N. Thiele and was working with Karl Pearson in London.)
See also
Notes

^ Nordström (1999, p. 176)


^ The adjective "optimum" (and not "optimal") "is the slightly older form in English and avoids the construction 'optim(um) + al´—there is no 'optimalis' in Latin" (page x in Optimum Experimental Designs, with SAS, by Atkinson, Donev, and Tobias).

^ These three advantages (of optimal designs) are documented in the textbook by Atkinson, Donev, and Tobias.

^ Such criteria are called objective functions in optimization theory.

^ The Fisher information and other "information" functionals are fundamental concepts in statistical theory.

^ Traditionally, statisticians have evaluated estimators and designs by considering some summary statistic of the covariance matrix (of a mean-unbiased estimator), usually with positive real values (like the determinant or matrix trace). Working with positive real numbers brings several advantages: If the estimator of a single parameter has a positive variance, then the variance and the Fisher information are both positive real numbers; hence they are members of the convex cone of nonnegative real numbers (whose nonzero members have reciprocals in this same cone).
For several parameters, the covariance matrices and information matrices are elements of the convex cone of nonnegative-definite symmetric matrices in a partially ordered vector space, under the Loewner (Löwner) order. This cone is closed under matrix addition, under matrix inversion, and under the multiplication of positive real numbers and matrices. An exposition of matrix theory and the Loewner order appears in Pukelsheim.

^ The above optimality criteria are convex functions on domains of symmetric positive-semidefinite matrices: See an online textbook for practitioners, which has many illustrations and statistical applications:
Boyd and Vandenberghe discuss optimal experimental designs on pages 384–396.

^ Optimality criteria for "parameters of interest" and for contrasts are discussed by Atkinson, Donev and Tobias.

^ Iterative methods and approximation algorithms are surveyed in the textbook by Atkinson, Donev and Tobias and in the monographs of Fedorov (historical) and Pukelsheim, and in the survey article by Gaffke and Heiligers.

^ See Kiefer ("Optimum Designs for Fitting Biased Multiresponse Surfaces" pages 289–299).

^ Such benchmarking is discussed in the textbook by Atkinson et al. and in the papers of Kiefer. Model-robust designs (including "Bayesian" designs) are surveyed by Chang and Notz.

^ (Pages 400–401)

^ An introduction to "universal optimality" appears in the textbook of Atkinson, Donev, and Tobias. More detailed expositions occur in the advanced textbook of Pukelsheim and the papers of Kiefer.

^ Computational methods are discussed by Pukelsheim and by Gaffke and Heiligers.

^ The Kiefer–Wolfowitz equivalence theorem is discussed in Chapter 9 of Atkinson, Donev, and Tobias.

^ Pukelsheim uses convex analysis to study the Kiefer–Wolfowitz equivalence theorem in relation to the Legendre–Fenchel conjugacy for convex functions. The minimization of convex functions on domains of symmetric positive-semidefinite matrices is explained in an online textbook for practitioners, which has many illustrations and statistical applications:
Boyd and Vandenberghe discuss optimal experimental designs on pages 384–396.

^ See Chapter 20 in Atkinson, Donev, and Tobias.

^ Bayesian designs are discussed in Chapter 18 of the textbook by Atkinson, Donev, and Tobias. More advanced discussions occur in the monograph by Fedorov and Hackl, and the articles by Chaloner and Verdinelli and by DasGupta. Bayesian designs and other aspects of "model-robust" designs are discussed by Chang and Notz.

^ As an alternative to "Bayesian optimality", "on-average optimality" is advocated in Fedorov and Hackl.

^ Chernoff, H. (1972) Sequential Analysis and Optimal Design, SIAM Monograph.

^ Zacks, S. (1996) "Adaptive Designs for Parametric Models". In: Ghosh, S. and Rao, C. R., (Eds) (1996). Design and Analysis of Experiments, Handbook of Statistics, Volume 13. North-Holland. ISBN 0444820612. (pages 151–180)

^ Henry P. Wynn wrote, "the modern theory of optimum design has its roots in the decision theory school of U.S. statistics founded by Abraham Wald" in his introduction "Jack Kiefer's Contributions to Experimental Design", which is pages xvii–xxiv in the following volume:
Kiefer acknowledges Wald's influence and results on many pages – 273 (page 55 in the reprinted volume), 280 (62), 289–291 (71–73), 294 (76), 297 (79), 315 (97), 319 (101) – in this article:

^ In the field of response surface methodology, the inefficiency of the Box–Behnken design is noted by Wu and Hamada (page 422).
Optimal designs for "followup" experiments are discussed by Wu and Hamada.

^ Box's "central-composite" designs are discussed by Atkinson, Donev, and Tobias (page 165). These authors also discuss the blocking of Kôno-type designs for quadratic response-surfaces.

^ In system identification, the following books have chapters on optimal experimental design:

^ Some step-size rules of Judin & Nemirovskii and of Polyak are explained in the textbook by Kushner and Yin:

^ The discretization of optimal probability-measure designs to provide approximately optimal designs is discussed by Atkinson, Donev, and Tobias and by Pukelsheim (especially Chapter 12).

^ Regarding designs for quadratic response-surfaces, the results of Kôno and Kiefer are discussed in Atkinson, Donev, and Tobias. Mathematically, such results are associated with Chebyshev polynomials, "Markov systems", and "moment spaces": See

^ Peirce, C. S. (1882), "Introductory Lecture on the Study of Logic" delivered September 1882, published in Johns Hopkins University Circulars, v. 2, n. 19, pp. 11–12, November 1882, see p. 11, Google Books Eprint. Reprinted in Collected Papers v. 7, paragraphs 59–76, see 59, 63, Writings of Charles S. Peirce v. 4, pp. 378–82, see 378, 379, and The Essential Peirce v. 1, pp. 210–14, see 210–1, also lower down on 211.
References
Further reading
Textbooks for practitioners and students
Textbooks emphasizing regression and responsesurface methodology
The textbook by Atkinson, Donev and Tobias has been used for short courses for industrial practitioners as well as university courses.
Textbooks emphasizing block designs
Optimal block designs are discussed by Bailey and by Bapat. The first chapter of Bapat's book reviews the linear algebra used by Bailey (or the advanced books below). Bailey's exercises and discussion of randomization both emphasize statistical concepts (rather than algebraic computations).

Draft available online. (Especially Chapter 11.8 "Optimality")

(Chapter 5 "Block designs and optimality", pages 99–111)
Optimal block designs are discussed in the advanced monograph by Shah and Sinha and in the survey articles by Cheng and by Majumdar.
Books for professional statisticians and researchers
Articles and chapters

R. H. Hardin and N. J. A. Sloane, "A New Approach to the Construction of Optimal Designs", Journal of Statistical Planning and Inference, vol. 37, 1993, pp. 339–369.

Historical

(Appendix No. 14). NOAA PDF Eprint. Reprinted in paragraphs 139–157, and in Abstract at JSTOR.