Professor JNR Jeffers
D.Sc. (Lancaster), CStat, CIBiol, FIBiol, CIFor

Design of Experiments

Stating the objectives
1. Have you stated clearly and explicitly the objectives of the experiment and the reasons for undertaking it?
2. Have you translated these objectives into precise questions that the experiment can be expected to answer?

Defining the population about which inferences are to be made
3. Have you defined carefully the population about which you are seeking to make inferences from the results of the experiment?
4. Is the site or location of the experiment representative of that defined population?
5. If not, what do you need to do to find a representative site?
6. Is the experimental material to be used in the experiment, e.g. plants, animals, soil, water, etc., representative of the defined population?
7. If not, how can representative material be obtained?
8. If either the location or the experimental material is not representative of the population about which you wish to make inferences, is it worth doing the experiment at all?

Selection of experimental treatments
9. Have the experimental treatments been defined sufficiently precisely for them to be applied correctly by the experimenter or by those wishing to repeat the experiment, and are they realistic?
10. If the "treatments" consist of species, varieties, or strains of organisms, are they representative of some defined population of organisms?
11. Can the experimental treatments be expressed as "factors", that is, by groups of treatments each applied at two or more levels (see the sketch after this list)?
12. If so, can all combinations of factors be achieved and are these combinations realistic?
13. Is the number of levels within each factor restricted to two or three?
14. If not, is there any real advantage in using more than three levels to determine the shape of the response curve?
15. Do the levels of any one factor change by a constant amount or in a constant ratio?
16. If not, is there a good reason for departing from linear relationships, or relationships that can be made linear by an appropriate transformation?
17. Is the number of factorial combinations so large that there would be some advantage in considering only some of those combinations, perhaps sequentially?
18. Is there a naturally defined control treatment which should be included in the experiment?
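
As a quick check on questions 11, 12 and 17, it can help to enumerate the full set of factor-level combinations before committing to a design. The Python sketch below does this; the factors and levels shown are hypothetical examples for illustration only.

from itertools import product

# Hypothetical factors and levels, for illustration only
factors = {
    "nitrogen": ["low", "medium", "high"],
    "irrigation": ["none", "weekly"],
    "variety": ["A", "B"],
}

combinations = list(product(*factors.values()))
print(len(combinations), "treatment combinations")  # 3 x 2 x 2 = 12
for combination in combinations:
    print(dict(zip(factors.keys(), combination)))

If the count is too large for every combination to be applied, question 17 points towards using only a fraction of the combinations, preferably chosen with statistical advice so that the important effects remain estimable.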

Plot size and shape
19. Is the plot size for the experiment defined by the nature of the experimental material or the site?
20. If not, will the proposed plot size enable the treatments to be applied and allow the desired records to be made?
21. Is the plot shape defined by the nature of the experimental material or treatments?
22. If not, will the proposed plot shape enable the treatments to be applied and allow the desired records to be made?
23. Are the experimental plots all of the same size and shape?
24. If not, are you aware of the problems that may be encountered during the analysis of the results of the experiment?
25. Is there likely to be any interference between the individual plots of the experiment?
26. Can this interference be reduced by increasing the space between the plots, or by surrounding each plot by a buffer zone?
27. Are the plots of the experiment of the smallest size practically possible?

Number of replications
28. Do you have any preliminary estimates of the precision likely to be achieved by the experiment (expressed as a coefficient of variation, for example)?
29. Is it possible to conduct a pilot experiment to determine the coefficient of variation likely to be encountered, and to test the experimental procedures?
30. Have you determined the size of the difference between treatment means that you would regard as of practical importance, if such a difference were to exist?
31. Have you calculated the number of replications that will be necessary to detect differences of a size that you regard as being of practical importance (a worked sketch follows this list)?
32. If there is insufficient land or experimental material for the number of replications required to give significant differences, is it worth doing the experiment at all?
33. Do the controls need to be replicated more or less frequently than the other treatments in order to place greater emphasis on particular comparisons?
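
One common way of answering questions 28 to 31 is to combine a preliminary coefficient of variation with the smallest difference of practical importance, expressed as a percentage of the mean, and to find the smallest number of replications r satisfying the usual approximation r >= 2(t_alpha + t_beta)^2 (CV/d)^2. The Python sketch below assumes a simple comparison of two treatment means; the function name, the example figures and the default significance level and power are illustrative assumptions, not part of the checklist.

from scipy import stats

def replications_needed(cv, diff_pct, alpha=0.05, power=0.80, max_r=1000):
    # Smallest number of replications per treatment expected to detect a
    # difference of diff_pct (as a percentage of the mean) between two
    # treatment means, given a residual coefficient of variation cv (%).
    for r in range(2, max_r):
        df = 2 * (r - 1)                          # error degrees of freedom
        t_alpha = stats.t.ppf(1 - alpha / 2, df)  # two-sided significance level
        t_beta = stats.t.ppf(power, df)           # allowance for the desired power
        if r >= 2 * (t_alpha + t_beta) ** 2 * (cv / diff_pct) ** 2:
            return r
    return None

# e.g. a pilot coefficient of variation of 12% and a practically important
# difference of 10% of the mean
print(replications_needed(cv=12, diff_pct=10))

If the answer exceeds the land or material available (question 32), the design needs to be reconsidered before the experiment is started.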

Layout of the experiment

34. Is it possible to divide the site or the experimental material into blocks with less variation than over the experiment as a whole?
35. Are these blocks large enough to hold one plot of each treatment and control?
36. Is a randomised block design desirable for its ease of analysis and robustness?
37. Are the important comparisons estimated with the greatest precision?
38. If the treatment comparisons are not orthogonal, do you know how the data can be analysed, and will that analysis answer the questions the experiment is designed to pose?
39. Are there any regular trends across the experimental site or material? If so, are these trends in one or both directions?
40. Have you considered the use of row and column designs to remove the effects of one- or two-way trends (see the sketch after this list)?
41. Is there likely to be any advantage in the use of a split plot design, perhaps because certain treatments cannot be applied uniformly to small plots?
42. If so, are the treatments applied to the sub-plots the ones for which the greatest precision is required?
43. Will confounding the treatment factors or interactions with block differences improve the efficiency of the design?
44. Have you planned to use the blocks of the experiment to absorb as much as possible of the extraneous variation in the conduct of the experiment?
45. Is it possible that plots may be lost through accidents or mishaps?
46. If so, does your choice of experimental layout allow for a meaningful interpretation of the results?
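
Where question 40 applies, the simplest row and column design is a Latin square: each treatment occurs once in every row and once in every column, so steady trends in both directions are removed from the treatment comparisons. The Python sketch below constructs and partially randomises one; the treatment labels and seed are illustrative, and shuffling whole rows and columns is only part of a full randomisation (the treatment labels should also be assigned at random).

import random

random.seed(17)  # record the seed with the plan

treatments = ["A", "B", "C", "D"]
n = len(treatments)

# Cyclic Latin square: every treatment once in each row and each column
square = [[treatments[(row + col) % n] for col in range(n)] for row in range(n)]

# Shuffling whole rows and whole columns preserves the Latin square property
random.shuffle(square)
column_order = list(range(n))
random.shuffle(column_order)
square = [[row[c] for c in column_order] for row in square]

for row in square:
    print(" ".join(row))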

Randomisation

47. Are the treatments and controls to be allocated to the plots of the experiment by an explicit randomising procedure?
48. Is a separate randomisation to be made for each block or row of the experiment?
49. Are the constraints on the randomisation correctly applied?
50. Are you tempted to re-randomise any part of the allocation of treatments and controls to plots because of apparently unfortunate coincidences?
51. If so, do you have some knowledge of variation in the site or experimental material that has not been incorporated into the design of the experiment?
52. Does a plan exist showing the allocation of the treatments and controls to the individual plots (see the sketch after this list)?
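
Questions 47, 48 and 52 can be satisfied together by generating the plan with an explicit, reproducible procedure. The Python sketch below randomises a set of treatments separately within each block of a randomised block design; the treatment names, the number of blocks and the seed are illustrative assumptions.

import random

random.seed(20240101)  # keep the seed with the plan so the allocation can be reproduced

treatments = ["control", "T1", "T2", "T3"]
n_blocks = 4

plan = {}
for block in range(1, n_blocks + 1):
    order = treatments[:]
    random.shuffle(order)  # a separate randomisation for every block (question 48)
    for plot, treatment in enumerate(order, start=1):
        plan[(block, plot)] = treatment

# The printed plan shows the allocation of treatments to plots (question 52)
for (block, plot), treatment in sorted(plan.items()):
    print(f"block {block}, plot {plot}: {treatment}")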

Recording of results

53. Does each plot of the experiment have a clear number or designation, linking it unambiguously to the plan of the experiment?
54. Have you defined the time intervals at which assessments are to be made?
55. Have you defined the variables or attributes to be counted or measured?
56. If so, are the measurements meaningful and relevant to the objectives?
57. Are any of the assessments to be made from samples taken from the plots?
58. If so, has the efficiency of the sampling been tested?
59. Are any of the assessments to be used as covariates to correct for unavoidable but measurable differences between the plots?
60. If so, will these assessments be made before any of the treatments are applied?
61. Have you planned to use the blocks of the experiment to absorb any unwanted variation in assessment, e.g. different observers?
62. Have you designed a record form on which the assessments will be entered (see the sketch after this list)?
63. Have you indicated on the record form the units to be used for each assessment?
64. Have you indicated on the record form the required precision of the assessments?
65. Have the assessors been trained to make the necessary assessments correctly?
66. Is there space on the record form for observations of unexpected changes or effects to be recorded, and are assessors encouraged to look for these effects?
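
Questions 53 and 62 to 66 amount to designing the record form before the first assessment is made. The Python sketch below writes a blank form as a CSV file; the file name and the columns are hypothetical examples of how the plot designation, units, required precision and space for unexpected observations might be built in.

import csv

columns = [
    "plot_id",              # links each record unambiguously to the plan
    "block",
    "treatment",
    "assessment_date",      # the defined time interval for the assessment
    "observer",             # lets blocks absorb observer differences if necessary
    "height_cm",            # units stated in the column name
    "dry_weight_g_to_0.1",  # required precision indicated
    "notes",                # unexpected changes or effects
]

with open("record_form.csv", "w", newline="") as f:
    csv.writer(f).writerow(columns)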

Planning for analysis
67. Have the hypotheses to be tested been defined a priori?
68. Are these tests expressed, as far as possible, as null hypotheses?
69. Have you defined the contrasts for which estimates are to be derived from the result of the experiment?
70. Have any special contrasts to be tested or estimated in the analysis been defined in advance of a first inspection of the results of the experiment?
71. Do you understand the methods of analysis that will need to be used, and have you made arrangements for the computations to be done on a computer or elsewhere (see the sketch after this list)?
72. If the computations are to be done on a computer, does the necessary program exist and do you understand the constraints imposed by that program?
73. If not, have you obtained advice from a qualified statistician on the analysis and interpretation of the results, preferably before starting the experiment?
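
For questions 67 to 72, it is worth confirming before the experiment starts that the intended analysis can actually be run on the kind of data the record form will produce. The Python sketch below fits a conventional randomised block analysis of variance with the statsmodels package; the file name and column names follow the hypothetical record form above and are assumptions, not requirements.

import pandas as pd
import statsmodels.api as sm
from statsmodels.formula.api import ols

# One completed row per plot: block, treatment and the measured response
data = pd.read_csv("record_form.csv")

# Randomised block model: additive block and treatment effects on the response
model = ols("height_cm ~ C(block) + C(treatment)", data=data).fit()

# Analysis of variance table; the treatment line tests the null hypothesis
# of no difference between treatment means
print(sm.stats.anova_lm(model, typ=2))

# Estimated treatment effects and standard errors, for the contrasts defined in advance
print(model.summary())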

The final (and most important) question

74. If you are in doubt about the purpose of any of the questions in this checklist, should you not obtain some advice from a statistician with experience of your field of research before continuing with the experiment?

There is usually little that a statistician can do to help you once you have committed yourself to a particular experimental design or procedure.

Bibliography

If any of the questions in this checklist refer to aspects of statistical theory with which you are unfamiliar, further information can be found in the following texts:

Cochran W.G. and Cox G.M. (1957) Experimental designs. 2nd ed. Wiley, New York.
Cox D.R. (1958) Planning of experiments. Wiley, New York.
Dyke G. (1974) Comparative experiments with field crops. Butterworths, Sevenoaks.
Federer W.T. (1955) Experimental design: theory and application. Macmillan, London.
Finney D.J. (1955) Experimental design and its statistical basis. Cambridge University Press, Cambridge.
Fisher R.A. (1935) The design of experiments. Oliver and Boyd, Edinburgh.
John J.A. and Quenouille M.H. (1977) Experiments: design and analysis. Griffin, London.
Lunn A.D. and McNeil D.R. Computer-interactive data analysis. Wiley, London.
Mead R. and Curnow R.N. (1983) Statistical methods in agriculture and experimental biology. Chapman and Hall, London.
Pearce S.C. (1965) Biological statistics: an introduction. McGraw-Hill, New York.
Pearce S.C. (1976) An agricultural field experiment: a statistical examination of theory and practice. Commonwealth Agricultural Bureau, Farnham Royal.
Quenouille M.H. (1953) The design and analysis of experiments. Griffin, London.
Scheffe H. (1959) The analysis of variance. Wiley, New York.
