Submitted to: Book Chapter
Publication Type: Book / Chapter
Publication Acceptance Date: 12/30/2015
Publication Date: N/A
Technical Abstract: Biological research is expensive, with monetary costs to granting agencies and emotional costs to researchers. As such, biological researchers should always follow the mantra, "failure is not an option." A failed experimental design generally manifests as an experiment with high P-values, leaving the researcher with uncertain or equivocal conclusions: are the treatments really not different from each other, is the experimental design faulty due to poor planning and decision-making, or did some unknown and unseen disturbance inflate the experimental errors? Rarely can these questions be answered when P-values are high, often resulting in difficult-to-publish results and wasted time and money. To borrow an experimental design term, these causal explanations are confounded with each other when treatment effects are non-significant; it is generally impossible to assign the outcome to one explanation or the other. Designing experiments with sufficient power to detect real treatment differences is the best way to avoid these problems. Consider the following ideas and guidelines in planning future experiments.
• Be able to understand and define an experimental unit in any experimental situation in which you are involved.
• Strive to always replicate experiments at the experimental-unit level, and include additional levels or types of replication as necessary to improve precision or to broaden inferences.
• Know the costs associated with various forms of replication so that you can conduct cost-benefit analyses of alternative replication strategies and make conscious decisions about which strategy gives the highest probability of success.
• Have a realistic idea of the treatment differences you expect from any given experiment so that you can conduct a meaningful power analysis before making final decisions on the form and amount of replication.
• Do not become complacent with regard to replication and power. Conduct retrospective analyses of completed experiments, comparing your realized ability to detect differences among treatment means with your a priori power analyses, and adjust both experimental designs and replication strategies as needed to improve the power of future experiments.
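The power analysis recommended above can be sketched in a few lines of code. The function below is a minimal illustration, not the chapter's method: it approximates the power of a two-sided, two-sample comparison using a normal (z) approximation rather than the exact noncentral t distribution, so it slightly overstates power at small replicate numbers. The inputs — the smallest treatment difference worth detecting, the anticipated residual standard deviation, and the number of experimental units per treatment — are exactly the quantities the guidelines say the researcher must estimate before fixing a replication strategy.

```python
from math import sqrt, erf

def norm_cdf(x):
    # Standard normal cumulative distribution function.
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def norm_ppf(p, lo=-10.0, hi=10.0):
    # Inverse of norm_cdf by bisection (sufficient for this sketch).
    for _ in range(100):
        mid = (lo + hi) / 2.0
        if norm_cdf(mid) < p:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2.0

def power_two_sample(delta, sigma, n_per_group, alpha=0.05):
    """Approximate power of a two-sided, two-sample z-test.

    delta       -- smallest treatment difference worth detecting
    sigma       -- anticipated residual standard deviation
    n_per_group -- replicates (experimental units) per treatment
    alpha       -- Type I error rate
    """
    se = sigma * sqrt(2.0 / n_per_group)   # SE of the difference in means
    z_crit = norm_ppf(1.0 - alpha / 2.0)   # two-sided critical value
    z = abs(delta) / se                    # standardized true difference
    # Probability the test statistic falls in either rejection region.
    return norm_cdf(z - z_crit) + norm_cdf(-z - z_crit)
```

For example, `power_two_sample(delta=2.0, sigma=2.0, n_per_group=8)` evaluates the power to detect a one-standard-deviation difference with eight experimental units per treatment; rerunning it across a range of `n_per_group` values is the cost-benefit comparison of replication strategies described above. Exact t-based calculations are available in standard statistical software when the replicate numbers are small.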