Parametric analysis is a procedure that relies on the assumption that the distribution of the responses we are measuring, once any fixed effects have been taken into account, follows a certain probability distribution. We then make statistical inferences about the parameters that characterise this distribution. For example, we may assume that the response we are measuring is normally distributed (a distribution characterised by its mean and variance, which we estimate with the sample mean and sample variance). Under this assumption we can evaluate the differences between the sample means by comparing the size of these differences to the sample variance. However, we need not restrict ourselves to the normal distribution.
Parametric analysis is also a process whereby the output of a system is observed while a single test parameter is varied over a range and all other test parameters are held fixed. It helps the modeller to understand how the problem solutions change as a function of the individual input parameters. The overall process is also referred to as sensitivity analysis, because it shows the sensitivity of the model's output to changes in its inputs.
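The parameter-sweep idea described above can be sketched in a few lines of Python. The model below (dose, body mass and clearance) is purely a hypothetical illustration, not taken from the cited sources:

```python
# Minimal parametric (sensitivity) analysis sketch: vary one input
# parameter over a range while holding the others fixed, and record
# how the output responds. The model itself is invented for illustration.

def model(dose, body_mass=70.0, clearance=5.0):
    """Toy response model: output grows with dose, scaled by body mass
    and divided by clearance. Purely illustrative."""
    return dose * body_mass / (clearance * 100.0)

# Sweep the 'dose' parameter; keep body_mass and clearance at defaults.
doses = [10, 20, 40, 80]
responses = [model(d) for d in doses]

for d, r in zip(doses, responses):
    print(f"dose={d:3d} -> response={r:.2f}")
```

Repeating the sweep for each input parameter in turn shows which parameters the output is most sensitive to.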
As well as making distributional assumptions, when making a parametric analysis, we may also need to assume:
- The variability is the same across all groups (homogeneity of variance);
- The responses are numeric and continuous;
- The observations are independent;
- There are no outlying observations unduly affecting the results;
- The results behave in an additive way.
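Two of the assumptions above, normality and homogeneity of variance, can be checked formally before a parametric analysis. A minimal sketch using `scipy.stats`, with invented data:

```python
# Sketch of checking two parametric assumptions: normality of each
# group (Shapiro-Wilk test) and homogeneity of variance across groups
# (Levene's test). The data below are invented for illustration.
from scipy import stats

group_a = [5.1, 4.9, 5.3, 5.0, 5.2, 4.8, 5.1]
group_b = [6.0, 6.3, 5.8, 6.1, 6.2, 5.9, 6.0]

# Normality of each group's responses
for name, g in [("A", group_a), ("B", group_b)]:
    stat, p = stats.shapiro(g)
    print(f"group {name}: Shapiro-Wilk p = {p:.3f}")

# Homogeneity of variance across the groups
stat, p = stats.levene(group_a, group_b)
print(f"Levene's test p = {p:.3f}")
# A large p-value gives no evidence against the assumption being tested.
```

If either test rejects its assumption, a transformation of the response or a non-parametric analysis may be more appropriate.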
In human growth research, the main objective of parametric analysis is to describe or predict growth, or differences in growth, as a function of chronological age.
We distinguish different types of tests that provide a set of tools for analysing data. Among other things, there are:
- t-test - This test should be used if you have a single factor at two levels (for example, you wish to compare a treatment and a control). We do not advise using the t-test in more complicated situations.
- One-way ANOVA - This test is suitable if your experiment consists of a single factor at more than two levels (for example, three doses of a test compound and a control). The ANOVA provides an overall test of whether the factor-level means differ. If you need to make pairwise comparisons between the individual factor-level means, then you should use "post hoc" tests, multiple comparison procedures or planned comparisons. However, you should note that all of these tests - the overall test and any pairwise tests - use an estimate of the variability obtained from all of the data. This estimate is more robust and reproducible because all of the data have been used to calculate it.
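The two tests above can be illustrated with `scipy.stats` on invented data: a two-sample t-test for a single factor at two levels, and a one-way ANOVA for a factor at three levels:

```python
# Illustration of the tests described above, with invented data.
from scipy import stats

control = [10.2, 9.8, 10.5, 10.1, 9.9, 10.3]
treated = [11.4, 11.0, 11.7, 11.2, 11.5, 11.1]

# Single factor at two levels: two-sample t-test
t_stat, t_p = stats.ttest_ind(control, treated)
print(f"t-test: t = {t_stat:.2f}, p = {t_p:.4f}")

# Single factor at three levels: one-way ANOVA
dose_low  = [10.8, 10.5, 11.0, 10.7]
dose_high = [12.1, 11.8, 12.3, 12.0]
f_stat, f_p = stats.f_oneway(control[:4], dose_low, dose_high)
print(f"ANOVA: F = {f_stat:.2f}, p = {f_p:.4f}")
# A small ANOVA p-value indicates the means differ overall; pairwise
# "post hoc" comparisons are then needed to locate the differences.
```

Note that both tests pool the data to estimate the variability, as discussed above.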
When addressing sample size, there are two general approaches to the underlying statistics:
- the parametric - this approach assumes that the functional form of the frequency distribution is known and is concerned with testing hypotheses about parameters of the distribution, or estimating the parameters.
- the nonparametric - this approach does not assume the form of the frequency distribution (i.e. it applies distribution-free statistics).
"Parametric analysis is the most powerful. Non-parametric analysis in the most flexible".
Ordinarily, non-parametric procedures are less powerful than the equivalent parametric approaches when the assumptions of the latter are valid. The assumptions provide the parametric approach with extra information, which the non-parametric approach must discover from the data. The more relabellings, the better the potential of the non-parametric approach relative to the parametric approach: in a sense, with more relabellings the method has more data, and it discovers the null distribution that the parametric approach assumes. Nevertheless, if the assumptions needed for a parametric analysis are not tenable, a non-parametric approach becomes the only valid method of analysis.
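The "relabelling" idea can be made concrete with a permutation test, which builds the null distribution of the difference in group means by repeatedly shuffling the group labels. A self-contained sketch on invented data:

```python
# Permutation test sketch: the null distribution is constructed by
# relabelling (shuffling) the group assignments many times.
# The data below are invented for illustration.
import random

random.seed(0)
group_a = [5.1, 4.9, 5.3, 5.0, 5.2]
group_b = [6.0, 6.3, 5.8, 6.1, 6.2]

def mean(xs):
    return sum(xs) / len(xs)

observed = abs(mean(group_a) - mean(group_b))
pooled = group_a + group_b
n_a = len(group_a)

count = 0
n_perm = 10000
for _ in range(n_perm):
    random.shuffle(pooled)  # relabel: random reassignment to groups
    diff = abs(mean(pooled[:n_a]) - mean(pooled[n_a:]))
    if diff >= observed:
        count += 1

p_value = count / n_perm
print(f"permutation p-value = {p_value:.4f}")
# More relabellings give a finer approximation of the null distribution.
```

The p-value is simply the fraction of relabellings that produce a difference at least as extreme as the one observed, with no distributional assumption required.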
- S. T. Bate, R. A. Clark, 2014, page 151
- D. L. Giadrosich, 1995, page 78
- S. T. Bate, R. A. Clark, 2014, page 152
- R. C. Hauspie, N. Cameron, L. Molinari, 2004, page 234
- S. T. Bate, R. A. Clark, 2014, page 151
- D. L. Giadrosich, 1995, page 121
- D. W. Scott, 2015
- W. D. Penny, K. J. Friston, J. T. Ashburner, S. J. Kiebel, T. E. Nichols, 2011, pages 260,261
- Bate S. T., Clark R. A., (2014), The Design and Statistical Analysis of Animal Experiments, Cambridge University Press, Cambridge, England
- Giadrosich D. L., (1995), Operations Research Analysis in Test and Evaluation, American Institute of Aeronautics and Astronautics, Reston, Virginia, United States of America
- Hauspie R. C., Cameron N., Molinari L., (2004), Methods in Human Growth Research, Cambridge University Press, Cambridge, England
- Penny W. D., Friston K. J., Ashburner J. T., Kiebel S. J., Nichols T. E., (2011), Statistical Parametric Mapping: The Analysis of Functional Brain Images, Elsevier, St. Louis, Missouri, United States of America
- Scott D. W., (2015), Multivariate Density Estimation: Theory, Practice, and Visualization, John Wiley & Sons, Hoboken, New Jersey, United States of America
Author: Monika Mendak