
2. 12.2 Post Hoc Tests

In document Research Methodology (pages 99-107)

Recall from earlier that the ANOVA test tells you whether there is an overall difference between your groups, but it does not tell you which specific groups differed - post-hoc tests do. Because post-hoc tests are run to confirm where the differences between groups occurred, they should only be run when you have shown an overall significant difference in group means (i.e. a significant one-way ANOVA result).

This handout provides information on the use of post hoc tests in the Analysis of Variance (ANOVA). Post hoc tests are designed for situations in which the researcher has already obtained a significant omnibus F-test with a factor that consists of three or more means, and additional exploration of the differences among means is needed to provide specific information on which means are significantly different from each other5.

Post-hoc tests attempt to control the experimentwise error rate (usually alpha = 0.05) in the same way that the one-way ANOVA is used in place of multiple t-tests. Post-hoc tests are termed a posteriori tests - that is, tests performed after the event (the event in this case being a study). (Figure 45)
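The logic above can be sketched in a few lines: run the omnibus one-way ANOVA first, and only move on to a post-hoc procedure when it is significant. A minimal illustration with SciPy; the group data are invented for demonstration:

```python
import numpy as np
from scipy import stats

# Three synthetic groups; the third has a clearly shifted mean.
rng = np.random.default_rng(42)
group_a = rng.normal(5.0, 1.0, 30)
group_b = rng.normal(5.2, 1.0, 30)
group_c = rng.normal(6.5, 1.0, 30)

# Omnibus one-way ANOVA: is there any difference among the means?
f_stat, p_value = stats.f_oneway(group_a, group_b, group_c)
print(f"F = {f_stat:.2f}, p = {p_value:.4f}")

# Post-hoc tests are justified only after a significant omnibus result.
if p_value < 0.05:
    print("Significant omnibus F-test: proceed to a post-hoc test.")
else:
    print("No overall difference: do not run post-hoc tests.")
```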

5 pages.uoregon.edu/stevensj/posthoc.pdf

Figure 12.8 - Figure 45. Post Hoc test

There are a great number of different post-hoc tests available; however, you should run only one post-hoc test - do not run multiple post-hoc tests. For a one-way ANOVA, you will probably find that just one of four tests needs to be considered. If your data meet the assumption of homogeneity of variances, then use either Tukey's honestly significant difference (HSD) or Scheffé's post-hoc test.

Tukey's HSD test is often recommended by statisticians, as it is not as conservative as the Scheffé test (which means that you are more likely to detect differences, if they exist, with Tukey's HSD test). Note that if you use SPSS, Tukey's HSD test is simply referred to as 'Tukey' in the post-hoc multiple comparisons dialogue box.

If your data do not meet the homogeneity of variances assumption, then you should consider running either the Games-Howell or Dunnett's C post-hoc test. The Games-Howell test is generally recommended.
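Whether the homogeneity-of-variances assumption holds can itself be checked with Levene's test before choosing a post-hoc procedure. A sketch using `scipy.stats.levene` on invented data (note that SciPy does not ship Games-Howell; it is available in third-party packages such as pingouin):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
# Invented groups, with a deliberately larger spread in the third group.
g1 = rng.normal(10.0, 1.0, 25)
g2 = rng.normal(10.5, 1.0, 25)
g3 = rng.normal(11.0, 4.0, 25)

# Levene's test: H0 is that all group variances are equal.
stat, p = stats.levene(g1, g2, g3)
if p < 0.05:
    print("Variances differ: consider Games-Howell or Dunnett's C.")
else:
    print("Variances look homogeneous: Tukey's HSD or Scheffé is fine.")
```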

2.1. 12.2.1 In case of homogeneity of variance of the tested variables

Inspection of the source table shows that both the main effects and the interaction effect are significant. The gender effect can be interpreted directly since there are only two levels of the factor. Interpretation of either the Experience main effect or the Gender by Experience interaction is ambiguous, however, since there are multiple means in each effect. We will delay testing and interpretation of the interaction effect for a later handout. The concern now is how to determine which of the means for the four Experience groups (see table below) are significantly different from the others6.

1. Fisher's LSD (least significant difference): In statistics, a method for comparing treatment group means after the analysis of variance (ANOVA) null hypothesis - that the expected means of all treatment groups under study are equal - has been rejected using the ANOVA F-test. If the F-test fails to reject the null hypothesis, this procedure should not be used7.

2. Bonferroni Test: A type of multiple comparison test used in statistical analysis. When an experimenter performs enough tests, he or she will eventually end up with a result that shows statistical significance, even if there is none. If a particular test yields correct results 99% of the time, running 100 tests could lead to a false result somewhere in the mix. The Bonferroni test attempts to prevent data from incorrectly appearing to be statistically significant by lowering the alpha value8.

6 pages.uoregon.edu/stevensj/posthoc.pdf

7 Hayter, A. J. (1986): The Maximum Familywise Error Rate of Fisher's Least Significant Difference Test. Journal of the American Statistical Association.
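The Bonferroni adjustment can be illustrated directly: with m pairwise comparisons, each individual test is evaluated at alpha/m so that the familywise error rate stays near alpha. A sketch on invented data:

```python
from itertools import combinations

import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
groups = {
    "A": rng.normal(0.0, 1.0, 20),
    "B": rng.normal(0.1, 1.0, 20),
    "C": rng.normal(1.5, 1.0, 20),
}

alpha = 0.05
pairs = list(combinations(groups, 2))   # 3 pairwise comparisons
adjusted_alpha = alpha / len(pairs)     # Bonferroni: 0.05 / 3

for name1, name2 in pairs:
    t, p = stats.ttest_ind(groups[name1], groups[name2])
    verdict = "significant" if p < adjusted_alpha else "not significant"
    print(f"{name1} vs {name2}: p = {p:.4f} ({verdict})")
```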

3. Scheffé's Test: A statistical test that is used to make unplanned comparisons, rather than pre-planned comparisons, among group means in an analysis of variance (ANOVA) experiment. While Scheffé's test has the advantage of giving the experimenter the flexibility to test any comparisons that appear interesting, the drawback of this flexibility is that the test has very low statistical power9.

4. Tukey's test, also known as the Tukey range test, Tukey method, Tukey's honest significance test, Tukey's HSD (honestly significant difference) test, or the Tukey–Kramer method, is a single-step multiple comparison procedure and statistical test generally used in conjunction with an ANOVA to find which means are significantly different from one another. It compares all possible pairs of means, and is based on a studentized range distribution (q) (this distribution is similar to the distribution of t from the t-test)10.
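SciPy (1.8+) provides this procedure directly as `scipy.stats.tukey_hsd`, which reports pairwise p-values based on the studentized range distribution. A sketch with invented data:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
a = rng.normal(20.0, 2.0, 30)
b = rng.normal(21.0, 2.0, 30)
c = rng.normal(24.0, 2.0, 30)

# Tukey's HSD compares every pair of means in a single step.
result = stats.tukey_hsd(a, b, c)
print(result)         # table of mean differences and p-values
print(result.pvalue)  # 3x3 matrix of pairwise p-values
```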

A number of other post hoc procedures are available. There is a Tukey-Kramer procedure designed for the situation in which n-sizes are not equal. Brown-Forsythe's post hoc procedure is a modification of the Scheffé test for situations with heterogeneity of variance. Duncan's Multiple Range test and the Newman-Keuls test provide different critical difference values for particular comparisons of means depending on how adjacent the means are. Both tests have been criticized for not providing sufficient protection against alpha slippage and should probably be avoided. Further information on these tests and related issues in contrast or multiple comparison tests is available from Kirk (1982) or Winer, Brown, and Michels (1991)11.

Open the sample data (database 'one-way ANOVA') and click on the Post Hoc… button, then select the options indicated below (Figure 45). Click on Continue and then OK. The output includes two tables, one titled 'Post Hoc Tests' (Figure 46) and one 'Homogeneous Subsets' (Figure 47). The 'Post Hoc Tests' table, shown below, includes the results of a simultaneous set of tests of hypotheses of the form

1. H0: The two means are equal.
2. H1: The two means are not equal.

Also included in this table are 95% confidence intervals for each pair of means. Relevant rows for tests and intervals are shaded; all other rows are repetitions of these rows12.

Figure 12.9 - Figure 46. Post Hoc test / Output / Tukey

8 en.wikipedia.org/wiki/Bonferroni_correction

9 investment_terms.enacademic.com/12356/Scheffe's_Test

10 Lowry, Richard. One Way ANOVA – Independent Samples. Vassar.edu. Retrieved on December 4th, 2008

11 pages.uoregon.edu/stevensj/posthoc.pdf

12 uashome.alaska.edu/~cnhayjahans/Resources/SPSS/One%20Way%20ANOVA.pdf

The 'Homogeneous Subsets' table, shown below, groups the treatments into subsets within which the treatment means are considered not significantly different.

The estimated means for the two homogeneous subsets are shaded. This table is a handy summary of the major differences among the means. It organizes the means of the three groups into 'homogeneous subsets' - subsets of means that do not differ from each other at p < 0.05 go together, and subsets that do differ go into separate columns. Groups that do not show up in the same column are significantly different from each other at p < 0.05 according to the Tukey multiple comparison procedure. Notice how the "regular" group and the "fun" group show up in separate columns. This indicates that those groups are significantly different. The king-size group shows up in each column, indicating that it is not significantly different from either of the other two groups.
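The way such a table is assembled can be sketched from pairwise p-values alone: order the means, and let each group join every subset whose members it does not differ from significantly. The p-values and means below are invented to mirror the regular/king-size/fun example in the text:

```python
# Hypothetical Tukey pairwise p-values mirroring the example:
# "regular" vs "fun" differ; "king" differs from neither.
pvals = {("regular", "king"): 0.30, ("king", "fun"): 0.20, ("regular", "fun"): 0.01}
means = {"regular": 4.0, "king": 4.6, "fun": 5.3}   # illustrative only
alpha = 0.05

def differ(a, b):
    """True when the pair is significantly different at alpha."""
    return pvals.get((a, b), pvals.get((b, a))) < alpha

order = sorted(means, key=means.get)   # groups by ascending mean
subsets = []
for g in order:
    placed = False
    for s in subsets:
        if all(not differ(g, m) for m in s):
            s.append(g)
            placed = True
    if not placed:
        earlier = order[:order.index(g)]
        subsets.append([m for m in earlier if not differ(g, m)] + [g])

# "king" appears in both subsets; "regular" and "fun" are separated.
print(subsets)
```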

Figure 12.10 - Figure 47. Post Hoc test / Output / Homogeneous Subsets

3. 12.3 Multi-Way ANOVA

When more than one factor is present and the factors are crossed, a multifactor ANOVA is appropriate. Both main effects and interactions between the factors may be estimated13.

It differs from one-way analysis of variance in that it measures the effect of several independent variables on one dependent variable at the same time, and it also examines the interactions between the independent variables.
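In Python this corresponds to a two-way ANOVA with an interaction term, e.g. via statsmodels. The data frame below is invented; the variable names sex, qualification and affection follow the example used later in this chapter:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from statsmodels.stats.anova import anova_lm

rng = np.random.default_rng(5)
n = 40
df = pd.DataFrame({
    "sex": rng.choice(["male", "female"], n),
    "qualification": rng.choice(["primary", "secondary", "higher"], n),
})
# Invented outcome with a small additive effect of each factor.
df["affection"] = (
    rng.normal(3.0, 1.0, n)
    + (df["sex"] == "female") * 0.5
    + (df["qualification"] == "higher") * 0.8
)

# Main effects of both factors plus their interaction.
model = smf.ols("affection ~ C(sex) * C(qualification)", data=df).fit()
table = anova_lm(model, typ=2)
print(table)
```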

Let us examine interest in cultural and architectural sights in light of the tourist's sex and qualification.

The assumptions of the multi-way analysis of variance completely coincide with those of the one-way analysis of variance.

When you choose to analyse your data using a two-way ANOVA, part of the process involves checking to make sure that the data you want to analyse can actually be analysed using a two-way ANOVA. You need to do this because it is only appropriate to use a two-way ANOVA if your data 'passes' six assumptions that are required for a two-way ANOVA to give you a valid result. In practice, checking for these six assumptions means that you have a few more procedures to run through in SPSS when performing your analysis, and that you spend a little more time thinking about your data, but it is not a difficult task14.

Before we introduce you to these six assumptions, do not be surprised if, when analysing your own data using SPSS, one or more of these assumptions is violated (i.e., is not met).

13 www.statgraphics.com/analysis_of_variance.htm

14 statistics.laerd.com/spss-tutorials/two-way-anova-using-spss-statistics.php

Open the sample data (download: 'Post Hoc'15). Click Analyze / General Linear Model / Univariate... on the top menu as shown below. You will be presented with the 'Univariate' dialogue box (Figure 48).

Figure 12.11 - Figure 48. Analyze / General Linear Model / Univariate...

You need to transfer the dependent variable 'affection' into the Dependent Variable: box, and transfer both independent variables, 'sex' and 'qualification', into the Fixed Factor(s): box. You can do this by drag-and-dropping the variables into the respective boxes or by using the button. The result is shown below. (For this analysis, you will not need to worry about the Random Factor(s):, Covariate(s): or WLS Weight: boxes.)

Click on the Plots button. You will be presented with the ‘Univariate: Profile Plots‘ dialogue box (Figure 49).

Figure 12.12 - Figure 49. Analyze / General Linear Model / Univariate... / Plots

15 http://portal.agr.unideb.hu/oktatok/drvinczeszilvia/oktatas/oktatott_targyak/statisztika_kutatasmodszertan_index/index.html

Transfer the independent variable 'qualification' from the Factors: box into the Horizontal Axis: box, and transfer the 'sex' variable into the Separate Lines: box. You will be presented with the following screen. (Put the independent variable with the greater number of levels in the Horizontal Axis: box) (Figure 49). Click the button. You will see that 'sex*qualification' has been added to the Plots: box.
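The profile plot SPSS produces here is an interaction plot: one line per level of the second factor, with the cell means on the vertical axis. A rough equivalent can be drawn with statsmodels on invented data (the qualification levels are mapped to ordered numeric codes for the horizontal axis):

```python
import matplotlib
matplotlib.use("Agg")  # render off-screen, no display needed
import numpy as np
import pandas as pd
from statsmodels.graphics.factorplots import interaction_plot

rng = np.random.default_rng(9)
n = 60
df = pd.DataFrame({
    "qualification": rng.choice(["primary", "secondary", "higher"], n),
    "sex": rng.choice(["male", "female"], n),
})
df["affection"] = rng.normal(3.0, 1.0, n) + (df["qualification"] == "higher") * 0.7

# Factor with more levels on the horizontal axis; one line per sex.
codes = df["qualification"].map({"primary": 0, "secondary": 1, "higher": 2})
fig = interaction_plot(codes, df["sex"], df["affection"])
fig.savefig("profile_plot.png")
```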

Click the button. This will return you to the 'Univariate' dialogue box. Click the button. You will be presented with the 'Univariate: Post Hoc Multiple Comparisons for Observed...' dialogue box, as shown below. Transfer 'qualification' from the Factor(s): box to the Post Hoc Tests for: box. This will make the -Equal Variances Assumed- area become active (lose the 'grey sheen') and present you with some choices for which post-hoc test to use. For this example, we are going to select Tukey, which is a good, all-round post-hoc test. You only need to transfer independent variables that have more than two levels into the Post Hoc Tests for: box. This is why we do not transfer 'sex'. You will finish up with the following screen (Figure 50).

Figure 12.13 - Figure 50. Analyze / General Linear Model / Univariate... / Post Hoc...

Click the button to return to the 'Univariate' dialogue box. Click the button. This will present you with the 'Univariate: Options' dialogue box, as shown below. Transfer 'sex', 'qualification' and 'sex*qualification' from the Factor(s) and Factor Interactions: box into the Display Means for: box. In the -Display- area, tick the Descriptive Statistics option. You will be presented with the following screen (Figure 51).

Figure 12.14 - Figure 51. Analyze / General Linear Model / Univariate... / Options...

Click the button to return to the 'Univariate' dialogue box. Click the button to generate the output.

3.1. 12.3.1 Descriptives Table

The 'Descriptives' table is very useful because it provides the mean and standard deviation for the groups split by both independent variables. In addition, the table also provides 'Total' rows, which give the means and standard deviations for groups split by only one independent variable, or by none at all16 (Figure 52).
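The same descriptives can be reproduced outside SPSS with a grouped aggregation, e.g. in pandas. The tiny data frame below is invented:

```python
import pandas as pd

df = pd.DataFrame({
    "sex": ["male", "male", "female", "female", "male", "female"],
    "qualification": ["primary", "higher", "primary", "higher", "primary", "higher"],
    "affection": [3.1, 4.2, 3.8, 4.9, 2.9, 4.5],
})

# Mean and SD for groups split by both independent variables.
cells = df.groupby(["sex", "qualification"])["affection"].agg(["mean", "std", "count"])
print(cells)

# 'Total' rows: split by one independent variable only, or none at all.
by_sex = df.groupby("sex")["affection"].agg(["mean", "std", "count"])
grand = df["affection"].agg(["mean", "std", "count"])
print(by_sex)
print(grand)
```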

Figure 12.15 - Figure 52. Analyze / General Linear Model / Univariate... / Output:
