
Comparing CBSEM and PLS

In document The growth of SMEs in the ICT sector (pp. 161-166)

Henseler et al. (2009, p. 296) define covariance-based SEM (CBSEM) and variance-based SEM (PLS) as being “complementary rather than competitive” modelling techniques. CBSEM and PLS are two techniques used to pursue different research objectives (Chin & Newsted 1999). CBSEM is better suited to theory testing and development, whereas PLS path modelling is more suited to predictive modelling in research situations of “high complexity but low theoretical information” (Henseler et al. 2009, p. 296). Reisinger and Mavondo (2006) also consider CBSEM to have potential in both theory testing and development. PLS can also be used for theory confirmation, as well as for testing the existence of relationships between constructs (Chin 1998; Henseler et al. 2009). Table 7.1 summarises the most important distinctions between PLS and CBSEM.

Table 7.1 Comparison of PLS and CBSEM

Criterion | PLS | CBSEM
Objective | Prediction-oriented | Parameter-oriented
Approach | Variance-based | Covariance-based
Assumptions | Predictor specification (nonparametric) | Typically multivariate normal distribution and independent observations (parametric)
Parameter estimates | Consistent as indicators and sample size increase (i.e., consistency at large) | Consistent
Latent variable (LV) scores | Explicitly estimated | Indeterminate
Epistemic relationship between LVs and their measures | Can be modelled in either formative or reflective mode | Typically only with reflective indicators
Implications | Optimal prediction accuracy | Optimal parameter accuracy
Model complexity | Large complexity (e.g., 100 constructs and 1,000 indicators) | Small to moderate complexity (e.g., fewer than 100 indicators)
Sample size | Power analysis based on the portion of the model with the largest number of predictors; minimal recommendations range from 30 to 100 cases | Ideally based on power analysis of the specific model; minimal recommendations range from 200 to 800 cases

Source: Chin & Newsted (1999, p. 314) and Chin (2010)

With reference to Table 7.1, the general objectives of the research project need to be examined to inform the choice between the CBSEM and PLS methods. In order to respond to the research questions, the structural model needs to be validated. Both modelling techniques offer the opportunity for model validation through a specific set of criteria. However, due to the nature of the questions, a certain causality can be identified between the dependent and the independent constructs.

With regard to the set of assumptions on the data, applying PLS makes it easier to fulfil the modelling requirements, as it poses no parametric assumptions. This potentially makes using PLS a more flexible choice, especially when considering the uncertainties of the data collection.
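Whether the parametric assumptions behind CBSEM actually hold can be checked empirically before committing to a technique. The following is a minimal sketch of such a check, assuming the indicator data are available as a NumPy array; the data here are simulated purely for illustration and are not the study's data:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Illustrative indicator matrix: 120 cases x 4 indicators.
# The last indicator is deliberately skewed to mimic survey data.
indicators = rng.normal(size=(120, 4))
indicators[:, 3] = rng.exponential(size=120)

# Shapiro-Wilk test of univariate normality per indicator;
# a low p-value flags a violation of the parametric assumption.
for j in range(indicators.shape[1]):
    w, p = stats.shapiro(indicators[:, j])
    print(f"indicator {j}: W={w:.3f}, p={p:.4f}")
```

If such tests reject normality for several indicators, the nonparametric predictor-specification assumption of PLS becomes the safer fit for the data.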

The latent variable scores are important from the perspective of the research, as they are required to analyse the firm life-cycle perspective. CBSEM does not provide the latent variable scores, so applying PLS is clearly favourable in implementing this research perspective. The appropriateness of this selection is confirmed when the difficulty of establishing measurement equivalence at the second-order construct level using the CBSEM modelling technique is considered (Cheung 2008). The comparison of simple averages across groups may be misleading (Cheung 2008); the weighted latent variable scores calculated by PLS therefore allow a better assessment of model fit for the combined population.
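A PLS latent variable score is, in essence, a standardised weighted sum of a construct's indicators. The following numerical sketch illustrates this; the indicator data and outer weights are invented for illustration, since a real PLS run estimates the outer weights iteratively:

```python
import numpy as np

rng = np.random.default_rng(1)

# Illustrative block of 3 indicators for one construct (100 cases).
X = rng.normal(loc=4.0, scale=1.0, size=(100, 3))

# Standardise each indicator before weighting.
Z = (X - X.mean(axis=0)) / X.std(axis=0)

# Hypothetical outer weights (a real PLS algorithm estimates these).
w = np.array([0.5, 0.3, 0.2])

# Latent variable score per case: weighted sum of standardised
# indicators, re-standardised so the LV score has unit variance.
lv = Z @ w
lv = (lv - lv.mean()) / lv.std()
print(lv.shape)  # (100,)
```

These per-case scores are what make group comparisons and life-cycle-stage analyses possible in PLS, whereas CBSEM leaves them indeterminate.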

Model complexity needs to be assessed in order to enable an informed decision to be made between the modelling techniques. With potentially five second-order and 22 first-order constructs incorporating 94 manifest variables, the model can be considered relatively complex, although it does not reach the high complexity level defined in Table 7.1. This makes both CBSEM and PLS applicable.

A further problem is the correlation between factors (first-order constructs) when applying a higher-order construct structure, as correlation between the lower-order construct scores is required to establish the reliability and validity of the higher-order constructs (McGartland Rubio, Berg-Weger & Tebb 2001). Allowing first-order construct scores to correlate in CBSEM results in the correlation of the error terms (McGartland Rubio et al. 2001). Correlated error terms, however, are not advised, as they may produce misleading fit (Bollen 1989). This does not mean that higher-order constructs cannot be used in CBSEM, but simply that they are only applicable together with oblique factor rotation in confirmatory factor analysis (CFA) at the level of the first-order constructs.

The research is conducted from a firm life-cycle perspective, aiming at fitting a uniform model on populations from two countries. In order to assess the structural model for different sub-sets of the sample (Australia, Hungary and particular firm life-cycle stages), the sample size requirement of the modelling technique applied becomes a critical parameter. As PLS is applicable to smaller sample sizes as well (although at the cost of reduced statistical power), applying this modelling technique potentially allows comparative analysis of the sub-samples within the framework of the same structural model.
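The sample-size heuristic referenced in Table 7.1 — basing the requirement on the portion of the model with the largest number of predictors — is commonly operationalised as the "ten times rule". A sketch of that heuristic follows; the construct counts are illustrative, not those of the actual model:

```python
# "Ten times rule": the minimum sample size is ten times the largest
# number of predictors (incoming structural paths) pointing at any
# single construct in the model.

def pls_min_sample_size(predictors_per_construct, multiplier=10):
    """Minimum n = multiplier x the largest predictor count."""
    return multiplier * max(predictors_per_construct)

# Hypothetical model: three endogenous constructs with 2, 4 and 5
# incoming paths respectively.
print(pls_min_sample_size([2, 4, 5]))  # 50
```

The heuristic is a lower bound only; a power analysis of the specific sub-model remains the more defensible basis for the final sample-size decision.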

In summary, although both PLS and CBSEM modelling techniques are applicable for the purposes of this project, in some respects PLS provides a better solution and a more flexible framework for validating the model and analysing the data to answer different aspects of the research questions.

Modelling strategy choice

Three distinct modelling approaches can be taken when applying SEM: confirmatory, model developmental and alternative model testing (Reisinger & Mavondo 2006). Hair et al. (2006) refer to these as modelling strategies: confirmatory, competing models, and model development.

While a confirmatory modelling strategy aims at assessing the fit of a particular model, the other two aim at arriving at an adjusted or alternative model that better fits the data. The main difference between the latter two is that the competing (alternative) models approach starts by assessing the fit of pre-defined model variants to the data, whereas the developmental strategy gradually adjusts the starting model in an iterative manner (Hair et al. 2006; Reisinger & Mavondo 2006). Both PLS and CBSEM require firm conceptual support, and are thus equally applicable modelling techniques for every modelling strategy.

In order to answer the research questions, given that there is a firm conceptual foundation to the firm growth phenomenon, a model confirmatory strategy is most appropriate. This means that the applicability of a particular model structure can be tested using the data collected on the population. The model confirmatory strategy will be augmented by the investigation of the mediation and moderation effects hypothesised in this research.
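Moderation effects of the kind hypothesised here are conventionally tested by adding a product (interaction) term to the structural regression. The following is a minimal ordinary-least-squares sketch with synthetic data, using only NumPy; in the actual analysis such effects are assessed within the PLS framework, so this merely illustrates the logic of the interaction term:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 200

# Synthetic predictor x, moderator m, and an outcome y whose slope
# on x depends on m (i.e., a true moderation effect of size 0.4).
x = rng.normal(size=n)
m = rng.normal(size=n)
y = 1.0 + 0.5 * x + 0.3 * m + 0.4 * x * m + rng.normal(scale=0.5, size=n)

# Design matrix: intercept, main effects, and the interaction term.
X = np.column_stack([np.ones(n), x, m, x * m])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)

# A non-negligible interaction coefficient (beta[3]) signals
# moderation; the estimates should lie near the generating values.
print([round(b, 2) for b in beta])
```

Mediation is tested analogously, by estimating the indirect path through the mediator and comparing it with the direct effect.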

7.2.2 Assumptions and Requirements of Multivariate Data Analysis

This section discusses statistical issues related to modelling. The question of scaling is addressed, to demonstrate that indicators measured on Likert-type scales can be employed in SEM modelling, as long as a multivariate measurement approach is followed. The specific requirements PLS places on the data are also discussed in detail.

Indicator scaling

The necessity of data being measured on interval or ratio scales derives from the requirements of parametric methods; SEM modelling techniques therefore require indicators measured on an interval scale. This is a problem, as the Likert response format provides data measured on an ordinal scale. Whether or not ordinal data measured on a Likert-type scale are appropriate for SEM analysis thus needs to be considered. Specialists in quantitative methodology, as well as applied researchers from different disciplines, have investigated this issue. Carifio and Perla (2007) discuss the confusion in terminology and application, and state that the Likert response format and Likert scale variables are substantially different. “The Likert response format is only a problem … [if researchers analyse] … each individual item on a scale or questionnaire separately.” In fact, “a single item is not a scale in the sense of a measurement scale” (Carifio & Perla 2007, p. 110). They therefore conclude that variables measured on a Likert-type scale can be treated as interval scale data for the purposes of modelling.

Empirical evidence has also been presented indicating that, for instance, F-tests are fairly robust against violation of the interval data assumption for 5- to 7-point Likert-type scales (Carifio & Perla 2007). Barrett (2010) investigated the difference between parametric and non-parametric correlation values in ordinal scaled data, and found little deviation from true correlation scores when an ordinal scale consists of five or more levels. In conclusion, Carifio and Perla (2007) state that at the scale level – if the measure is constructed appropriately (Lyons 1998) – it is acceptable to treat 5- to 7-point Likert-type scale questions as interval scale variables for parametric testing. Furthermore, several studies show that it is acceptable to treat Likert scale responses, or the sum of these, as interval level data and to analyse them univariately and multivariately (Carifio & Perla 2008). Even promoters of the ordinal interpretation of Likert scale data acknowledge that parametric statistics are widely used to analyse this type of data (Gob, McCollin & Ramalhoto 2007; Liu & Agresti 2005). Scaling methodology books, such as Dunn-Rankin (2004), also prescribe parametric treatment for summated Likert scale data. Thus it is appropriate to apply parametric methods to data measured on a five-point Likert scale format in this study.
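Barrett's (2010) observation — that parametric and non-parametric correlations diverge little once an ordinal scale has five or more levels — is easy to illustrate numerically. The following sketch uses synthetic 5-point Likert data; the generating process is invented for illustration only:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
n = 500

# Latent correlated continuous scores (true correlation 0.6)...
latent = rng.multivariate_normal([0, 0], [[1, 0.6], [0.6, 1]], size=n)

# ...discretised onto a 5-point Likert response format (values 1..5).
bins = [-1.5, -0.5, 0.5, 1.5]
a = np.digitize(latent[:, 0], bins) + 1
b = np.digitize(latent[:, 1], bins) + 1

# Parametric (Pearson) vs non-parametric (Spearman) correlation.
pearson, _ = stats.pearsonr(a, b)
spearman, _ = stats.spearmanr(a, b)
print(round(pearson, 3), round(spearman, 3))
```

For 5-point scales the two coefficients are typically very close, which is the empirical basis for treating such data parametrically at the scale level.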
