
2. Components of research design

2.1. The research question

2.1.1. Research questions of impact assessments

It is the research question of the investigation that distinguishes the genres of evaluation and impact assessment. While in the case of evaluation one establishes the value of a policy intervention against previously defined criteria, in the case of impact assessment the task is to provide a causal explanation of the expected impacts of a set of measures.

Impact assessments are analyses and explanations, based on empirical evidence, of the expected results and consequences of certain social-political interventions, policies and measures. Therefore the research question of an impact assessment always concerns the existence and intensity of causal relations. In other words, impact assessors must demonstrate that the observed changes can be attributed to the measures examined, or at least that these measures have contributed to them.

The definition of impact. In connection with impact assessment, social researchers have increasingly emphasised the requirement that impact assessments should apply the conceptual framework of causality as defined by the social sciences. 2, 3 A researcher is justified in claiming that a change that has occurred among the affected enterprises is the consequence of a certain measure only if he or she first clarifies what is regarded as a change and which key variables measure this change. Theoretically, an inference about the existence of an impact of an intervention can be made only by comparing two scenarios: (a) one in which the assessed measure was taken and (b) another in which the measure was not taken. 4

2 [Moksony 1999]

3 [Bartus et al. 2005]

4 The previous sentence applies only to ex post impact assessment. In the case of ex ante impact assessment the future tense must be used: the scenario (a) in which the measure will be taken is compared with the scenario (b) in which it will not be taken.

One of these scenarios cannot be observed, and it is therefore called the counterfactual scenario. In some cases the impact of an intervention is measurable: it is the difference between the values of a key outcome indicator expressing change under the two scenarios.
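The following minimal sketch illustrates this definition; all figures are hypothetical, and the counterfactual value is, in practice, never directly observable:

```python
# Hypothetical illustration of impact as the difference between the two scenarios.
# y_factual is observable; y_counterfactual is not, which is exactly the
# fundamental problem of causal inference discussed below.

y_factual = 112.0         # outcome indicator, measure taken (scenario a)
y_counterfactual = 100.0  # outcome indicator, measure not taken (scenario b, hypothetical)

impact = y_factual - y_counterfactual
print(f"Impact on the outcome indicator: {impact}")  # 12.0
```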

Probabilistic impacts. The notion of impact can also be defined in probabilistic settings in the following manner: a cause raises the probability of an event, and this increase in probability is the impact.

• In the framework of SME development policy this approach can be interpreted as follows: small enterprise development initiatives have a positive impact to the extent that their implementation raises the probability of certain previously defined aims being attained among SMEs of the target group.
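A hedged numerical sketch of this probabilistic definition, with invented counts for a hypothetical aim (e.g. entering an export market):

```python
# Probabilistic impact as an increase in probability (all counts invented).
attained_with_support = 46      # supported SMEs attaining the previously defined aim
n_supported = 200
attained_without_support = 30   # comparable unsupported SMEs attaining the aim
n_unsupported = 200

p_with = attained_with_support / n_supported          # estimate of P(aim | support) = 0.23
p_without = attained_without_support / n_unsupported  # estimate of P(aim | no support) = 0.15

impact = p_with - p_without  # the increase in probability is the impact
print(f"Estimated probabilistic impact: {impact:.2f}")  # 0.08
```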

Complex structures of causes and impacts. In more complex settings causes and impacts can build up interdependent and interwoven structures. One example of such a structure is the causal chain (the so-called "domino effect"). Another example is when a cause leads to the desired impact only if some other condition (i.e. another cause, which alone could not trigger the desired impact) is also met.

• An example of the "causal chain" structure is the chain of non-payment: each firm within a group of companies owes money to the previous one, so the bankruptcy of one company may trigger the failure of all the companies that follow it in the line of non-payment.

• An example of a more complex causal structure in SME development is the joint effect of a soft loan scheme and a credit guarantee scheme. The establishment of a soft loan scheme for start-up companies can facilitate the financing of these companies only if an affordable credit guarantee scheme is established as well.
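As a rough illustration only, the two structures above can be rendered as a toy model; the dependency logic below is an assumption made for the sketch, not a description of any actual scheme:

```python
# Toy model of the two causal structures above (hypothetical logic).

def chain_failures(depends_on_previous):
    """Causal chain ('domino effect'): each firm fails if the previous firm
    in the line of non-payment failed and it depends on that firm's payments."""
    failed = [True]  # the first firm in the chain goes bankrupt
    for depends in depends_on_previous:
        failed.append(failed[-1] and depends)
    return failed

def financing_improves(soft_loan, credit_guarantee):
    """Joint causes: the soft loan scheme helps start-ups only if an
    affordable credit guarantee scheme is established as well."""
    return soft_loan and credit_guarantee

print(chain_failures([True, True, False, True]))  # [True, True, True, False, False]
print(financing_improves(True, False))            # False: the loan alone is not enough
```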

The fundamental problem of impact assessment. Every impact assessment must cope with the basic problem of causal inference. Since the outcomes under the counterfactual (hypothetical) scenario are not observable, the comparison of the outcomes under the two scenarios cannot rest on direct empirical evidence. This leads to the so-called fundamental problem of causal inference: causality is not directly observable.

However, a range of impact assessment methods has been developed to draw indirect inferences about causality.

Examples of impact assessment research questions. The basic research question of impact assessments appears explicitly in the methodological apparatus of many impact assessments conducted on small business development measures.

• Research designs involving control groups can address research questions concerning the difference between the changes observed in the beneficiary group and in the control group (a numerical sketch follows after these examples). Example:

• Did the competitiveness of subsidy-receiving beneficiary companies improve in comparison with comparable applicant companies that did not receive subsidies?

• In the case of research designs where no such control group can be constructed, the members of the target group can be asked directly about the difference between the observed and the counterfactual scenario. Examples:

• What would have happened to your firm if you had not received the subsidy?

• What difference would it have made to your company if you had received the required support?

• You have hired two new employees since receiving the subsidy: did you employ them as a consequence of the subsidy?

• The government plans to introduce this regulation: will it be advantageous or disadvantageous to your company? Why?
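For the control-group design mentioned above, one common estimation strategy is difference-in-differences: the change in the control group serves as a proxy for the unobservable counterfactual change. A minimal sketch with invented figures, where "competitiveness" stands for any previously defined key outcome indicator:

```python
# Difference-in-differences sketch (all values hypothetical).

beneficiaries_before, beneficiaries_after = 50.0, 62.0  # mean indicator, subsidised firms
controls_before, controls_after = 51.0, 55.0            # mean indicator, comparable applicants

change_beneficiaries = beneficiaries_after - beneficiaries_before  # 12.0
change_controls = controls_after - controls_before                 # 4.0 (counterfactual proxy)

estimated_impact = change_beneficiaries - change_controls
print(f"Estimated impact of the subsidy: {estimated_impact}")  # 8.0
```

Note that difference-in-differences is only one of the indirect inference methods referred to above; its validity rests on the assumption that, without the subsidy, the beneficiary group would have changed as the control group did.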

2.1.2. Research questions of evaluations

Evaluation is a systematic determination of the value, merit and worth of interventions by using previously defined criteria or standards. Therefore the basic research question to be answered by evaluators is whether the examined policy, programme or project is (or was, or will turn out to be) good or bad according to some criteria of success. Evaluators are expected to give their opinion about these interventions, and in most evaluation cultures this opinion should be summarised in numerical ratings. Evaluators should judge the success or failure of the intervention, separating its effects from those of other factors, and should find ways to improve the programme through a modification of current operations.

Evaluations are supposed to deliver expert opinions and the corresponding ratings about the success or failure of interventions, about the positive and negative characteristics of laws and enforcing organisations, and about the advantages and disadvantages of aid delivery regimes.

Evaluations are prepared not only in order to establish the impact of policies and programmes. Such studies inform decision makers and taxpayers about the allocation of public funds. Evaluations may facilitate improvements in the design and administration of programmes if these reports are fed back into current policy making and used to stimulate informed debate.

Impact assessments as part of evaluations. Evaluations of policies, programmes or projects frequently reveal causal relations between interventions and outcomes. However, evaluators are expected not only to name the consequences and assess the extent of the expected impacts but also to judge whether these impacts are satisfactory. Evaluation reports should answer not only the question of whether interventions or treatments work, but also a wide range of questions about when, how and under what circumstances the impacts occur, and what lessons can be learned from the particular intervention. Besides naming, assessing and evaluating impacts, evaluations must also give an account of the relevance, efficiency, effectiveness and sustainability of the examined intervention.

In order to account for this complexity, the genre of impact evaluation or impact assessment has been complemented by the methods of process evaluation. While the methods of impact evaluation are identical with those of impact assessment, process evaluation is a form of monitoring designed to determine whether the measures under the policy have been delivered as intended to the targeted recipients. Process evaluation (also known as implementation assessment) can be based on an analysis of the attitudes and behaviour of beneficiaries as they interact with the donor organisation or with the intermediary organisation during their involvement in the programme. 5

Process evaluations. In many cases impacts are not the primary focus of the evaluation project; rather, the major aim of the researchers is to assess the design and the process of aid delivery, e.g. the administrative and client service work of the intermediary institutions on behalf of the beneficiaries.

Examples of research questions of evaluations. The basic research question of evaluations appears in the methodological apparatus of many evaluations devoted to small business development measures. These questions are directly or indirectly concerned with the evaluation criteria against which the measure has to be assessed.

Relevance. You have used the subsidy to finance a new technology which you have recently introduced in your firm: do you consider this technology an innovation?

Efficiency. Why were the results of your project delivered with such a long delay?

Effectiveness. Have the training materials of the subsidised course been published on the Internet?

Impact. Who else is going to profit from the subsidised project besides your company?

Sustainability. The development of your company's Enterprise Resource Planning (ERP) IT system has been co-financed by the donor organisation. What contractual guarantees has your company received from the contracted software developer?
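Since, as noted above, most evaluation cultures expect the evaluator's opinion to be summarised in numerical ratings, a hypothetical illustration of such a summary follows; the criteria mirror the examples above, while the scale and the scores are invented:

```python
# Hypothetical numerical ratings against the five evaluation criteria (1-5 scale assumed).
ratings = {
    "relevance": 4,
    "efficiency": 2,       # e.g. penalised for the delayed delivery of results
    "effectiveness": 3,
    "impact": 4,
    "sustainability": 3,
}

overall = sum(ratings.values()) / len(ratings)
print(f"Overall rating: {overall:.1f} / 5")  # 3.2
```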

The fundamental problem of evaluation is that it inevitably leads to normative statements. Evaluation judgments explicitly or implicitly involve statements about which aims, strategies, project designs or outcomes are good or bad, and which operations are right or wrong. 6 Since project and programme evaluation is regarded as an exercise in applied social science, and evaluators arrive at value statements, it raises the issue of values in social science. 7 The main question connected with value statements is to what extent they can be verified or falsified. In the case of project and programme evaluation, however, the findings are expressed in the form of so-called instrumental statements.

5 [Purdon – Lessof – Woodfield – Bryson 2001]

6 [House 1999]

7 [Szántó 1992]

Instrumental statements qualify certain lines of action according to whether or not they are suitable for reaching some previously defined aims. Such statements are combinations of norms and scientific statements and can be supported (if not proven) with the help of empirical data and the application of a valid inference mechanism.

The opinions, conclusions and recommendations of evaluators influence decisions either (a) by affirming and encouraging certain procedures or outcomes for policies, regulations, projects and programmes, or (b) by defining other procedures or outcomes as anomalies, discouraging policy makers from taking these directions. Although evaluators increasingly use a wide range of data and a large apparatus of descriptive models and explanations to justify their judgments, a certain risk of arriving at subjectively or even emotionally influenced opinions remains. Evaluation scores express normative judgments, and it is nearly impossible to create universal, rationally applicable standards for all domains of evaluated activities. In principle there is no way to attain a perfectly rational falsification or proof of evaluation findings.

2.2. The underlying theory: impact mechanism of SME