
DISSERTATION PROPOSAL

Alternative Assessment Methods in English as a Foreign Language and in English Medium Content Classes in Hungarian Public Secondary Schools

Márta Barbarics

Supervisor: Ildikó Lázár, PhD

Language Pedagogy PhD Program

ELTE, Budapest 2019


Table of contents

1. Introduction
2. Literature Review
2.1 Definitions of assessment types
2.2 A review of rules and regulations
2.3 Empirical background
3. Research niche
4. Research questions
5. Methods
5.1 Participants
5.2 Research instruments
5.3 Procedures and quality control
5.4 Methods of data analysis
5.5 Summary table of research questions and methods
6. Ethical and legal considerations
7. Expected outcomes and limitations
8. Preliminary results
9. Schedule
References
Appendix A
Appendix B
Appendix C

1. Introduction

Assessment plays a major role in teaching and learning processes. According to Hungarian rules and regulations, teachers are obliged to evaluate students’ knowledge, behaviour, and diligence in the form of grades on a scale from one to five; as a consequence, assessment is based mainly on these numerical values. These grades have a substantial impact on students’ lives, as they determine enrolment possibilities at all levels of education. It is therefore no surprise that stakeholders (schools, parents, and students) place considerable emphasis on grades.

In theory, however, school assessment should also serve the development of students. As there are no guidelines or output requirements, whether assessment serves students’ development depends entirely on teachers’ assessment practices. Only a few studies on assessment in Hungary have focused on ways that aim to fulfil roles other than grade giving (Hubai & Lázár, 2018). Therefore, the proposed research aims to gain insight into the use of alternative assessment methods in English as a Foreign Language and in English Medium Content Classes in public secondary education in Hungary.

The literature review presents the theoretical background necessary to understand the research, including definitions of assessment types and a review of the rules and regulations concerning assessment in Hungary. It ends with a description of empirical research on alternative assessment. After the research niche is presented, the research questions are stated, followed by a description of the methods to be used to answer them. The proposal then details ethical and legal considerations, expected outcomes and limitations, and the results of my earlier research related to the topic. It ends with the intended schedule, references, and the appendices.

2. Literature Review

2.1 Definitions of assessment types

Assessment “denotes any appraisal (or judgment, or evaluation) of a student’s work or performance” (Sadler, 1989, p. 120). There are several categorizations of the types of assessment.

One is the formative and summative distinction first used by Scriven (1966). Although he argues for the primary importance of a final evaluation of a project or a person, which he defines as summative assessment, he also acknowledges that it is necessary to evaluate the process of development, which he calls formative assessment. There is also diagnostic assessment, which analyses a situation in order to gather detailed information before making a pedagogical decision (Vidákovich, 1990). The three types of assessment have different goals: diagnostic assessment aims to diagnose a situation in order to make informed decisions about course content; formative assessment aims to keep track of students’ progress and identify ways of supporting it; and summative assessment aims to identify overall levels of achievement and measure students’ results against them (Rea-Dickins, 2000).

The different purposes of assessment are also emphasized in the categories of assessment for learning, assessment as learning, and assessment of learning (Earl, 2006). The goal of assessment for learning is to support students’ learning (Black et al., 2004; Leahy et al., 2005; Wiliam et al., 2004; Wiliam, 2011). Assessment as learning provides opportunities for students to monitor and critically reflect on their own learning processes, while assessment of learning compares students’ proficiency to curriculum learning outcomes (Earl, 2006). Comparing the two categorizations above, formative assessment and assessment for learning share the same goals, while summative assessment and assessment of learning also have the same aims.

Other categorizations are based on mode (oral or written), intention (formal or informal), purpose (formative or summative), interpretation (norm-referenced or criterion-referenced), and administration (classroom-based or large-scale assessment) (Mihai, 2010). It is also important to note that in some cases assessment is used as a synonym for testing (Cohen, 2001), measurement, or evaluation (Tran, 2012). According to Brown and Abeywickrama (2010), evaluation involves the interpretation of testing results to make decisions; for example, “if a student achieves a score of 75 percent (measurement) on a final classroom examination, he or she may be told that the score resulted in a failure (evaluation) to pass the course” (p. 5).

According to Tsagari (2004), alternative assessment appeared as an answer to the following concerns about language testing. High-stakes, standardized tests affect the school curriculum by making teachers concentrate only on those subjects and skills that are included in the examinations, restricting teachers’ methods to exam preparation practices, and encouraging students “to adopt ‘surface’ approaches to learning as opposed to ‘deep’ approaches” (p. 3). On a psychological level, students take the role of “passive recipients of knowledge and their needs and intentions are generally ignored [which has] detrimental consequences on students’ intrinsic motivation, self-confidence, effort, interest and involvement [moreover, these processes] induce negative feelings […] such as anxiety, boredom, worry and fear” (Tsagari, 2004, p. 4). Kohn (2011) argues that even the presence of grades decreases students’ interest in whatever they are learning, increases preferences for easier tasks, and negatively affects the quality of students’ thinking. Hamayan (1995) defines alternative assessment as “procedures and techniques which can be used within the context of instruction and can be easily incorporated into the daily activities of the school or classroom” (p. 213) and contrasts it with standardized testing, stating that alternative assessment does not provide a comparison of an individual to a larger group. Kohonen (1997) emphasizes that assessment reflects what is valued in education; using the term authentic assessment, his definition is the following: “forms of assessment that reflect student learning, achievement, motivation and attitudes on instructionally-relevant classroom activities [...] Its results can be used to improve instruction, based on the knowledge of learner progress” (p. 13). Alderson and Banerjee (2001) characterize alternative assessment as “procedures which are less formal than traditional testing, which are gathered over a period of time [...], which are usually formative rather than summative in function, are often low-stakes in terms of consequences, and are claimed to have beneficial washback effects” (p. 228). After describing the Hungarian public educational context in connection with assessment, I present my definition of alternative assessment.

2.2 A review of rules and regulations

The official rules and regulations of student assessment in Hungary are described in the current Act on National Public Education (Act CXC, 2011). According to the chapter entitled Fulfilment of Student Obligations, teachers should “regularly evaluate the student’s performance and progress in form of grades throughout the teaching year and rate it in forms of marks at the end of the term and the teaching year” (Act CXC, 2011, Section 54/1). The act continues to detail the grading system by distinguishing three categories:

Grades and marks should be as follows:

a) evaluation and assessment of the student’s knowledge: excellent (5), good (4), average (3), satisfactory (2), unsatisfactory (1),

b) evaluation and assessment of the student’s behaviour: exemplary (5), good (4), variable (3), bad (2),

c) evaluation and assessment of the student’s diligence: exemplary (5), good (4), variable (3), negligent (2). (Act CXC, 2011, Section 54/2)

Therefore, the three categories of knowledge, behaviour, and diligence are all assessed on a scale from one to five. Admission to secondary and tertiary education is heavily dependent on grades.

The Ministerial Decree 20/2012 (VIII. 31.) of the Ministry of Human Capacities on the operation of public education institutions states that secondary schools can decide whom to accept based on the given student’s previous academic record, meaning the end-of-term grades, and the results of a centralized written exam in mathematics and Hungarian language. According to Act CCIV of 2011 on National Higher Education, admission to tertiary education, similarly to secondary education, depends on the earlier school performance of the applicants, meaning the end-of-term grades, and the results of the centralized secondary school leaving examination (Act CCIV, 2011, Article 40). Although in both cases “institutions may make admission subject to the fulfilment of reasonable and non-discriminatory requirements” (Act CCIV, 2011, Article 40), the decision is based mainly on students’ grades and exam results. The OECD review of evaluation and assessment in education also notes that “Hungary, along with many other European countries, relies primarily on numerical marks for formal reporting” (2013, p. 204). As a consequence, grades and exam preparation play an important role for all stakeholders (schools, parents, and students).

On the other hand, the act on education begins with a section entitled “Purpose and Principles of the Act”, stating that schools should provide “comprehensive evaluation adjusted to the requirements and ensuring the development of children / students” (Act CXC, 2011, Section 1/1); that is, according to the purpose and principles, assessment should serve the development of students. Moreover, Section 64 deals with the promotion system of teachers, which is based on the eight teacher competences specified in Section 7 of Governmental Decree 326/2013 (VIII. 30.). The third competence, “providing learning support”, the fourth, “developing students’ personality, individual treatment […]”, and the sixth, “ongoing evaluation and analysis of educational processes and the development of students’ personality”, are all in line with the purpose and principles of the educational act, stating that teachers should develop students’ personalities with the help of evaluating and analysing educational processes. Although Section 62 (Act CXC, 2011) mentions that assessment should include explanation in the form of oral or written feedback, as Hubai and Lázár (2018) point out, without any guidelines or examples regarding its execution, teachers have no formal rules about how they should explain the grades they give to their students.

Taking into consideration Sadler’s (1989) definition of assessment in general, “any appraisal (or judgment, or evaluation) of a student’s work or performance” (p. 120), and the Hungarian public educational context, where teachers should evaluate students’ knowledge, behaviour, and diligence in the form of grades from one to five and also develop students with the help of assessment, my own definition of alternative assessment is as follows: the term “alternative assessment” is used in this study to refer to any appraisal, judgment, or evaluation of a student’s work or performance that contains different elements in addition to or instead of grades, with the purpose of supporting students’ development.

2.3 Empirical background

My definition of alternative assessment has two components. The first is that the appraisal, judgment, or evaluation of a student’s work or performance should contain different elements in addition to or instead of grades. According to this first component, empirical research focusing on the following types of assessment is relevant (for containing elements other than grades): formative assessment, assessment for learning, and, within the “movement of alternative assessment” (Alderson & Banerjee, 2001, p. 228), authentic assessment, performance assessment, continuous assessment, on-going assessment, informal assessment, descriptive assessment, direct assessment, dynamic assessment, instructional assessment, responsive evaluation, complementary assessment, portfolio assessment, situated or contextualised assessment, and assessment by exhibition, as collected by Tsagari (2004). The list can be continued with sustainable assessment (Boud, 2000) and assessment based on gamification (Hubai & Lázár, 2018). Moreover, there is also extensive empirical research on the effectiveness of particular pedagogical methods used to carry out alternative assessment. Some examples listed by Tsagari (2004) are the following: conferences, debates, demonstrations, diaries or journals, dramatizations, exhibitions, games, observations, peer-assessment, portfolios, projects, self-assessment, story retelling, and think-alouds. Instead of recording grades, Hamayan (1995) suggests recording alternative assessment information with the help of the following tools: anecdotal records, checklists, learner profiles, progress cards, questionnaires, and rating scales. As they are beyond the scope of the present paper, they will not be discussed in further detail.

The second component of my definition of alternative assessment is that it is carried out with the purpose of supporting students’ development. There is empirical research aiming to identify how assessment can fulfil this purpose. Wiliam (2011), in a meta-analysis of almost 800 studies, arrives at the conclusion that “for assessment to support learning, it must provide guidance about the next steps in instruction and must be provided in a way that encourages the learner to direct energy towards growth, rather than well-being” (p. 13). Nicol and Macfarlane-Dick (2006) believe that the goal of assessment is to “help students take control of their own learning” (p. 199), and in order to achieve that goal through good feedback practice, they analysed research on feedback in higher education. They established the following seven principles, supporting each of them with empirical research.

Good feedback practice:

1. helps clarify what good performance is (goals, criteria, expected standards);

2. facilitates the development of self-assessment (reflection) in learning;

3. delivers high quality information to students about their learning;

4. encourages teacher and peer dialogue around learning;

5. encourages positive motivational beliefs and self-esteem;

6. provides opportunities to close the gap between current and desired performance;

7. provides information to teachers that can be used to help shape teaching. (p. 205)

Finally, research conducted by Wiliam, Lee, Harrison, and Black (2004) is relevant from two aspects. The first is that they address a criticism levelled at alternative assessment: although “increased use of formative assessment (or assessment for learning) leads to higher quality learning, it is often claimed that the pressure in schools to improve the results achieved by students in externally-set tests and examinations precludes its use” (p. 49). For this reason, their study investigates the test results of students who are taught by teachers developing alternative assessment practices. The second aspect that makes this empirical research relevant is that it describes the process of developing alternative assessment from the teachers’ point of view.

A total of 24 teachers took part in a half-year training period in formative assessment, developing their own plans, which were then put into action with selected classes. Comparison groups were established for each class: “either an equivalent class taught in the previous year by the same teacher, or a parallel class taught by another teacher. The mean effect size in favour of the intervention was 0.32” (Wiliam et al., 2004, p. 49). The researchers describe the limitations of the study, for example that the “comparisons are not equally robust” (p. 62) or that positive effects in comparisons of classes taught by different teachers might be caused by the personality of the teachers. Despite these limitations, they believe that they have shown that teachers do not “have to choose between teaching well and getting good results” (Wiliam et al., 2004, p. 64).

The six-month training period of the teachers had two main components: sessions during which teachers were introduced to the principles of formative assessment and developed their own plans for applying it, as well as class observations of teachers already using formative assessment practices, after which they had the chance to discuss their ideas. Although the article does not detail which basic principles the teachers were made to acquire, Leahy, Lyon, Thompson, and Wiliam (2005) refer to this article as an example of implementing their non-negotiable principles, so it can be assumed that teachers based their action plans on the following principles: “Clarifying and sharing learning intentions and criteria for success. Engineering effective classroom discussions, questions, and learning tasks. Providing feedback that moves learners forward. Activating students as the owners of their own learning. Activating students as instructional resources for one another” (p. 20). Leahy et al. (2005) detail this type of feedback, which moves learners forward, by saying that “to be effective, feedback needs to cause thinking. Grades don't do that. Scores don't do that. And comments like ‘Good job’ don't do that either. What does cause thinking is a comment that addresses what the student needs to do to improve, linked to rubrics where appropriate” (p. 20).

Teacher autonomy played an important role in the research of Wiliam et al. (2004): teachers had to develop their own action plans for implementing the principles of formative assessment. Wiliam et al. (2004) conclude that “for the vast majority of our teachers, involvement in the project has not just spread to all their classes, but has fundamentally altered their views of themselves as professionals” (p. 62). This change on the teachers’ part and the positive effect sizes in students’ results imply that supporting teachers in developing their own alternative assessment practices seems to be effective.

3. Research niche

In Hungarian public schools, teachers are required to assess students’ knowledge, behaviour, and diligence in the form of grades (Act CXC, 2011, Section 54). In connection with assessment, they are also expected to provide learning support, to develop students, and to carry out ongoing evaluation and analysis of educational processes (Act CXC, 2011, Section 64).

As there are no output requirements, it depends entirely on teachers how they fulfil these expectations. Only a few research studies on assessment in Hungary have focused on ways that aim to fulfil roles other than grade giving (Hubai & Lázár, 2018). Thus, the proposed research aims to gain insight into the use of alternative assessment methods in English as a Foreign Language and in English Medium Content Classes in public secondary education in Hungary, guided by the following research questions.

4. Research questions

1. What do teachers mean by assessment and alternative assessment in English as a Foreign Language and in English Medium Content Classes in public secondary education in Hungary?

2. What alternative assessment methods do teachers claim to use in English as a Foreign Language and in English Medium Content Classes in public secondary education in Hungary?

3. What are teachers’ views of using alternative assessment in English as a Foreign Language and in English Medium Content Classes in public secondary education in Hungary?

4. What are teachers’ motivations and purposes for using alternative assessment in English as a Foreign Language and in English Medium Content Classes in public secondary education in Hungary?

5. What are the necessary and sufficient conditions for teachers to use alternative assessment in English as a Foreign Language and in English Medium Content Classes in public secondary education in Hungary?

6. What processes can be identified in developing ways of alternative assessment in English as a Foreign Language and in English Medium Content Classes in public secondary education in Hungary?

7. What are students’ perceptions of alternative assessment in English as a Foreign Language and in English Medium Content Classes in public secondary education in Hungary?

5. Methods

The research consists of two main phases. In order to answer the first five research questions, the focus of the first phase is an interview study with Hungarian public secondary school teachers already using alternative assessment methods. Following up on emerging issues, further interviews will be conducted with other stakeholders, and class observations and document analysis will complement the interviews. In the second phase, to answer research questions six and seven, an action research project built on the results of the first phase will be conducted. As the purpose of the research is to gain insight into the use of alternative assessment methods, all the research questions are exploratory in nature, which calls for qualitative data collection. The choice of participants and instruments is detailed below, followed by procedures, quality control, and data analysis.

5.1 Participants

The participants of the interviews will be chosen through purposive sampling in order to interview teachers already using alternative assessment methods. For example, a Facebook group for teachers interested in the use of gamification in education was launched in 2014, and by the beginning of 2019 it had more than 2,800 members, many of whom use alternative assessment methods. I also plan to advertise the research in other online groups of innovative teachers. Using snowball sampling, I will ask the teachers from whom they heard about the methods and whether they know any other teachers using alternative assessment methods. I will also search the internet for schools advertising their assessment methods. As the research questions specify, I am interested in methods used in public secondary school settings, so I will not investigate assessment in alternative schools. I intend to conduct about 15 interviews, depending on the types and number of assessment methods emerging, until I reach saturation (Dörnyei, 2007).

Following up on any emerging issues, I will conduct interviews with other stakeholders such as decision makers and possibly carry out class observations in groups where the interview participants practice their alternative assessment methods.

I will conduct the action research project with a group of 16 students in the 11th grade (in the 2019/2020 school year) studying in a public bilingual vocational secondary school in Budapest, Hungary, where they study in English medium content classes. They have been chosen because they are participants in an ongoing experiment of the Content Pedagogy Research Program of the Hungarian Academy of Sciences, which has the following advantage: as the stakeholders (the school, the students, and their parents) have already agreed to take part in research conducted by me (including questionnaires, class observations, and photo and video recordings of lessons), it can be extended to incorporate aspects of alternative assessment as well. The supported Content Pedagogy Research Program in question concerns the methodology of mathematics teaching. It has no components related to assessment or English medium content teaching; however, the output requirements (a given number of grades and exams) must be met; therefore, it is an ideal context in which to carry out the action research project on developing ways of alternative assessment.

5.2 Research instruments

As the main aim of the research is to gain insight into the use of alternative assessment methods in English as a Foreign Language and in English Medium Content Classes in public secondary education in Hungary, this inquiry is exploratory in nature and thus follows the qualitative research paradigm using verbal data. In order to give space to emerging themes, semi-structured interviews will be conducted in the first phase of the study. The instrument has already been piloted: it was first based on a list of preliminary questions and was later developed into an interview guide. The first version of the interview guide was subjected to expert opinion, after which the questions were grouped, reordered, and in some cases rephrased. Four interviews were transcribed, and following expert opinion, smaller modifications were made to the interview guide after each one. The final interview guide contained an introduction, questions on background data, and the main questions with possible sub-questions in brackets (see Appendix A). Some of the main questions were the following: “What does assessment mean to you? What would you call traditional assessment? Compared to this, what would you call alternative assessment? What ways of assessment do you use? Can you describe how it happens? Why did you introduce this form of assessment? What was the reaction of your students when you introduced it? How long have you been using it? What kind of advantages and disadvantages of it do you see? How effective do you think this way of assessment is? Has it brought about any changes in students’ behaviour?” (Appendix A)

For method triangulation (Dörnyei, 2007), following up on the emerging themes of the interviews, I will develop interview guides for other stakeholders and carry out class observations.

To answer research question five – What are the necessary and sufficient conditions for teachers to use alternative assessment in English as a Foreign Language and in English Medium Content Classes in public secondary education in Hungary? – document analysis will be carried out in order to map the Hungarian public secondary school context from the point of view of assessment.

During the second phase of the research, the action research project will be complemented by field notes, self-reflections, and, for investigator triangulation (Maxwell, 1992), observers’ reports (based on experts’ class observations of my lessons) and a student questionnaire. My MA thesis, entitled Possible uses of the assessment system of gamification in ELT (2014), contains a quasi-experiment introducing an alternative assessment method based on gamification. Since then I have been experimenting with alternative assessment methods and collecting data from field notes and student questionnaires. As I had not yet been exposed to research methodology training, my instruments lacked quality control, so I will also compare the instruments and analysis of the present research with those from before my doctoral studies.

5.3 Procedures and quality control

The four pilot interviews took place between April and June 2018. The background data of the participants can be found in Appendix B. All the participants applied through online social media groups of innovative teachers. The interviews were carried out in Hungarian, as it is the mother tongue of every participant. After obtaining the participants’ consent, the interviews were recorded, transcribed, and analyzed with the constant comparative method.

There can be several threats to the validity of qualitative research; to discuss them, I will use the collection of threats described by Dörnyei (2007, pp. 46–52). Insipid data is one such threat, which I tried to counter in the following way. In the pilot interviews, the four participants listed 119 ways of alternative assessment that they claim to use (between 25 and 35 ways each). The emerging themes also contained quality data answering the research questions. The research questions linked with the categories that emerged from the pilot interviews can be found in Appendix C. I intend to carry out interviews until I reach saturation (Dörnyei, 2007).

In connection with the action research, answering research question six – What processes can be identified in developing ways of alternative assessment in English as a Foreign Language and in English Medium Content Classes in public secondary education in Hungary? – I intend to identify processes that are transferable to similar contexts. My decision to interview teachers in public secondary schools is also motivated by the criterion of transferability.

I plan to analyze how the different methods could be used by other teachers in other public secondary schools in Hungary.

I should also mention potential researcher bias: as I have high expectations of alternative assessment methods, I should be careful when considering the issue of transferability. In order to do so, I will examine “outliers, extreme or negative cases and alternative explanations” (Dörnyei, 2007, p. 51). I also intend to consult expert opinion and, in the action research phase, ask for and analyze observer reports.

Anecdotalism is another threat. While piloting the interview guide, based on the transcribed interviews and expert opinion, I gained practice in using “probe questions” (Dörnyei, 2007, p. 211) to guide the interview towards quality data. I also intend to use triangulation by involving further instruments such as interviews with other stakeholders and class observations.

In addition, to ensure researcher integrity, I intend to leave a detailed audit trail with contextualization and thick description of the data. To ensure the interpretive validity of the interview data, I will use validation interviews to member check my interpretations with the interviewees (Dörnyei, 2007).

According to Maxwell (1992), descriptive validity is the main aspect of validity, as all the other validity categories depend on it. To ensure descriptive validity, investigator triangulation is recommended. Expert opinion proved to be immensely helpful when piloting the interview guide, so I would like to continue consulting it. In connection with the analysis of the data, I intend to involve a second coder in order to compare our coding results. Furthermore, I would like to include observers’ reports while conducting the action research project. In order to answer research question seven – What are students’ perceptions of alternative assessment in English as a Foreign Language and in English Medium Content Classes in public secondary education in Hungary? – I will also include questionnaires from which students’ perceptions of alternative assessment can be seen.

5.4 Methods of data analysis

Piloting the interviews, I recorded, transcribed, and analyzed the data using grounded theory (Dörnyei, 2007). Initial coding happened in parallel with transcribing the interviews to find emerging themes and to see whether the questions elicited sufficiently rich data. The constant comparative method was used to compare data with codes, and codes with codes, so that the initial codes could be sorted into more elaborate ones (Lapan, Quartaroli, & Riemer, 2012). The data were copied into the ATLAS.ti program for focused coding. Categories that summarized groups of open codes were created. These categories, linked with the initial research questions, can be found in Appendix C.

Due to the iterative nature of qualitative research, there will be constant movement between data collection, analysis, and interpretation. I mentioned saturation in connection with sampling, and it also applies to the process of analysis. Once I have reached saturation in analyzing the data, I will plan the action research project based on the results of the interview study.

During the action research project, self-reflections, field notes, and observers’ reports will be subject to content analysis using grounded theory (Dörnyei, 2007). Action research is even more iterative in nature; thus, constant movement between data collection, analysis, and interpretation will characterize the research. Based on Kemmis and McTaggart’s (1982) model of action research, I intend to follow the steps of focusing, planning, acting, observing, reflecting, revising, and refocusing, while maintaining thick description of the data.

As mentioned above, I have already been experimenting with alternative assessment methods, collecting and analyzing data from student questionnaires without in-depth knowledge of research methodology. For this reason, I would like to revise and validate my earlier questionnaire. Depending on the emerging themes, the questionnaire might contain open-ended questions gathering qualitative data to be analyzed with the constant comparative method, and, if hypotheses are generated, it might also contain quantifiable items to test those hypotheses through statistical data analysis.

5.5 Summary table of research questions and methods

Research question 1: What do teachers mean by assessment and alternative assessment in English as a Foreign Language and in English Medium Content Classes in public secondary education in Hungary?
Research question 2: What alternative assessment methods do teachers claim to use in English as a Foreign Language and in English Medium Content Classes in public secondary education in Hungary?
Research question 3: What are teachers’ views of using alternative assessment in English as a Foreign Language and in English Medium Content Classes in public secondary education in Hungary?
Research question 4: What are teachers’ motivations and purposes for using alternative assessment in English as a Foreign Language and in English Medium Content Classes in public secondary education in Hungary?
Research question 5: What are the necessary and sufficient conditions for teachers to use alternative assessment in English as a Foreign Language and in English Medium Content Classes in public secondary education in Hungary?

Methods of data collection (questions 1–5): semi-structured interviews with innovative teachers (about 15) already using alternative assessment methods; follow-up interviews with other stakeholders on any emerging issues; class observations; document analysis.
Methods of data analysis (questions 1–5): grounded theory; content analysis with the constant comparative method; highlighting emerging themes in both sets of interview data, in the class observation notes, field notes, and observers’ reports; coding content in documents.

Research question 6: What processes can be identified in developing ways of alternative assessment in English as a Foreign Language and in English Medium Content Classes in public secondary education in Hungary?

Methods of data collection (question 6): field notes; self-reflections; observers’ reports.
Methods of data analysis (question 6): content analysis with the constant comparative method.

Research question 7: What are students’ perceptions of alternative assessment in English as a Foreign Language and in English Medium Content Classes in public secondary education in Hungary?

Methods of data collection (question 7): field notes; questionnaire for students (16 students in the class of the action research project).
Methods of data analysis (question 7): descriptive analysis of questionnaire results and content analysis with the constant comparative method.

6. Ethical and legal considerations

Permission and consent will be sought from all participants. At the beginning of the interview, the participants will be informed about the aims and the process of the research. The statement of purpose explains that I intend to investigate the different methods used for assessment purposes in public secondary schools in Hungary; since this is an exploratory study, there are no correct or incorrect answers, and I am interested in the participants’ honest opinions. Confidentiality and anonymity are guaranteed, and pseudonyms will be used so that participants are not identifiable. Their “participation is voluntary and the participant can withdraw and refuse to participate at any time with no penalty” (Dörnyei, 2007, p. 62). The data will be kept confidential and used only for research purposes. If the participants have any questions concerning the interview, I will be happy to answer them.

In connection with the action research project, the permission of the school and parents and the consent of the participating students will be extended to the use of alternative assessment. Confidentiality and anonymity will also be guaranteed, and pseudonyms will be used. I will do my best to protect the safety and dignity of all participants.

7. Expected outcomes and limitations

The expected outcomes of the research are the following: a better understanding of teachers’ views of assessment in Hungarian public secondary education; a collection of good practices in connection with alternative assessment; and reflections on concrete applications of alternative assessment from the stakeholders’ point of view. As practitioner action research is characterized by professional self-reflection and a work-related focus, it is intended to improve practice (Lapan, Quartaroli, & Riemer, 2012), in this case assessment practices in public secondary education in Hungary.

The main limitation of the research is that it examines immensely complex issues through small case studies. However, as the main aim is to gain insights into the use of alternative assessment methods in English as a Foreign Language and in English Medium Content Classes in public secondary education in Hungary, this is exploratory research for which small-scale, in-depth studies using a variety of research tools are appropriate. As the purpose is exploratory, the research does not intend to produce generalisable data; however, a deeper understanding of the stakeholders’ views of assessment in public secondary education in Hungary will hopefully be transferable to broader professional contexts.

8. Preliminary results

Piloting the interview guide resulted in the following outcomes. The participants of the semi-structured interviews all defined traditional assessment as grade giving, and alternative assessment as ways of assessment that differ from grade giving and fulfil several roles, such as comparing students to their own previous performance (instead of to a centralized standard) in order to show their personal development. The four participants listed 119 ways of alternative assessment that they use. The research questions linked with the emerged categories can be found in Appendix C.

Preliminary findings show that the main purpose of teachers using alternative assessment methods is to provide constructive feedback that helps students improve, and also to motivate and engage students in learning. Raising students’ interest in their subjects and making the learning process enjoyable also emerged in all the interviews. According to each participant, assessment based on grades is usually stressful for students, which hinders effective learning, so another purpose they all articulated was to reduce stress and to develop students’ different skills. One of the most important of these appeared to be autonomy. Other skills that teachers listed were, for example, ICT use, real-life problem solving, knowledge building, creativity, communication, cooperation, self-knowledge, and self-regulation.

The participants’ main views on assessment are the following. They all believe that students should feel safe in order to be able to learn, and this is closely related to assessment. One way this can be achieved is through the transparency of assessment: students should know the criteria, what is expected of them, how the assessment process will happen, what the consequences will be, and what they should do about them. This should result in a relaxed classroom atmosphere in which students have the freedom to choose and to experiment with different task types and topics (for example, contributing according to their own interests). This leads to a concept emphasized by all of them: facilitating autonomous learning. According to the participants, it is also important that students take part in their own assessment processes. To make that possible, a good relationship with students, honest and positive communication, and trust are necessary. Students should be given the opportunity to take responsibility for their own learning processes. All of the participants agree that students vary greatly, so teachers should individualize their assessment as much as possible to facilitate each student’s improvement. They all ask their students for detailed, anonymous feedback, so that they can improve their work together based on it. They always reflect on the feedback and modify things accordingly, which students highly appreciate.

As the preliminary results show, the teachers’ interviews yield rich data on the basis of which the first five research questions can be answered and the action research plan can be developed.

9. Schedule

2018 April–June: Pilot interviews
2018 September–December: Research plan
2019 January–March: Interview study
2019 April–August: Analysis of interviews, developing the action research plan
2019 September – 2020 June: Action research
2020–2021: Analysis of data and dissertation writing
2021 December: Dissertation submission for in-house defence
2022 June: Submission of the dissertation

References

Act CXC on National Public Education (2011). Budapest: Ministry of Human Capacities. Retrieved from https://www.oktatas.hu/pub_bin/dload/nyelvvizsga_honositas/elismertetes_honositas/english/act_national_education.doc

Act CCIV on National Higher Education (2011). Budapest: Ministry of Human Capacities. Retrieved from https://www.oktatas.hu/pub_bin/dload/nyelvvizsga_honositas/elismertetes_honositas/HU_NHEA_2Sept2016.pdf

Alderson, J. C., & Banerjee, J. (2001). Language testing and assessment. Language Teaching, 34(4), 213–236.

Barbarics, M. (2014). Possible uses of the assessment system of gamification in ELT (Unpublished MA dissertation). Eötvös Loránd University, Budapest.

Black, P., Harrison, C., Lee, C., Marshall, B., & Wiliam, D. (2004). Working inside the black box: Assessment for learning in the classroom. Phi Delta Kappan, 86(1), 9–21.

Boud, D. (2000). Sustainable assessment: rethinking assessment for the learning society. Studies in Continuing Education, 22(2), 151–167.

Brown, H. D., & Abeywickrama, P. (2010). Language assessment: Principles and practices. White Plains, NY: Pearson.

Cohen, A. D. (2001). Second language assessment. In M. Celce-Murcia (Ed.), Teaching English as a second or foreign language. (pp. 515–534). Boston, MA: Heinle & Heinle/Thomson Learning.

Deterding, S., Dixon, D., Khaled, R., & Nacke, L. E. (2011). From game design elements to gamefulness: Defining ‘gamification’. Proceedings from MindTrek 2011. Tampere: ACM Press.

Dörnyei, Z. (2007). Research methods in applied linguistics. Oxford: Oxford University Press.

Earl, L. M. (2006). Rethinking classroom assessment with purpose in mind: assessment for learning, assessment as learning, assessment of learning. Manitoba: Government of Manitoba.

Golnhofer, E. (2003). A pedagógiai értékelés. In I. Falus (Ed.), Didaktika. Budapest: Nemzeti Tankönyvkiadó.

Hamayan, E. V. (1995). Approaches to alternative assessment. Annual Review of Applied Linguistics, 15, 212–226.

Hubai, K., & Lázár, I. (2018, in press). Assessment of learning in the Hungarian education system – with a special focus on language teachers’ views and practices. Working Papers in Language Pedagogy, 12.

Kemmis, S., & McTaggart, R. (Eds.). (1982). The action research planner. Geelong, Victoria, Australia: Deakin University Press.

Kohn, A. (2011). The case against grades. Educational Leadership, 69(3), 28–33.

Kohonen, V. (1997). Authentic assessment as an integration of language learning, teaching, evaluation and the teacher’s professional growth. In A. Huhta, V. Kohonen, L. Kurki-Suonio & S. Luoma (Eds.), Proceedings of LTRC ‘96: Current developments and alternatives in language assessment (pp. 7–22). Jyväskylä: University of Jyväskylä.

Lapan, S. D., Quartaroli, M. T., & Riemer, F. J. (Eds.). (2012). Qualitative research. San Francisco, CA: Jossey-Bass.

Leahy, S., Lyon, C., Thompson, M., & Wiliam, D. (2005). Classroom assessment: minute by minute, day by day. Educational Leadership, 63(3), 18–24.

Maxwell, J. A. (1992). Understanding and validity in qualitative research. Harvard Educational Review, 62(3), 279–300.

Mihai, F. M. (2010). Assessing English language learners in the content areas: A research-into-practice guide for educators. Ann Arbor, MI: University of Michigan Press.

Nicol, D. J., & Macfarlane-Dick, D. (2006). Formative assessment and self-regulated learning: a model and seven principles of good feedback practice. Studies in Higher Education, 31(2), 199–218.

OECD (2013). Synergies for Better Learning: An International Perspective on Evaluation and Assessment. OECD Reviews of Evaluation and Assessment in Education. Paris: OECD Publishing.

Olechowski, R. (2003). Alternatív teljesítményértékelés – az iskola humanizálása [Alternative performance assessment – the humanization of school]. In I. Bábosik & R. Olechowski (Eds.), Tanítás – tanulás – értékelés [Teaching – learning – evaluation] (pp. 215–235). Frankfurt am Main: Peter Lang.

Rea-Dickins, P. (2000). Classroom Assessment. In: T. Hedge (Ed.) Teaching and Learning in the Language Classroom (pp. 375–401). Oxford: Oxford University Press.

Sadler, D. R. (1989). Formative assessment and the design of instructional systems. Instructional Science, 18, 119–144.

Scriven, M. (1966). The methodology of evaluation. Lafayette, IN: Purdue University.

Tsagari, D. (2004). Is there life beyond language testing? Introduction to alternative language assessment. CRILE Working Papers, 58, 1–23.

Vidákovich, T. (1990). Diagnosztikai pedagógiai értékelés [Diagnostic pedagogical evaluation]. Budapest: Akadémiai Kiadó.

Wiliam, D., Lee, C., Harrison, C., & Black, P. (2004). Teachers developing assessment for learning: Impact on student achievement. Assessment in Education: Principles, Policy & Practice, 11(1), 49–65.

Wiliam, D. (2011). What is assessment for learning? Studies in Educational Evaluation, 37, 3–14.

Appendix A

The piloted interview guide – English translation

Dear Colleague,

Thank you very much for participating in this interview and contributing to my PhD research. I am Márta Barbarics. I study at ELTE, Faculty of Education, in the Language Pedagogy Programme. My research is about assessment in secondary schools. I would like to get acquainted with teachers’ views on assessment, so there are no right or wrong answers; I am interested in your personal experience and opinion. The data will be used for research purposes only, and you will remain anonymous. If you are interested, I am happy to share the results with you. If you agree to the interview being recorded, we can start. Participation is voluntary. Thank you very much!

First of all, let me ask you for some background information.

● How old are you?

● Where did you graduate (which university, which programme)?

● Which subject(s) do you teach?

● Which grades (age groups)?

● Where (which school)?

● How long have you been teaching there?

● Did you teach somewhere else before?

● Have you ever lived or worked abroad?

We will be talking about assessment in more detail. People mean different things by assessment.

What does assessment mean to you?

● What would you call traditional assessment?

● Compared to this, what would you call alternative assessment?

● What ways of assessment do you use?

Which of these would you call alternative assessment? (From here, we go through the different ways one by one with the following questions.)

● Can you describe how it happens? (What do you assess? What do you give feedback on?)

● Why did you introduce this form of assessment?

● What would you like to achieve by using it?


● What was the reaction of your students when you introduced it?

● How have you been using it? (How long have you been using it? Have you ever modified anything about it? If yes, what and why?)

● What kind of advantages and disadvantages of it do you see?

● How effective do you think this way of assessment is? (What makes it effective?)

● Has it brought about any changes in students’ behaviour? (attitude, motivation, engagement, results, and so on)

Is there anything we haven’t talked about that you think is connected to this topic and would like to share?

Thank you very much for the interview!

Appendix B

Background data of the pilot interviews

                          1st interview   2nd interview   3rd interview   4th interview
Pseudonym                 Ann             Mary            Luke            Ivy
Gender                    female          female          male            female
Age                       34              37              29              29
Field of subjects taught  Science         Humanities      Languages       Languages
Interview length          74 minutes      76 minutes      77 minutes      51 minutes

Appendix C

The research questions linked with the categories that emerged from the pilot interviews

1. How do teachers define assessment and alternative assessment?
   Emerged category: definitions of assessment (in general, traditional, and alternative) – 63 labels

2. What alternative assessment methods do teachers claim to use in public secondary education in Hungary?
   Emerged category: forms of assessment (exact examples, practices) – 119 labels

3. Why do teachers use alternative assessment methods in public secondary education in Hungary?
   Emerged category: motivations and purposes (why they do something as they do it) – 131 labels

4. What are the views of teachers using alternative methods on assessment?
   Emerged categories: beliefs about assessment – 348 labels; consequences (what results they see due to using alternative assessment methods) – 194 labels; teacher characteristics (personality or behavioural traits that could affect their views on assessment) – 302 labels
