

3.3 Syllabus development

3.3.6 Students' views

these students, I was glad to see a marked agreement between the two scores.

To add to the reliability of this part of the study, however, further investigation is necessary. In discussing preliminary findings of this project, several students suggested that in reporting a score to me, some participants may not have given the true score of their work. In a future project, student research assistants may need to elicit this information. Also, interviewing students could provide insights into the process of students' self-evaluation.

Another aspect of assessment is how levels of performance are compared.

Most university courses appear to apply criterion referencing: in the syllabus, the teacher specifies a grading scheme with percentages representing levels.

This may be a valid approach in lecture courses involving a large number of students. However, in seminar courses norm referencing may be more valid from the point of view of the construct of seminar work. Comparing students' results with each other shows teachers what work the group has actually been able to do. Also, level setting fine-tuned to the group's own results may carry higher face validity.

For these two reasons, I applied norm referencing throughout the five semesters, deciding on required levels of performance for each of the four passing levels by consulting the graph of final scores.
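To make the procedure concrete, the sketch below shows one way such norm-referenced cut-offs could be derived programmatically. It is an illustration only: the text describes consulting a graph of final scores rather than applying a formula, so the quantile split, the function name and the sample scores are all my assumptions.

    from statistics import quantiles

    def passing_cutoffs(final_scores, probs=(0.25, 0.50, 0.75, 0.90)):
        # Derive minimum scores for the four passing levels (grades 2-5)
        # from the distribution of the group's own final scores.
        percentile_points = quantiles(final_scores, n=100)  # percentiles 1-99
        return {f"level_{i + 2}": percentile_points[int(p * 100) - 1]
                for i, p in enumerate(probs)}

    # Illustrative, invented final scores for one seminar group:
    scores = [52, 58, 61, 64, 66, 70, 73, 75, 78, 80, 83, 86, 90, 94]
    print(passing_cutoffs(scores))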

Students' course evaluations have become a regular procedure at the end of terms at JPU. They were introduced in 1995 to provide staff, students and administration with the information students share about each of the courses they completed. Besides this official procedure, several tutors have implemented their own feedback-generating practices so that they may receive valuable insight from students into the effectiveness of their courses. As the results of the official evaluations take a longer time to process and tabulate, and recently have not even been released, tutors who need more immediate feedback have experimented with an unofficial yet simpler technique of eliciting student response to their courses. In this section I report the results of one such evaluation project.

Thirty students participated in the procedure of the Spring 1997 course evaluation. The three sections of the course, ANG 1601, 1602 and 1603, had a total of 36 registered students, of whom two had not participated in the last four to six classes. Of the remaining 34 students, 32 were present in the last classes. Data was collected on May 12 and May 13, 1997, the dates when students were submitting their end-of-term assignments.

My hypothesis was that students would express positive and negative attitudes to the course and that the information I received might be useful in planning the next semester's syllabus for a slightly modified course.

In each of the three sections, students were asked to participate in the evaluation anonymously in writing. The questionnaire consisted of four categories that students were asked to rate numerically. They were told that they had the option of not completing the questionnaire or not submitting it. I administered and collected the questionnaires. Two students chose not to participate.

Students were asked to rate each of the following four evaluation criteria on a scale of 1 to 7, where 1 represented extremely negative, and 7 extremely positive views:

> Fairness of evaluation;

> Assistance from students;

> Assistance from the tutor;

> Usefulness of the course.

I identified these criteria as I regarded them as genuine indicators of students' satisfaction levels. Also, I hypothesized that the composite mean of the fairness, student assistance and tutor assistance figures would closely match the mean of the usefulness criterion.

After I collected the 30 questionnaires, I analyzed the data statistically, calculating mean and standard deviation (STD) figures.
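As an illustration of this step, the sketch below computes a mean and a population STD from a rating tally of the kind reported in the following paragraphs, using the figures for the assistance from students criterion. The data layout and function name are my assumptions, and the text does not state whether the original analysis used the population or the sample formula.

    from math import sqrt

    def mean_and_std(tally):
        # Mean and population standard deviation of a {rating: count} tally.
        n = sum(tally.values())
        mean = sum(rating * count for rating, count in tally.items()) / n
        variance = sum(count * (rating - mean) ** 2
                       for rating, count in tally.items()) / n
        return mean, sqrt(variance)

    # Ratings reported below for the assistance from students criterion:
    assistance_from_students = {3: 3, 4: 9, 5: 10, 6: 3, 7: 5}
    mean, std = mean_and_std(assistance_from_students)
    print(f"mean = {mean:.2f}, STD = {std:.2f}")  # mean = 4.93, as in the text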

Out of the 30 students who returned the questionnaire, 28 responded to the item on how fair they found the evaluation of their work in the course. In the three sections, students seemed to consider my evaluation fair; two gave the Fairness of Evaluation criterion a value of 4, five students gave it a value of 5, six students a value of 6, and twelve students gave it the top value, 7.

Figure 15 presents the distribution of values for the fairness criterion.

Figure 15: Number of students selecting values for the fairness of evaluation query (N = 28)

The second item asked students to rate how much assistance they received from other students in the group. All 30 students who took the questionnaire answered the question. With three students giving this criterion a value of 3, nine students a value of 4, ten students a value of 5, three a value of 6, and five a value of 7, the assistance students reported they received from others appeared to be somewhat lower than I expected. Figure 16 shows the distribution of values for the Assistance from Students criterion.


Figure 16: Number of students selecting values for the assistance from students query (N = 30)

The third category was Assistance from the Tutor. All 30 students who returned the questionnaire responded to this query. One student assessed the tutor's assistance by giving it a 3, two by giving it 5, six by giving it 6, and twenty-one by giving it the top value, 7. Figure 17 demonstrates the distribution of values for the assistance from the tutor criterion.


Figure 17: Number of students selecting values for the assistance from the tutor query (N = 30)

The last course evaluation category in the questionnaire invited students to assess the usefulness of the writing course. Again, all 30 students who returned the questionnaire assigned a value to this category. One student gave it the median value, 4, nine the value of 5, fourteen the value of 6, and six the value of 7. Figure 18 shows the distribution of values for the usefulness criterion.


Figure 18: Number of students selecting values for the usefulness of the course query (N = 30)

To obtain information on how students' evaluations differed from each other, I calculated the STD figure as well. An STD can show how similar or different respondents' opinions are by comparing each respondent's rating with the mean. The lower the STD, the more uniform individual responses are; conversely, the higher this value, the more divergent the opinions. Although it is extremely rare that in any group all members would agree on all issues, I regarded the STD of the four criteria as another essential aspect of the reception of the course.
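For reference, the population form of this statistic is

\[ \mathrm{STD} = \sqrt{\frac{1}{N} \sum_{i=1}^{N} \left( x_i - \bar{x} \right)^2} \]

where \(x_i\) is an individual student's rating, \(\bar{x}\) the mean rating, and \(N\) the number of respondents; the text does not state whether this or the sample form, with \(N - 1\) in the denominator, was used.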

As Figure 19 attests, the most divergent opinions were expressed about the fairness of evaluation (1.79). The other three category STD figures were lower, with the usefulness category STD value being the lowest (0.79), showing that this was the evaluation category that elicited the most uniform responses.


Another way of looking at the results is by calculating the mean figures of the category values. To form an overall image of students' evaluation of these criteria, I conducted this calculation and found the following: the lowest mean was obtained for assistance from students (4.93). While this was the lowest value, it was still in the positive range of the scale. Students ranked the usefulness of the course criterion higher, as the mean figure for that category was 5.83. For the fairness of evaluation and assistance from the tutor categories the mean figures were 6.14 and 6.53, respectively. Figure 20 shows the rating of the four factors.


Figure 20: Mean figures of the evaluation of the four criteria

Finally, to assess the reliability of the results, I undertook a comparison analysis by averaging the means of the values assigned to the fairness of evaluation (F), assistance from students (S) and assistance from the tutor (T) categories, and by comparing that composite with the mean figure for the usefulness of the course category. I hypothesized that the comparison would result in little if any difference between the two values if the results were reliable, but be markedly different if they were not. As Figure 21 reveals, almost no difference was found between the usefulness of the course mean and the composite of the other three factorial means.


Figure 21: Comparison of the mean score for the usefulness criterion and the average of the fairness, assistance from students and assistance from the tutor criteria
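The arithmetic behind this comparison is simple enough to restate. The sketch below uses the category means reported above; the variable names are mine, not part of the original analysis.

    fairness, students, tutor = 6.14, 4.93, 6.53
    usefulness = 5.83

    # Composite mean of the three predictor categories (F/S/T):
    composite = (fairness + students + tutor) / 3
    print(f"composite F/S/T mean = {composite:.2f}")                          # 5.87
    print(f"difference from usefulness = {abs(usefulness - composite):.2f}")  # 0.04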

I obtained valuable information on students' evaluation of the three WRS sections. I hypothesized that students would share their positive and negative opinions in selecting values on the scales for each of the four categories.

Most opinions students expressed about these courses were in the positive range of the scale; the only slightly negative ratings were the value of 3, selected by three students for assistance from students and by one student for assistance from the tutor.