
4.7 Data Analysis

The procedure used to analyse the transcribed interview data and the word-processed observation notes was based on the constant comparative method of data analysis first developed by Glaser and Strauss (1967). This method also features prominently in most subsequent accounts of how to analyse qualitative data. Its basic principle is to first fracture the data into small parts and then progressively reassemble the parts to build an abstract representation of the research phenomenon. The first stage is to break up the initial transcribed data into smaller units and code them according to the ideas they contain.

Sometimes this is done line by line or even word by word, but whole sentences and paragraphs may be coded as well. This stage is known as open coding. At the same time, initial memo writing of the researcher’s thoughts about the emerging concepts is done to give insights into what the codes mean and how they are related. Then, as initial concepts emerge and connections are made with previously coded data, larger categories are built up from the more substantial codes, and their properties in the form of related sub-categories emerge as more and more data is coded. More substantial memos aid the researcher’s thinking at this point.

This second stage is known as focused coding (Glaser, 1978). Saldana (2009) provides a useful graphic to show the hierarchical relationships in the coding process, which is reproduced below (Figure 3).

This method relies on constant comparison between the codes in newly analysed data and the already identified codes. The research proceeds through cyclical rounds of coding and memo writing followed by collection of more data to “fill in” information about important emerging categories, followed by further coding and so on, until the categories are fully understood and no new information can be added (a point known as saturation).

Creswell (1998, p. 57) calls this a “zigzag” process, but it should be remembered that it is a focused endeavour, aimed at increasing the researcher’s understanding by repeated theoretical sampling in the field. As coding and category building take place, memo writing allows thinking to develop. Memos allow the researcher to compare the data and to explore ideas about the codes. Memos can also provide direction for further data gathering. As the categories build, the memos become more advanced and the theoretical thinking more sophisticated. Eventually advanced memos can be integrated to provide the basis for the theoretical model. This process will be described in more detail, with examples of coding and memo writing, in the next chapter.

[Figure 3 depicts codes grouped into categories with related subcategories, which in turn develop into themes/concepts and ultimately theory, moving from the real and particular to the abstract and general.]

Figure 3. Saldana’s streamlined codes-to-theory model for qualitative inquiry (2009, p. 12)

In the present study the cyclical process of data collection, open coding, initial memo writing and then further, more focused data collection and more advanced memo writing in order to refine emergent categories proved to be problematic and could only be achieved in a limited way. This was due to the lack of time available for data processing. It soon became clear that it would not be possible to keep up with the word processing of lesson observations and the transcription of interviews during the academic year, owing to my commitments as a full-time teacher. For the classroom observations this was not a major problem, since I developed codes as I observed and used these to code further observations in the margin as I took notes. This coding system developed naturally as part of the observation process, but with interviews coding is normally done with transcribed data, and there simply was not enough time available to do this, particularly as the interviews became progressively longer. When the transcription of all the recorded interviews was finally completed in 2009, the student interviews alone ran to over 350,000 words and nearly 1,000 pages. Indeed, to borrow Coffey and Atkinson’s (1996) phrase, the experience of “drowning in data” became a very real one during this period of data processing.

The solution to this problem of not being able to keep up with the transcription of interview data while the research was under way was to limit initial coding to listening to the recorded interviews and making notes in order to develop some initial categories and ideas for further data collection. Charmaz (2006) observes that some researchers prefer to code from notes rather than transcribed interviews, but goes on to say that “coding full interview transcriptions gives you ideas and understandings that you otherwise miss” (p. 70). While this is undoubtedly true, in this case it simply was not an option, and working with notes from recordings provided a stopgap solution that still allowed further rounds of data collection to be theoretically driven.

Achieving saturation also presented a methodological challenge in this study, since by definition first-year students only remain so for one year, and therefore when working with individual students there is a palpable time limit for data collection. However, this problem is by no means an uncommon one in ethnographic research. Strauss and Corbin (1998), commenting on the inevitable constraints that exist in large-scale research, observe that “sometimes the researcher has no choice and must settle for a theoretical scheme that is less developed than desired” (p. 292). That may be the case in the present study, but as Morse (1995) points out, saturation is often claimed by researchers without being proven. Dey (1999) even questions the usefulness of the term altogether, because it is imprecise, and prefers the term theoretical sufficiency (p. 257) as a better reflection of how grounded theory is actually done. While I would not want to claim that saturation has been achieved in this research, even if it could be proved meaningfully, I would claim that a sufficient amount of relevant and rich data has been gathered and analysed with which to present a substantial theory.

Since transcription of the recorded interview data could not be completed until well after data gathering was finished, the first in-depth analysis of the data as a whole had to wait until then as well. This was done without the use of any qualitative software, as it was felt that this allowed more sensitivity in the coding process, and familiarity with the data had already been achieved through the transcription process. For most of the data, coding was done on computer using the copy and paste function and opening new files for new codes. In the case of the subject teacher interviews, the paper and paste method suggested by Maykut and Morehouse (1994) was used for the first pass through the data. Interviews were printed and cut up into codes, which were then organised on large sheets of paper on the floor, so that emerging categories could be grouped and labelled. While this method provides a pleasing hands-on feel to the coding process, the disadvantage is that it cannot be stored electronically and modified later. Before beginning the writing of the dissertation, another sweep was made through the entire data set to provide a check on interpretation and to allow new insights.
