
Top PDF content analysis:

Computer-Assisted Content Analysis

Minimal Text Size. If text data are to be used in a scientific study to obtain measurements that take both frequency and distribution into account, the text needs to be divided into several segments. When separate measurements are made on each segment, the question of the appropriate sample size quickly arises. This concerns the length of the section of text used in the analysis and can be significant, for example, in deciding whether single paragraphs/utterances or entire chapters/conversations should be used. There have not been any sound scientific studies on these questions. As far as text size has been taken into consideration in published studies at all, the authors rely primarily on observations they were able to make while analyzing speech material. For conventional content analysis, Gottschalk and Gleser (1969) and Gottschalk et al. (1969), for example, determined that their anxiety scales could be applied to texts of at least 70 (English) words. Schöfer (1977) gives 100 words as the minimum value for the German version of these scales. This minimum is founded methodologically on the reliability with which numerous judges were able to classify test sentences. Ruoff (1973) describes texts of 200 word forms as the minimum size that is also applicable in practice. He bases this on his view that phenomena essential to speech are normally distributed; in the corpus of spoken language from southern Germany that he studied, this normal distribution begins to occur at this text length. For computer-assisted text analysis there are no well-founded indications of a minimum size; values between 500 and 1000 word forms are usually suggested without further discussion.
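A minimal sketch of the segmentation step discussed above, assuming naive whitespace tokenization; the 500-word-form threshold is simply one of the values mentioned in the excerpt, not a recommendation of the source:

```python
# Split a text into analysis segments of at least `min_word_forms` tokens.
def segment_text(text: str, min_word_forms: int = 500) -> list[list[str]]:
    tokens = text.split()  # naive word-form tokenization
    segments = [tokens[i:i + min_word_forms]
                for i in range(0, len(tokens), min_word_forms)]
    # Fold a too-short trailing segment into the previous one so that
    # every resulting segment meets the minimum size.
    if len(segments) > 1 and len(segments[-1]) < min_word_forms:
        segments[-2].extend(segments.pop())
    return segments
```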

Qualitative Content Analysis: From Kracauer's Beginnings to Today's Challenges

Let's take a rather typical case where we are working with ten main categories that have an average of eight subcategories, that is, with a total of 80 codes. The probability—when a text is coded by two independent coders—of the same subcategory being randomly assigned to a predefined segment is extremely low (1.25%). Therefore, coefficients oriented towards the compensation of random correspondence are questionable in the usual coding process of qualitative content analysis. A further problem is that the size of the text segments obviously plays a role, because the smaller the coded segment is, the less likely it is that the boundaries of two code assignments coincide. After taking all circumstances into account, KUCKARTZ and RÄDIKER (2019) have come to the conclusion that the use of Kappa and other reliability coefficients of this type should be reconsidered within the logic of qualitative content analysis, as these coefficients are in most cases inappropriate. It is almost as if you wanted to use an alcohol breathalyzer to measure the air pressure in a bicycle tire. Yet, the development of adequate and generally accepted instruments for assessing the quality of coding in the context of qualitative content analysis remains a desideratum for the time being. Here, it is advisable to follow up on the general discussion on quality criteria in qualitative social research; criteria must be found which have an appropriate relationship to qualitative content analysis. [44]
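As a worked illustration (the assumption of uniformly random code assignment is mine; the excerpt only states the 1.25% figure), the following sketch shows how little the chance correction in Cohen's kappa changes when 80 codes are roughly equiprobable:

```python
# Cohen's kappa corrects observed agreement p_o for expected chance agreement p_e.
# With 80 equiprobable codes, p_e = 80 * (1/80)**2 = 1/80 = 0.0125 (the 1.25% above).
def cohens_kappa(p_observed: float, p_expected: float) -> float:
    return (p_observed - p_expected) / (1 - p_expected)

p_expected = 1 / 80
print(cohens_kappa(0.80, p_expected))  # ~0.797, barely below the raw agreement of 0.80
```

With such a small chance term, kappa is nearly indistinguishable from raw percentage agreement, which is the core of the reservation expressed above.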

Qualitative Content Analysis: Conceptualizations and Challenges in Research Practice - Introduction to the FQS Special Issue "Qualitative Content Analysis I"

Qualitative content analysis (QCA) has a long history in the social sciences. BERELSON was the first to define the quantitative version of the method as "a research technique for the objective, systematic and quantitative description of the manifest content of communication" (1952, p.18). In the same year, however, this was met by criticism from the German expatriate Siegfried KRACAUER (1952), who published his article "The Challenge of Qualitative Content Analysis," in which he pointed to the limitations of a purely quantitative content analysis. KRACAUER's main points of criticism were directed at quantitative content analysis being limited to the analysis of manifest content and at its focus on coding frequencies. Instead, he argued for the importance of including latent structures of meaning in the analysis, and he pointed out that the single occurrence of a phenomenon in a given text can also be meaningful. On this basis, he proposed a distinctly qualitative content analysis, and his article can be considered the starting point of the history of the method. Since then, QCA has become a highly popular method that is used widely across the social sciences (MAYRING, 2019). [1]

Robust Video Content Analysis via Transductive Learning Methods

Since the appearance of certain concepts is related to a particular video source or program, the task of detecting these concepts can be considered a transductive learning setting. In this chapter, two methods to adaptively learn the appearance of certain objects or events for a particular video, with the aim of enhancing retrieval quality, are proposed to exploit this setting: 1.) a semi-supervised learning ensemble approach based on our framework described in Chapter 3, and 2.) an approach using transductive SVMs. In the preceding Chapters 4 and 6, it has been shown that an initial object model obtained for a particular video via unsupervised learning can be improved adaptively for this video via our proposed framework. Therefore, the given video content analysis task has been considered a transductive learning setting. The transductive learning ensemble (as proposed in Chapter 3) has been realized in a self-supervised learning manner because only unlabeled data have been used in these tasks. In Chapter 7.3, it is shown that a transductive learning approach can also improve the retrieval performance for detecting high-level concepts, although the performance of the available baseline classifiers (based on supervised learning) is relatively low. For this task, the transductive learning ensemble is realized in a semi-supervised way since it relies on supervised baseline classifiers. To improve results for particular videos, the fact is exploited that there are concepts whose appearance is strongly related to a certain video source (e.g., maps in a newscast). The novel idea is to estimate the relevant features for a given concept in a particular test video v. It is assumed that an initial concept model is available which has been obtained via supervised learning using a set of appropriate training videos. Based on this initial model, shots are ranked separately for each test video v. Then, features are selected with respect to the concept's appearance in this video v, and new classifiers are trained using only the best and worst ranked shots as positive and negative training samples. The feature set is split in order to train additional classifiers with different views and to assemble a robust ensemble of classifiers, which is finally used to re-classify or re-score the shots of this particular test video v. The second proposal (Chapter 7.4) considers the scenario as a transductive setting as well, and transductive SVMs are applied to improve concept detection in particular videos. Experimental results for the MediaMill Challenge [18] part of the TRECVID 2005 video set demonstrate the feasibility of the proposed approaches for certain concepts. The work presented in this chapter has been partially published in [48, 49].
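A simplified sketch of the per-video re-scoring idea described above, assuming scikit-learn-style logistic regression as the ensemble members and omitting the per-video feature selection step; all names and parameter values are illustrative, not the authors' implementation:

```python
# Rank shots of one test video with the scores of an initial supervised concept
# model, use the best/worst ranked shots as pseudo-labeled training samples,
# split the feature set into views, and re-score with the resulting ensemble.
import numpy as np
from sklearn.linear_model import LogisticRegression

def rescore_video(shot_features: np.ndarray,   # (n_shots, n_features) of one test video
                  initial_scores: np.ndarray,  # scores of the supervised baseline model
                  n_pseudo: int = 20,
                  n_views: int = 3) -> np.ndarray:
    order = np.argsort(initial_scores)
    pos_idx, neg_idx = order[-n_pseudo:], order[:n_pseudo]  # best / worst ranked shots
    X = np.vstack([shot_features[pos_idx], shot_features[neg_idx]])
    y = np.array([1] * n_pseudo + [0] * n_pseudo)

    # One classifier per feature view; the averaged scores form the new ranking.
    views = np.array_split(np.arange(shot_features.shape[1]), n_views)
    new_scores = np.zeros(len(shot_features))
    for view in views:
        clf = LogisticRegression(max_iter=1000).fit(X[:, view], y)
        new_scores += clf.predict_proba(shot_features[:, view])[:, 1]
    return new_scores / n_views
```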

Qualitative content analysis: theoretical foundation, basic procedures and software solution

A critical discussion of one's own research results seems crucial for a scientific approach. The classical criteria deriving from test theory (objectivity, reliability and validity) cannot simply be transferred to qualitative approaches (cf. Steinke, 2000). But the introduction of totally different criteria seems problematic as well. A position, influenced by a constructivist theory of science, according to which qualitative and quantitative approaches, each following their own quality criteria, can be combined by triangulation (e.g. Flick, 2007), is not compatible with our intention of a unified scientific process. I think validity in a broader sense is usually less of a problem within qualitative approaches, because they seek to be subject-centered and close to everyday life (naturalistic perspective, field research), especially when the research process remains theory-driven (construct validity). In qualitative research, efforts have to be made to enhance reliability in a broader sense. Within Qualitative Content Analysis, the rule-guided procedures can strengthen this criterion. Objectivity, defined as total independence of the research results from the researcher, is held to be difficult within qualitative approaches. On the other hand, however, qualitative approaches discuss the researcher–subject interaction and thereby strengthen objectivity in a broader sense.

Conducting Qualitative Content Analysis Across Languages and Cultures

1. Introduction: Meaning-Making Across Languages and Cultures Conducting a qualitative content analysis (QCA) in more than one language and cultural context is a challenging endeavor. When researchers want to examine and understand a phenomenon in more than one country, they need to equally consider and be able to work in and with each involved language and (political) culture. Grasping meaning not only in one's own native language, but also in another, involves a number of difficulties. Doing so in a manner that allows for comparison is certainly a challenge, and one that also relates to subjectivity. Since researchers are all part of and thereby largely "caught" in their own language and cultural context, they may easily miss, misjudge or misinterpret meaning in another such context. In addition, some words require more than a simple translation, lest meaning is lost or unduly changed. Furthermore, some meanings must be considered in their (political) cultural contexts and with fine nuances. QCA across languages and cultures thus requires deep knowledge of and familiarity and experience with each, as well as reflection on the process. My aim here is to illustrate how these challenges were met. [1]

Qualitative Content Analysis: Why is it Still a Path Less Taken?

its synonyms such as, for example, non-frequency content analysis, thematic coding or ethnographic content analysis may not figure in the title or abstract of the article. Unless all of these explanations are taken care of, it will be difficult to take up a trend study or even to come to a conclusion about the frequency of the application of QCA either within the respective fields or as a method on its own. [11] Descriptions of QCA as a method in its own right began to appear in the literature only recently, primarily as an outcome of the interaction between researchers of the American and German intellectual traditions since the 1960s (FLICK, 2009; HSIEH & SHANNON, 2005; KUCKARTZ, 2014; MAYRING, 2014; SCHREIER, 2012). As MERTON (1968) pointed out in his interesting essay, the quantitative-manifest versus qualitative-latent orientations reflect the American and European intellectual traditions, respectively. According to him, the qualitative and latent approach to content is close to researchers of a European or, more specifically, German intellectual tradition, whose training places greater stress on the meta perspective of the problem. Thus, individuals of these two traditions, broadly representing the continental and analytical philosophies, are distinct in terms of their understanding of the text as an aspect of reality and in comprehending its meanings. While researchers using an analytical approach assume that reality exists out there, independent of the investigator, who seeks to understand this reality as objectively as possible, those using the continental philosophical approaches see no such distinction. [12]

Principles of content analysis for information retrieval systems: an overview

The computer-linguistic argument assumes that the reduction of content analysis to a purely symbol-oriented, non-language-dependent dimension is responsible for the majority of research problems. The language-dependent rule systems researched by linguists are therefore integrated. They replace the linguistic skills which the user must, in traditional free-text searching, transfer to the symbol-oriented auxiliary operations (truncation, context operators), which are by no means suitable for this purpose. The quality of the exact pattern-match method of traditional IR systems will be improved by: a) reducing the word forms in the text to their basic forms, or expanding the word forms of the user query to all related basic forms. The recognition of abbreviations can be classified here as a special case.
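A small sketch of the basic-form matching described in a), using a tiny, purely illustrative lemma dictionary in place of a full morphological lexicon:

```python
# Match query terms against text word forms via their basic forms (lemmas)
# instead of exact pattern matching; the dictionary below is only a toy example.
LEMMAS = {"analyses": "analysis", "analysed": "analyse",
          "codes": "code", "coded": "code", "coding": "code"}

def basic_form(word_form: str) -> str:
    return LEMMAS.get(word_form.lower(), word_form.lower())

def matches(query_term: str, text_tokens: list[str]) -> bool:
    # Exact pattern matching would miss "codes" for the query "coding";
    # comparing basic forms recovers the match.
    return any(basic_form(tok) == basic_form(query_term) for tok in text_tokens)

print(matches("coding", ["the", "codes", "were", "assigned"]))  # True
```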

Sustainable grocery retailing: Myth or reality? – A content analysis

Abstract: Sustainability reports are a crucial instrument not only to inform outside stakeholders about a company's sustainability performance but also to manage impressions. However, they are often prone to greenwashing, and the reporting of negative topics can jeopardize corporate legitimacy. Therefore, this paper aims to analyze reporting quality and how grocery retailing companies deal with this challenge of reporting the true picture. The empirical material is taken from the latest sustainability reports and information available on the Internet for two major German supermarkets, six grocery discount retailers, and two organic supermarkets. The Global Reporting Initiative standards are used to assess and compare the extent of information disclosure. A qualitative content analysis is applied to identify negative disclosure aspects and their legitimation. While the main focus areas (supply chain, employees, environment/climate, and society) are similar for the companies, different levels of reporting quality appeared. Negative information is rarely reported, and "abstraction" and "indicating facts" are the dominant legitimation strategies.

Computer-Assisted Content Analysis of Twitter Data

Conclusion: Content analysis provides a useful and multifaceted methodological framework for Twitter analysis. CAQDAS tools support the structuring of textual data by enabling categorising and coding. Depending on the research objective, it may be appropriate to choose a mixed-methods approach that combines quantitative and qualitative elements of analysis and plays out their respective advantages to the greatest possible extent while minimising their shortcomings. Big data (from several thousand up to millions of tweets) should rather be considered for a quantitative assessment of, for instance, communication patterns within the data set. It can subsequently be reasonable to extract a subsample (= small data) and analyse it qualitatively with the help of CAQDAS software. Basic functions such as word, phrase, or category count analyses as well as features like co-occurrence or KWIC analyses can be useful additions for a systematic interpretation of the data. The process of coding speech acts within tweets as a form of qualitative content analysis can be very demanding, as (re-)contextualising tweets, differentiating similar speech acts (or topics, arguments, etc.), categorising Twitter-specific symbols, and finally, interpreting the co-occurrences can be quite challenging. Table 8.2 summarises the main advantages and limitations of CAQDAS in Twitter analysis.
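As an illustration of one of the basic functions mentioned above, here is a minimal keyword-in-context (KWIC) sketch; the window size and whitespace tokenization are arbitrary choices of mine:

```python
# Produce a keyword-in-context listing for a sequence of tokens.
def kwic(tokens: list[str], keyword: str, window: int = 3) -> list[str]:
    lines = []
    for i, tok in enumerate(tokens):
        if tok.lower() == keyword.lower():
            left = " ".join(tokens[max(0, i - window):i])
            right = " ".join(tokens[i + 1:i + 1 + window])
            lines.append(f"{left} [{tok}] {right}")
    return lines

tweet = "climate action now because climate change is real".split()
print(kwic(tweet, "climate"))
```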

Computer-Assisted Content Analysis of Twitter Data

The objectives of a content analysis of Twitter data can be as diverse as the possible methodological procedures. For example, the metrics of tweets can be analysed, i.e. how many @replies did two particular users exchange within a certain hashtag-based discourse? Which were the most common phrases used by a certain group of users in the data set? It might also be interesting to go into a detailed qualitative analysis of the tweets and find out about, for example, the linguistic characteristics of Twitter language and its speech acts, argumentative schemas, or semantic co-occurrences. One might also want to compare the topics that emerge on Twitter and the types of users who talk about similar or diverging topics, for example, politicians versus citizens. The examination of conversational structures through Social Network Analysis (Magnani, Montesi, Nunziante, & Rossi, 2011)—which can be regarded as one form of content analysis (Herring, 2010)—is just as interesting as doing opinion mining through Sentiment Analysis (Kumar & Sebastian, 2012; Nielsen, 2011), or using a mixed-method approach—for instance, a combined statistical and hermeneutical analysis—in order to assess the diffusion of information on Twitter (Huang, Thornton, & Efthimiadis, 2010; Jansen, Zhang, Sobel, & Chowdury, 2009). Content analysis is an approach to empirical research based on pre-existing material. On Twitter, we deal with high amounts of naturally occurring data, i.e. data that is usually produced without being motivated by any research intent,
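A minimal sketch of the kind of tweet metrics mentioned above (counting @replies between users and the most common two-word phrases); the simplified tweet structure is my assumption, not a Twitter API format:

```python
from collections import Counter

tweets = [
    {"user": "alice", "text": "@bob great point on climate policy"},
    {"user": "bob", "text": "@alice climate policy needs more debate"},
]

reply_pairs = Counter()   # (sender, addressee) -> number of @replies
bigrams = Counter()       # two-word phrases across the data set
for t in tweets:
    words = t["text"].split()
    for w in words:
        if w.startswith("@"):
            reply_pairs[(t["user"], w.lstrip("@"))] += 1
    bigrams.update(zip(words, words[1:]))

print(reply_pairs.most_common(1))
print(bigrams.most_common(3))
```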

Theme in Qualitative Content Analysis and Thematic Analysis

Abstract: Qualitative design consists of various approaches towards data collection, which researchers can use to help with the provision of both cultural and contextual description and interpretation of social phenomena. Qualitative content analysis (QCA) and thematic analysis (TA) as qualitative research approaches are commonly used by researchers across disciplines. There is a gap in the international literature regarding differences between QCA and TA in terms of the concept of a theme and how it is developed. Therefore, in this discussion paper we address this gap in knowledge and present differences and similarities between these qualitative research approaches in terms of the theme as the final product of data analysis. We drew on current multidisciplinary literature to support our perspectives and to develop internationally informed analytical notions of the theme in QCA and TA. We anticipate that improving knowledge and understanding of theme development in QCA and TA will support other researchers in selecting the most appropriate qualitative approach to answer their study question, provide high-quality and trustworthy findings, and remain faithful to the analytical requirements of QCA and TA.

Data quality in content analysis: the case of the comparative manifestos project

Mochmann 1980, Neuendorf 2002, Roberts 1997, Weber 1990). In the following discussion of the merits of human- and computer-based content analysis we mainly rely on Krippendorff's (2006) recent and comprehensive assessment of the types and applications of content analysis. Krippendorff provides the following definition: 'Content analysis is a research technique for making replicable and valid inferences from texts (or other meaningful matter) to the contexts of their use' (2006: 18). This definition of content analysis first and foremost demands drawing replicable inferences because texts can be interpreted quite differently. In ordinary life, the number of inferences to be drawn from a text can be tantamount to the number of readers. By using content analysis as a scientific instrument, each and every properly trained coder should come to the same conclusion about unitising, i.e., choosing the same coding unit, and scoring, i.e., selecting the same concept for a unit.
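A small sketch of what this demand for replicability amounts to in practice, checking whether two coders chose the same units and assigned the same concept to each; the data layout is an illustrative assumption:

```python
# Units are represented as character spans mapped to the assigned concept.
coder_a = {(0, 12): "economy", (13, 40): "welfare"}
coder_b = {(0, 12): "economy", (13, 40): "environment"}

shared_units = coder_a.keys() & coder_b.keys()  # agreement on unitising
scoring_agreement = sum(coder_a[u] == coder_b[u] for u in shared_units) / len(shared_units)
print(f"shared units: {len(shared_units)}, scoring agreement: {scoring_agreement:.2f}")
```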

Qualitative Content Analysis: A Novice's Perspective

definition: "Qualitative content analysis defines itself within this framework as an approach of empirical, methodological controlled analysis of texts within their context of communication, following content analytical rules and step by step models, without rash quantification" (2000, §5). In another source, the approach is described as "a research method for the subjective interpretation of the content of text data through the systematic classification process of coding and identifying themes or patterns" (HSIEH & SHANNON, 2005, p.1278). A similar definition is provided by SCHREIER: "QCA is a method for systematically describing the meaning of qualitative material. It is done by classifying material as instances of the categories of a coding frame" (2012, p.1). With these definitions, it seems that analytically QCA is positioned between the qualitative and the quantitative, or between positivistic and constructionist views of qualitative research, thus

Content Analysis of Interpersonal Coping-Behavior in Japanese and German Primary School Textbooks

Thirty-six Japanese and twenty-one German elementary school readers used for 6–9 year-old children (1st–3rd graders) were investigated in this study. The Japanese readers were published by only six publishing companies under the strict control of the Ministry of Education. On the other hand, German readers were not so strictly controlled by the central government and were therefore published by many companies. Thus, German texts were selected by checking the recommendation lists published by sixteen different state governments in Germany. The textbooks selected for this analysis were recommended in more than ten German states. Stories that had sentences illustrating stimuli toward the main characters and the main characters' reactions were selected. These totaled 118 stories in the Japanese texts and 259 stories in the German texts.

Towards Extending Content Analysis (TECA) - Schlußbericht zu Arbeitspaket 2, coderbasierte Textanalyse

Thus, on the one hand, care had to be taken to achieve the highest possible congruence of the codings; on the other hand, the category scheme was to be kept as "open" as possible, [...]


Towards Extending Content Analysis (TECA) - Schlußbericht zu den Arbeitspaketen 4 und 6, Umsetzung in SGML-Format

If each category formed an attribute value of its own, an additional formal check could be carried out via the ID-IDREF mechanism in SGML, which would ensure that [...]


Mining Social Media: Methods and Approaches for Content Analysis

Our first scenario relates to a popular online content sharing service, Twitter, a microblogging service having more than 500 million users [30, 115]. Twitter allows users to share information with each other via short messages termed Tweets, producing over 400 million messages a day [115]. In Twitter, users can follow other users in order to receive their tweets. If a user considers a tweet interesting, she may forward it to her own followers. This practice is called retweeting, and users usually retweet content of general interest or content relevant to the audience that follows their tweets [7]. The purpose of retweeting is often to disseminate information to one's followers. The conciseness of a tweet has been cited as a major reason for the success of Twitter; however, at the same time it leads to information overload. The problem of information overload is evident from the high volume of data generated every day from the wide range of uses of Twitter by its large user base. A quality assessment of the Twitter content is necessary, as the microblog documents range from spam through trivia and personal chatter to

Towards Extending Content Analysis (TECA) - Schlußbericht zu Arbeitspaket 1, Verschriftung

The texts at hand exhibit some peculiarities that must be taken into account when a text analysis, especially a computer-assisted one, is being considered. Newspaper articles, book [...]


Enabling social media content quality assurance using social network analysis

This paper introduced the concept of quality assurance for user-generated content. We focused on the role of social network analysis in studying the reputation of users and the quality of content. It can be concluded that SNA algorithms provide an integrated view of the relation among users and their content. Their metrics can be used as key performance indicators (KPIs) and can be combined with other content mining methodologies to yield, for example, a more efficient report for ranking newly uploaded problematic or non-compliant community content objects (media, images, text). The usage of the enhanced users' metadata can also enable analysis in fields like online video communities, where at present immature video content analysis [IAI08] still fails to produce satisfactory results. A complementary part of the concepts introduced in this paper relates to other measures, e.g. text analysis to assess hot topics. SNA can also provide alternative KPIs to those traditionally used to measure and monitor communities, such as counts of page impressions or unique visitors [WIK09]. SNA KPIs introduce sociological concepts to community monitoring and measurement, hence creating a more user-centric approach. SNA could therefore be used to measure social activity and qualify reactions to published content without the necessity of coupling traditional volume metrics with sociological analysis done using online questionnaires [AGO08].
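A brief sketch of the SNA-metrics-as-KPIs idea; the choice of networkx and PageRank is mine for illustration, as the paper only argues that SNA metrics can serve as KPIs in general:

```python
import networkx as nx

# Directed graph: an edge u -> v means user u commented on or rated content by v.
G = nx.DiGraph()
G.add_edges_from([("anna", "ben"), ("carl", "ben"), ("ben", "dora"), ("anna", "dora")])

reputation = nx.pagerank(G)         # reputation KPI per user
activity = nx.degree_centrality(G)  # simple activity/connectedness KPI
print(sorted(reputation.items(), key=lambda kv: -kv[1]))
```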
