We present a novel system for the automatic evaluation of speech and voice disorders. The system can be accessed platform-independently via the internet. The patient reads a text or names pictures. His or her speech is then analyzed by automatic speech recognition and prosodic analysis. For patients who had their larynx removed due to cancer and for children with cleft lip and palate, we show that we can achieve significant correlations between the automatic analysis and the judgment of human experts in a leave-one-out experiment (p < 0.001). A correlation of .90 for the evaluation of the laryngectomees and .87 for the evaluation of the children’s data was obtained. This is comparable to human inter-rater correlations.
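The evaluation protocol can be sketched roughly as follows: a Pearson correlation between automatic scores and expert ratings, obtained in leave-one-out fashion. The plain-Python Pearson implementation and the `train_and_score` callback are illustrative assumptions, not the system's actual code.

```python
import math

def pearson_r(xs, ys):
    # Pearson correlation between two equal-length score lists
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

def leave_one_out_scores(patients, train_and_score):
    # For each patient, train on all the others and score the held-out one.
    preds = []
    for i, held_out in enumerate(patients):
        train = patients[:i] + patients[i + 1:]
        preds.append(train_and_score(train, held_out))
    return preds
```

The list of leave-one-out predictions would then be correlated against the expert ratings with `pearson_r`.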
between the warmest pixel and the coolest pixel in each ROI. Consequently, even small changes in the borders of the ROI, or dirt particles that are temporarily inside the ROI, can lead to an altered ‘Max-Min’ value. What is actually surprising is that the median VCs of ‘Max-Min’ in automatic evaluation are larger than those of manual evaluation. It is unknown whether variations of the coolest or the warmest pixel are responsible for the large VCs of ‘Max-Min’, but since automatic evaluation excludes hot spots more than manual evaluation, less variation concerning the warmest pixels is expected. It is thus conceivable that the variations derive from cool pixels belonging to the environment, falsely selected into the ROI, and that automatic evaluation selects the outer borders less precisely than manual evaluation. However, the fact that the VCs of ‘Max-Min’ exceed those of the other parameters by far indicates insufficient precision of this evaluation parameter in both methods. The box-plot graphs of Figure 8 give a first visual impression of the distribution of the VCs: the box-plots of the VCs of ‘Min’ appear noticeably larger than those of the other parameters. METZNER et al. (2014) suggest that minimum temperature is prone to falsification since, as mentioned above, it depicts the coolest pixel inside the ROI, which can be a dirt particle or belong to the structures neighboring the ROI. Comparing automatic and manual evaluation, the VCs of ‘Min’ are distinctly larger in automatic evaluation. This is probably due to the automatic detection of the ROIs: even if the outer borders are detected correctly, it is possible that parts of the adhesive patch, neighboring structures of the body, or the surroundings are included. One pixel is enough to alter the outcome.
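A minimal sketch of how ‘Max-Min’ and a variation coefficient (VC) could be computed, assuming the ROI is given as a 2-D grid of pixel temperatures; the function names are hypothetical. It also makes concrete why a single stray pixel (e.g. a dirt particle) is enough to alter the outcome.

```python
import math

def max_min(roi):
    # 'Max-Min': spread between the warmest and the coolest pixel in the ROI
    pixels = [t for row in roi for t in row]
    return max(pixels) - min(pixels)

def variation_coefficient(values):
    # VC in percent: sample standard deviation relative to the mean,
    # computed over repeated measurements of the same parameter
    n = len(values)
    mean = sum(values) / n
    sd = math.sqrt(sum((v - mean) ** 2 for v in values) / (n - 1))
    return 100.0 * sd / mean
```

Replacing one 30.0 °C pixel with a 20.0 °C dirt particle changes ‘Max-Min’ for the whole ROI from 3.0 to 13.0 in this toy example, which is exactly the sensitivity discussed above.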
Note that unspecific control goals have little effect on affective priming: Priming effects occur even when participants are asked to ignore the prime or not to be influenced by it (Klauer & Musch, 2003). Why then were participants able to use prime information strategically in our studies? According to Gollwitzer (1993, 1999), goal intentions are more easily attained when furnished with implementation intentions. Implementation intentions specify the when, where, and how of goal striving as a kind of if-then rule, thereby linking anticipated critical situations to goal-directed responses. Instructions like “respond especially fast and accurately given a positive (negative) word following an Arab (liked celebrity)” might serve as such if-then rules. Following Gollwitzer, implementation intentions are effective because they operate in a quasi-automatic fashion: They are elicited spontaneously and fast if the situation specified in the “if”-part is encountered. The implementation intentions induced through our instructions may thus be successful because they trigger the actions that are required to counteract normal priming effects in concrete situations spontaneously and sufficiently fast to interact with speeded responding (for similar effects in go/no-go, task switching, and Simon tasks, see Brandtstaetter, Lengfelder, & Gollwitzer, 2001; Cohen, Bayer, Jaudas, & Gollwitzer, 2008).
While the semantics expressed by modifier-head relations remains an intriguing research area and has also been used to learn taxonomies from free text (see for example [Gillam, 2004, Gillam et al., 2005]), no attempt will be made at analyzing the modifier-head relations of biomedical terms. The reason is that these NP-internal relations are largely expressed by implicit means, and the research object of this thesis is explicit knowledge patterns instantiating semantic relations between domain-specific concepts. Finally, there are three compelling reasons for zeroing in on the biomedical domain. Firstly, biomedicine is a huge domain which has an impact on the lives of practically all people on the planet. Secondly, because of increasingly swift drug development cycles, the biomedical domain is in dire need of tools which can assist in keeping ontological resources updated. The two relation types “induces” and “may_prevent” are particularly interesting because all drugs have to be tested for potential side effects, and copycat products are constantly introducing new side effects which have to be monitored. Thirdly, the Unified Medical Language System (UMLS) knowledge sources are not only among the most comprehensive ontological resources, they are also freely available, making them ideal both as a source of relation instances for KP discovery and as a baseline for an automatic evaluation of system performance.
Paraphrasing of reference translations has been shown to improve the correlation with human judgements in automatic evaluation of machine translation (MT) outputs. In this work, we present a new dataset for evaluating English-Czech translation based on automatic paraphrases. We compare this dataset with an existing set of manually created paraphrases and find that even automatic paraphrases can improve MT evaluation. We also propose and evaluate several criteria for selecting suitable reference translations from a larger set. Keywords: machine translation, automatic evaluation, paraphrasing
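The effect of paraphrased references can be sketched by scoring a hypothesis against the best-matching reference among the original and its paraphrases. The unigram-F1 overlap below is a toy stand-in for the actual MT metrics used, and all names are illustrative.

```python
from collections import Counter

def unigram_f1(hyp, ref):
    # Harmonic mean of unigram precision and recall (clipped counts)
    h, r = Counter(hyp.split()), Counter(ref.split())
    overlap = sum((h & r).values())
    if overlap == 0:
        return 0.0
    p = overlap / sum(h.values())
    rec = overlap / sum(r.values())
    return 2 * p * rec / (p + rec)

def score_with_paraphrases(hyp, references):
    # Take the best-matching reference among the original and its paraphrases
    return max(unigram_f1(hyp, ref) for ref in references)
```

A correct translation that merely words things differently from the single reference scores low; adding a paraphrase that matches its wording lets the metric reward it, which is the intuition behind paraphrase-based evaluation.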
The question now is what determines a suitable translation objective. Human interpreters usually have someone who gives them the job and informs them in advance about the particular interpreting situation, namely the domain of the dialogue and the social status of the dialogue partners. From that and from their general experience they can derive a translation objective. For automatic dialogue interpreting the translation objective results from the type of the dialogue and the domain. Therefore we propose the following translation objective for appointment-scheduling dialogues as they are described above:
Automatic calibration promises better and faster results, especially for less experienced users. It is based on gradients with respect to all uncertain parameters. These gradients can be calculated very efficiently and accurately with the so-called adjoint mode of a simulation program. With the help of algorithmic differentiation (AD), an adjoint version of TELEMAC was developed by STCE, RWTH Aachen over the last years. The gradients are used by line-search optimization algorithms for the automatic calibration of various input parameters.
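The calibration loop itself can be sketched as gradient descent with a backtracking line search. Here `grad` stands in for the adjoint-computed gradient and `misfit` is a placeholder for the simulation-versus-measurement discrepancy; the actual TELEMAC adjoint setup is not reproduced.

```python
def calibrate(misfit, grad, params, step=1.0, tol=1e-8, max_iter=200):
    # Gradient-based calibration with a backtracking line search.
    # 'grad' plays the role of the adjoint-computed gradient of the misfit.
    for _ in range(max_iter):
        g = grad(params)
        if sum(gi * gi for gi in g) < tol:
            break  # gradient (nearly) vanished: converged
        t = step
        f0 = misfit(params)
        while True:
            # step against the gradient; halve the step until the misfit drops
            trial = [p - t * gi for p, gi in zip(params, g)]
            if misfit(trial) < f0 or t < 1e-12:
                break
            t *= 0.5
        params = trial
    return params
```

On a quadratic toy misfit this converges in a few iterations; for a real simulator each `misfit` call is one forward run and each `grad` call one adjoint run, which is why the adjoint mode's efficiency matters.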
As remarked after Definition 3.1.1, all finite structures are automatic. It is natural to ask whether, given an automatic presentation of either kind, finiteness of the represented structure is decidable. In general this amounts to deciding whether an (ω-)(word-/tree-)automatic equivalence relation is of finite index. Given an injective presentation, however, the problem is not new: it asks for finiteness of the domain. This is well known to be decidable for regular languages as well as for tree-regular languages. Since both word-automatic and tree-automatic presentations can be effectively converted to injective ones, we have a decision procedure for these two models. Finiteness of ω-regular languages is also easily seen to be decidable, for instance by appealing to Eq. (2.1) on page 17. A decision procedure for non-injective ω-automatic presentations is obtained from Theorem 3.1.9 and Corollary 3.1.11 of the previous section. In the case of automatic presentations over infinite trees a similar result is conjectured; however, at this point we cannot provide a proof.
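For the word-automatic case, the decision procedure bottoms out in testing finiteness of a regular language: a DFA's language is finite iff no useful state (reachable from the start state and co-reachable to an accepting state) lies on a cycle. A sketch, with the DFA given explicitly:

```python
def language_is_finite(states, start, accepting, delta):
    # delta: dict mapping (state, symbol) -> state.
    # The language is infinite iff a useful state lies on a cycle.
    edges = {}
    for (q, _), r in delta.items():
        edges.setdefault(q, set()).add(r)
    # states reachable from the start state
    reach, stack = {start}, [start]
    while stack:
        q = stack.pop()
        for r in edges.get(q, ()):
            if r not in reach:
                reach.add(r); stack.append(r)
    # states co-reachable to an accepting state (backward search)
    rev = {}
    for q, rs in edges.items():
        for r in rs:
            rev.setdefault(r, set()).add(q)
    co, stack = set(accepting), list(accepting)
    while stack:
        q = stack.pop()
        for p in rev.get(q, ()):
            if p not in co:
                co.add(p); stack.append(p)
    useful = reach & co
    # DFS cycle detection restricted to useful states
    WHITE, GRAY, BLACK = 0, 1, 2
    color = {q: WHITE for q in useful}
    def has_cycle(q):
        color[q] = GRAY
        for r in edges.get(q, ()):
            if r in useful:
                if color[r] == GRAY or (color[r] == WHITE and has_cycle(r)):
                    return True
        color[q] = BLACK
        return False
    return not any(color[q] == WHITE and has_cycle(q) for q in useful)
```

Since word-automatic presentations can be effectively made injective, applying this test to the domain automaton decides finiteness of the represented structure.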
To implement the equations of motion in minimal coordinates (2), we need the following ingredients: the mappings of minimal joint states to the body states, R and V, and the Jacobians J_u(s, u) and J_s(s, u). The functions R and V can be implemented easily by going through the open-loop tree from top to bottom and applying the coordinate transformations induced by the joints depending on their states. Instead of computing the Jacobians “by hand”, these will be computed by automatic differentiation in our implementation, which we discuss below.
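One way automatic differentiation delivers such Jacobians is forward mode with dual numbers: seed one input direction at a time and read off one Jacobian column per evaluation. This is a generic sketch, not the implementation discussed below, and supports only the operations needed for the toy example.

```python
class Dual:
    # Minimal forward-mode AD value: primal part plus derivative part
    def __init__(self, val, dot=0.0):
        self.val, self.dot = val, dot
    def __add__(self, o):
        o = o if isinstance(o, Dual) else Dual(o)
        return Dual(self.val + o.val, self.dot + o.dot)
    __radd__ = __add__
    def __mul__(self, o):
        o = o if isinstance(o, Dual) else Dual(o)
        # product rule carried along with the value
        return Dual(self.val * o.val, self.dot * o.val + self.val * o.dot)
    __rmul__ = __mul__

def jacobian(f, x):
    # Column j: seed the j-th input with derivative 1 and re-evaluate f
    cols = []
    for j in range(len(x)):
        seeded = [Dual(v, 1.0 if i == j else 0.0) for i, v in enumerate(x)]
        cols.append([y.dot for y in f(seeded)])
    return [list(row) for row in zip(*cols)]  # transpose columns to rows
```

Evaluating the kinematic mappings on `Dual` inputs instead of plain floats then yields J_u and J_s without any hand-written derivative code.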
Over the past several years, a number of projects have entered the area between natural language processing and multimodal communication, often focusing on a single specific functionality, such as the use of pointing gestures parallel to verbal descriptions for referent identification ([Cohen et al. 89], [Kobsa et al. 86], [Neal & Shapiro 91]). The automatic design of complete multimodal presentations has only recently received significant attention in artificial intelligence research. The most extensive discussion of active research in this field can be found in the proceedings of a series of workshops on intelligent multimedia interfaces (e.g., [Arens et al. 89], [Sullivan & Tyler 91], [Maybury 91]).
The recognition of contradictions can be described in a simplified form as a process of finding at least two declarative sentences, or parts of a declarative sentence, from one or more texts which have the same semantic content (proposition) and refer in the same respect and time to the same situation in the world, as well as include elements that are contradictory or in a contrary relation with each other. Though the task seems to be easy, it poses a challenge to an automatic CD, requiring a variety of natural language processing steps at lexical to discourse levels. The main aim of the present chapter is first to summarize the theoretical elaborations on contradiction from the previous chapters and then incorporate them into a conceptual design (what a system should do) of a CD system (Section 6.1). The conceptual design will serve as a basis for a physical design, which is how the system should be built (Chapter 7). Logical design, which refers to a buildup of a system with respect to a user, will stay beyond the scope of the study. Second, the aim of the present chapter is to introduce the reader to the state of the art of supporting tools for the CD task with a focus on English. The chapter begins with a discussion of NLP tasks at lexical, morphological, and syntax levels (Section 6.2.1). In the focus of the section are the tasks of tokenization and sentence splitting (Section 6.2.1.1), stop word detection and removal (Section 6.2.1.2), part-of-speech tagging (Section 6.2.1.3), stemming and lemmatization (Section 6.2.1.4) as well as parsing and chunking (Section 6.2.1.5). The existing techniques for meaning processing at semantic, pragmatic, and discourse levels are addressed in Section 6.2.2, including semantic role labeling (Section 6.2.2.1) and textual entailment recognition (Section 6.2.2.2).
How and to what degree the sentence meaning construction and interpretation can profit from the properties of a text is discussed in Section 6.2.3.1. Here, the notions of cohesion and coherence (Section 6.2.3.1.1) and state-of-the-art tools for anaphora (or coreference) resolution (Section 6.2.3.1.2) are of particular interest. The NLP tasks relevant to a CD task include negation and modality processing (Section 6.2.3.2), sentiment analysis (Section 6.2.3.3), named entity recognition (Section 6.2.3.4) as well as temporal (or time) processing (Section 6.2.3.5), and measuring semantic textual similarity (Section 6.2.3.6). Section 6.2.4, in turn, outlines the existing approaches to meaning representation, which range from a logical form to the view of meaning as a word embedding. Finally, Section 6.2.5 introduces the prominent computational sources of knowledge, including lexical resources (Section 6.2.5.1) and ontologies (Section 6.2.5.2).
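The lexical-level preprocessing steps in the outline above can be illustrated with naive baseline implementations: regex-based sentence splitting and tokenization and a tiny stop-word list. These are didactic stand-ins; a real CD system would use more robust tools.

```python
import re

def split_sentences(text):
    # Naive splitter: break after ., !, ? when followed by
    # whitespace and a capital letter (fails on abbreviations).
    parts = re.split(r'(?<=[.!?])\s+(?=[A-Z])', text.strip())
    return [p for p in parts if p]

def tokenize(sentence):
    # Word characters as one token class, punctuation as another
    return re.findall(r"\w+|[^\w\s]", sentence)

# Deliberately tiny stop-word list for illustration only
STOP_WORDS = {"the", "a", "an", "of", "and", "or", "to", "in"}

def remove_stop_words(tokens):
    return [t for t in tokens if t.lower() not in STOP_WORDS]
```

Even this naive pipeline shows why the steps are ordered as they are: sentences must be delimited before tokens exist, and stop words can only be filtered once tokens exist.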
distortions and late reverberation with a non-stationary room impulse response. In a series of experiments with different speaker-to-microphone distances we have demonstrated that compensating for additive as well as for convolutive distortions helps to improve the accuracy of an automatic speech recognition system. Furthermore, we have been able to gain in accuracy by jointly estimating additive and convolutive distortions over their individual estimation. In addition, we have argued that the compensation of non-stationary distortions in the feature space is able to compensate for distortions which cannot be treated well with those techniques assuming stationary distortions. This argument is confirmed by demonstrating additional improvement in word accuracy by combining the proposed joint particle filter with feature and model adaptation techniques. Last but not least, we have proposed to use class separability as a measure for channel quality. While no significant performance difference could be observed in contrast to signal-to-noise ratio on channels with different characteristics such as close-talk, table-top and wall-mounted microphones, significant improvements could be demonstrated in those cases where the channel quality was very similar, e.g. between different table-top microphones.
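The class-separability idea can be sketched as a Fisher discriminant ratio over per-class feature values observed on a channel: between-class scatter divided by within-class scatter, with higher values indicating a channel on which the classes are easier to tell apart. This is an illustrative reconstruction for 1-D features, not the exact measure used in the experiments.

```python
def fisher_ratio(classes):
    # classes: list of lists of 1-D feature values, one inner list per class.
    all_vals = [v for c in classes for v in c]
    grand = sum(all_vals) / len(all_vals)
    # between-class scatter: class means vs. grand mean, weighted by size
    between = sum(len(c) * (sum(c) / len(c) - grand) ** 2 for c in classes)
    # within-class scatter: spread of each class around its own mean
    within = sum(sum((v - sum(c) / len(c)) ** 2 for v in c) for c in classes)
    return between / within
```

Two channels with identical signal-to-noise ratio can still differ in this ratio, which is why it can discriminate between otherwise similar table-top microphones.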
First, at the beginning of any systematic evaluation, the goals that the measures under evaluation are meant to achieve must be made explicit – here policymakers are called upon first, as they must state their goals clearly. The question of whether and how these goals can be made measurable is primarily for researchers to answer. Second, causal relationships must be demonstrated: was the goal actually achieved because of the evaluated measure, or are we merely discussing correlations? The impact analysis of the expansion of daycare (Kita) provision on the employment rate
A fundamental requirement of translation is the determination of the objective of translation. Roughly speaking, the translation objective specifies which aspects of a source-language utterance are to be rendered in a target-language utterance. Especially for the task of automatic dialogue interpreting it is crucial to state a translation objective, as word-by-word translation is pragmatically inadequate. Obviously, many phenomena of performance (like repetitions, false starts, and self-repairs) are to be smoothed out in the interpretation. The question is how to decide on those aspects of the source-language text which are considered as relevant and therefore rendered in the target-language text. A promising answer to this can be found in relevance theory (cf. [Sperber & Wilson 86]) as it explicitly states when an assumption is relevant in a context. According to the relevance principle, the relevance of an assumption in a context depends on its effects on this context and the required effort to process it in this context. For determining a translation objective, the relevance principle must be applied to a concrete translation situation.