For this purpose, we first provide a very simple formal model of decision processes incorporating the postulated effects of cognitive load, namely that cognitive load tilts the balance toward more intuitive processes and away from deliberative ones. This simple model immediately delivers a useful prediction which can help improve future experiments relying on cognitive load. The reason is that, if a given cognitive load experiment finds no effect, it is not possible to conclude whether this is truly because a shift to intuition does not affect economic behavior, or rather because the particular cognitive load manipulation implemented has failed to tax cognitive resources to a sufficient extent. Our first prediction provides a manipulation check which allows one to test whether the manipulation has been successful, independently of whether there are any effects on behavior. Specifically, the model predicts that decisions under cognitive load must be faster than in its absence. The intuition for this result is straightforward. One of the fundamental characteristics associated with more intuitive (or more automatic) processes is that they are generally faster than more deliberative ones (Kahneman, 2003; Strack and Deutsch, 2004; Evans, 2008; Weber and Johnson, 2009; Alós-Ferrer and Strack, 2014). If a manipulation successfully induces a shift toward more intuitive processes, meaning that decisions arise from those processes more often, average response times must become shorter. However, while this effect will typically arise for economic tasks, we do not necessarily expect to observe it in the kind of tasks characteristic of cognitive psychology. The reason is that economics often deals with more complex decisions than psychology. Cognitive load is bound to cause a small increase in response times due to the more mechanical parts of decision making, e.g. those involved in perception, motor implementation, and process selection. For very simple tasks involving very short response times, those mechanical effects might dominate, resulting in longer response times (Gevins et al., 1998; Baddeley et al., 2001; de Fockert et al., 2001). For complex tasks such as the ones of interest to economists, those mechanical effects will typically be negligible and the effects we describe here will dominate.
In addition to the evaluation criteria, the number of correct assignments as well as the response time between the assignments were recorded (not reported in this paper). It was assumed that the performance of the participants (number of correct answers and response time) could improve over the course of the test. This could have affected the cognitive load and thus the subsequent assessment of the sounds. In order to keep performance as constant as possible, each participant completed a training sequence under supervision. This training sequence consisted of the Stroop Test accompanied by the training sound. Given the rather simple task, a training period of two minutes was considered appropriate to reduce possible learning effects to a minimum while also keeping the cognitive load right before the test at a low level.
findings show that cognitive load was accompanied by a clear increase in RR. Of note, 48% of the reviewed studies indicated medium to large effects for the increase from baseline to task. Also, higher levels of task difficulty resulted in an additional increase in RR. While TV appeared to be insensitive to cognitive load, MV, following logically from the increase in RR without changes in TV, showed a consistent increase from baseline to task. Since MV was, however, not sensitive to different levels of task difficulty and predominantly reflects the increase in RR, we conclude that, for the assessment of operator load, MV does not provide incremental information over the more convenient frequency measure. This general increase in ventilation has been explained by a higher metabolic rate during performance, but also by psychological processes such as learned anticipation of metabolic need [93–95]. While human and animal research on limbic and paralimbic influences on breathing is still scarce, available evidence suggests that an increase in cognitive as well as emotional impact is associated with corresponding changes in neural activity not only in the brainstem but also in limbic and paralimbic regions, particularly the amygdala and anterior cingulate cortex [17, 96], the latter being a key prefrontal region that is also involved in executive function [97–99].
Current evaluation methods for text-to-speech (TTS) synthesis rely solely on subjective rating scores. These tests typically account mostly for how natural or intelligible the voice is. With state-of-the-art systems, these measures are approaching ceiling, and therefore alternative measures such as cognitive load may become more meaningful. To our knowledge, there is little or no recent work evaluating the cognitive load of state-of-the-art text-to-speech systems. We use pupillometry as a measure of cognitive load: the pupil has been found to dilate upon increased cognitive effort when carrying out a listening task. Currently we are evaluating speech generated by a Deep Neural Network TTS synthesiser. In our method, we generate stimuli that step incrementally from natural speech to synthesized speech by changing only a single feature at a time. Stimuli are presented to listeners in speech-shaped noise conditions.
Despite being promising, the reported findings should be considered with caution due to the limited number of participants, which did not allow for an in-depth analysis of specific stressors in each category of vision impairment. A larger group study would need to be carried out to confirm and quantify the trends obtained here. Furthermore, the well-established Emotiv EPOC+ EEG headset has certain limitations with respect to the quality of the recorded signal during experiments involving physical activity “in the wild” such as those presented here. Future steps of the present study include refining the predictive model through exploring novel multimodal biosignal features for cognitive load assessment and comparing different classifiers. Such findings will hopefully pave the way to emotionally intelligent mobile technologies that take the concept of navigation one step further, accounting not only for the shortest path but also for the most effortless, least stressful, and safest one.
In the context of this thesis, the present study revealed that the modality effect changes with the matter of control. A higher cognitive load of written compared to spoken text presentation in multimedia learning could not be found in learner-controlled instruction. Thus, the modality effect appears to be restricted to system-controlled instructions (cf. Chapters 2 & 3). Given this conclusion, some substantial shift must have taken place from system- to learner-controlled instruction. Comparing the viewing behavior within written text conditions of system- and learner-controlled instructions revealed such a substantial shift. Apart from the time spent reading the text, learners in self-paced instruction showed a highly stable fixation pattern. They adjusted the pace of presentation to their individual reading speed and engaged in an otherwise systematic viewing behavior. This pattern underscores the self-paced nature of normal reading. In contrast, learners in system-controlled instructions showed a different viewing behavior. Varying the pace of system-controlled instructions revealed that learners used additional time in favor of illustrations over written text and to shift visual attention between text and illustrations more often. This variation of viewing behavior with pace can be explained by a mismatch between pacing and individual needs. None of the different paces in the system-controlled presentation conditions met the needs of all learners. In fast presentation conditions, some poor readers surely had problems keeping up with the pace and thus had little time to inspect illustrations. Some skilled readers in slow presentation conditions, on the other hand, can be expected to have had dispensable time to look (more or less unintentionally) back and forth between the text (which they had already read) and the illustrations (which at least moved).
Thus, when receiving written text, not all learners are doomed to suffer from a fast pace, and not all learners necessarily gain further benefit from a slow pace. Further research is necessary to sharpen the role of individual reading speed and text comprehension abilities in multimedia learning.
During the last years, an increasing number of studies have been conducted that investigate the effects of various instructional designs (for an overview, see Mayer, 2014). Many of these studies aim to foster learning by providing a learning environment which facilitates learning by reducing extraneous cognitive load. In contrast, it sometimes may be more beneficial to make learning harder and more effortful. Making learning harder to reach better learning outcomes might sound unusual. Indeed, there are studies reporting such effects – naturally, not for all difficulties placed in the learning environment, but only for so-called desirable difficulties (Bjork, 1994). Nevertheless, even such desirable difficulties rely not only on diligent learners – a learner's characteristic Abigail Adams (1764) calls for – but also on further learner characteristics which enable the learner not only to deal with but also to profit from these difficulties. However, if the learner needs to fulfil specific criteria to profit from such difficulties, educators need to carefully match learners and specific instructional designs. Given the fact that an educator potentially needs to prepare different presentations of the same learning content, it becomes obvious that instructional manipulations which can be implemented easily may be preferred. One desirable difficulty which is easy to manipulate is to present visual text in a disfluent font (Bjork, 1994). This implementation is easy and time-efficient because it only affects the surface of the text (Bjork, 1994).
Foundations of Cognitive Load Theory
…employed. Empirical methods are used to measure mental effort and performance outcomes. Two different techniques can be distinguished: on the one hand, subjective techniques based on participants' assessments via rating scales, and on the other hand, objective techniques such as psychophysiological tests, in which brain activity, heart rate, or changes in pupil diameter are recorded. Rating-scale methods are based on the assumption that test persons are able to make reliable statements about their cognitive processes and mental effort; they represent the most frequently used approach in the experiments on measuring cognitive load conducted so far. Psychophysiological techniques rest on the assumption that changes in cognitive load are reflected in the responses of human physiological systems. They are particularly well suited to capturing the temporal course of cognitive load. For problem- and performance-outcome measurements, two approaches can be distinguished. The first variant consists of measuring performance on the primary task, i.e. the main problem. In the second variant, also called the dual-task method, the problem solver works on a secondary problem in parallel with the main problem; this secondary task usually involves simple actions that nevertheless require continuous attention. The main and secondary problems therefore compete for the available cognitive capacity. The secondary tasks are often evaluated in terms of reaction times, accuracy achieved, and error rates.
have a work injury. When doing so, we estimate a model including a set of dummies corresponding to the number of non-professional tasks (which varies from 0 to 5, 3 tasks being the reference category). The results obtained with these two alternative specifications are presented in Appendix Table A.3, together with those from our baseline specification (reported in Table 1). The linear fixed effects estimates suggest that handling one more non-professional task is associated with an increase in the probability of occupational injury by 0.4 percentage points (i.e. +6.9% at sample average), significant at the 5% level – see column (5). When turning to the non-parametric fixed effects estimates, we find that performing four or five non-professional tasks every day (as compared to 3 tasks) is positively associated with a higher risk of occupational injuries, with the effect being significant at least at the 5% level – column (6). These findings confirm that cognitive load is associated with a higher risk of occupational injury, whatever specification we use.
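The two specifications described above can be summarised schematically as follows. The notation is ours, since the source does not spell out the equations, and the inclusion of a control vector is our assumption:

```latex
% Baseline linear fixed-effects specification (notation ours)
\text{Injury}_{it} = \beta\, \text{Tasks}_{it} + \gamma' X_{it} + \alpha_i + \varepsilon_{it}
% Non-parametric alternative: one dummy per task count, k = 3 as reference
\text{Injury}_{it} = \sum_{k \neq 3} \delta_k\, \mathbf{1}\{\text{Tasks}_{it} = k\} + \gamma' X_{it} + \alpha_i + \varepsilon_{it}
```

Here $\text{Injury}_{it}$ indicates a work injury for worker $i$ in period $t$, $\text{Tasks}_{it} \in \{0,\dots,5\}$ is the number of non-professional tasks, $\alpha_i$ is a worker fixed effect, and $X_{it}$ stands for any further controls. The reported 0.4 percentage-point effect corresponds to $\hat{\beta} \approx 0.004$.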
…to process new demands, information gained through the sense organs, as well as information from long-term memory, is loaded into working memory. A distinction is made between intrinsic load and extraneous load. Cognitive Load Theory suggests that an overload situation occurs if the sum of the two exceeds the capacity of working memory. The task is then no longer solvable. On the other hand, the task must not be too easy, in order to avoid a situation of underload.
Result 3. Subjects’ lottery choices are 10% faster (p < 0.001) in the presence of cognitive load than in its absence.
One might have expected the reverse effect: that the multi-tasking demands of the “cognitive load” condition led to an increase in the time needed to make a decision in the lottery choice task. Our finding is in line, though, with previous evidence: a decrease was also observed by Whitney et al. (2008, p. 1182). They conjecture “that participants were speeding up their decision-making processes . . . in order to maintain high accuracy on the WM load task.” Notably, in their design, just like in ours, faster choice between the lotteries did not lead to an earlier display of the probe phase of the working-memory task. The question whether faster lottery choices improve performance in the working-memory task is important because one might suspect that the channel through which concurrent cognitive load influences risk attitudes is by generating time pressure during the lottery choice—or at least a feeling thereof. Unfortunately, this hypothesis is difficult to test, since the counterfactual is missing: how well would subjects have performed in the working-memory task, had they taken more time in the lottery choice? To be on the
theoretical assumptions underlying the beneficial effects of increasing germane cognitive load rely on CTML (Mayer, 2001, 2005, 2009) and CLT (Plass et al., 2010; Sweller, 1999; Sweller et al., 2011). For methods aiming at increasing germane cognitive load — such as these mental animation tasks — an increase in generative cognitive processing is assumed, by fostering intense information processing and higher cognitive activity regarding mental model construction. Intrinsic cognitive load is assumed to be constant for methods attempting to increase germane cognitive load because they do not add new elements to the learning content. However, the number of interacting elements may be increased due to the instructions to rotate, map and integrate different forms of representations. Owing to the additional task instructions and comprehension questions, the number of active elements in working memory should be increased. Concerning the former model of CLT (Plass et al., 2010; Sweller et al., 2011), this assumption again highlights the difficulty of differentiating between intrinsic and germane cognitive load. With regard to the influence of individual learner characteristics, the increased cognitive load might be intrinsic rather than germane, depending on the individually available cognitive capacity. In general, though, the increase in cognitive load is assumed to be germane cognitive load in the three-factorial model of CLT. The advantage of the updated model of CLT (Choi et al., 2014), which only considers intrinsic and extraneous cognitive load, is that the assumed increase in cognitive load can clearly be attributed to intrinsic cognitive load. However, an increase in task difficulty must also be considered for the two-factorial model of CLT, depending on the available cognitive capacity as a function of individual learner characteristics.
Given a proper instructional design and considering established multimedia design principles, extraneous cognitive load can be assumed to be constant for both models of CLT. Concerning the measurement of cognitive load, methods of differentiated cognitive load measurement would be necessary, much as for the seductive details effect: not only to gain insight into synergetic effects of instructional methods for fostering generative cognitive processes between the single cognitive load factors, but also to control for possible effects on task difficulty dependent on individual learner characteristics.
Figure 6 shows that, for those at the bottom of the income distribution, the net effect of either shock to income uncertainty (priming, in Panel A, and rainfall shocks, in Panel B) on cognitive performance is negative across all tasks – including those that benefit from tunneling on scarce resources. In Panel A, the grey line crosses zero at USD 64.50/month; over 40% of our sample lives in municipalities with per capita income below that threshold. In Panel B, the grey line crosses it only at USD 92/month, below which over 90% of our sample lies. This implies that, for almost half of the farmers in our sample, tunneling effects do not overturn the cognitive load from priming high income uncertainty, regardless of the real-world distribution of the two types of tasks (i.e. those involving scarce resources and those not); when it comes to uncertainty triggered by rainfall shocks, the same is true for nearly all respondents. For those at the top of the income distribution, the overall cognitive impact of the shock (both priming and rainfall) depends upon the real-life frequency of decisions involving scarce versus non-scarce resources. The wealthier the municipality, and the higher the share of decisions involving (relatively) scarce resources, the lower the likelihood of an overall adverse cognitive impact.
seems to be more sensitive to changes in mental workload compared to the variability in the time domain (De Waard, 1996). The most commonly used indicator in the frequency domain, i.e. the 0.1 Hz component, has reliably been shown to decrease with increased memory load and subjective effort ratings (see for review Kramer, 1991; Jorna, 1992). However, as the valid interpretation of spectral features of HRV requires long measurement intervals (Aasman, J., Mulder, G., & Mulder, L.J.M., 1987), the potential for online use of HRV to detect changes in mental workload is limited. Both measures are subject to several confounding influences. HR, for example, is not only influenced by mental effort but also by respirational patterns as well as physical and emotional strain (L. Mulder, 1992; Nakamura, Yamamoto, & Muraoka, 1993; Brosschot & Thayer, 2003). HRV, on the other hand, is highly affected by even small measurement artifacts in the IBI time series (Jorna, 1992; L. B. Mulder et al., 2004). Those artifacts can arise either from simple measurement errors due to imprecision of the technical equipment or from changes in, e.g., breathing patterns (Veltman & Gaillard, 1996). Hence, the use of HRV in real-time applications is highly debatable (Beda, Jandre, Phillips, Giannella-Neto, & Simpson, 2007) and is therefore not further considered in this thesis. Cardiovascular parameters are mostly measured by attaching electrodes to the body, which is not desirable in applied settings. Recent technological advancements, however, have introduced new methods suitable for non-intrusively measuring heart activity. Wartzek et al. (2011), for example, successfully tested a system that uses sensors integrated into the driver seat. Furthermore, systems that use video-based detection of color changes in the human face have proven highly promising (Poh, McDuff, & Picard, 2010).
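As an illustration of the frequency-domain analysis discussed above, the following sketch estimates spectral power of an inter-beat interval (IBI) series around the 0.1 Hz component. This is a minimal reconstruction of the general technique, not the procedure used in the cited studies; the resampling rate, band edges, and the plain periodogram are our own assumptions.

```python
import numpy as np

def hrv_band_power(beat_times, ibis, fs=4.0, band=(0.07, 0.14)):
    """Estimate HRV spectral power in a band around the 0.1 Hz component.

    beat_times: heartbeat times in seconds (irregularly spaced)
    ibis: inter-beat intervals in seconds, aligned with beat_times
    The IBI series is interpolated onto a regular grid first, since
    spectral estimation assumes evenly spaced samples.
    """
    t_grid = np.arange(beat_times[0], beat_times[-1], 1.0 / fs)
    ibi_grid = np.interp(t_grid, beat_times, ibis)
    ibi_grid = ibi_grid - ibi_grid.mean()            # remove the DC component
    n = len(ibi_grid)
    freqs = np.fft.rfftfreq(n, d=1.0 / fs)
    # plain periodogram; real analyses would use Welch averaging and windowing
    psd = np.abs(np.fft.rfft(ibi_grid)) ** 2 / (n * fs)
    mask = (freqs >= band[0]) & (freqs <= band[1])
    band_power = psd[mask].sum() * (freqs[1] - freqs[0])
    peak_freq = freqs[1:][np.argmax(psd[1:])]        # skip the 0 Hz bin
    return band_power, peak_freq
```

The need for a long, artifact-free recording before `band_power` becomes stable is precisely the limitation for online workload monitoring noted above.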
procurement costs, which are transformed into annuities and combined into overall costs to determine the cost-efficient trade-off.

3.1 Generation of flexible load scenarios
The flexible load scenarios are defined based on forecasts for the future development of the aggregated flexible load capacity as well as average unit sizing per customer. The forecasts are provided by the local DSO Netze BW GmbH [5] as well as public studies [1, 6]. To simulate the actual supply task, the aggregated forecasts have to be distributed among the customers in the grid area. To investigate multiple supply tasks, the concentration of flexible loads in the grid is varied in the form of three distribution scenarios (Table 1). For example, in the scenario ‘Worst’, households in highly utilised low-voltage (LV) grids receive a high probability for the allocation of flexible loads. The individual units (flexible loads and generation) are allocated iteratively until the forecasts for future penetration are met.
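The iterative allocation step can be sketched roughly as follows. All names, the one-unit-per-household rule, and the weighting scheme are our own illustration; the actual procedure used with the Netze BW forecasts is not specified here in this detail.

```python
import random

def allocate_flexible_loads(households, forecast_kw, unit_kw, weights, seed=0):
    """Allocate flexible-load units of size unit_kw to households, one unit
    per weighted random draw, until the aggregated forecast is reached.

    households: list of household identifiers
    forecast_kw: aggregated flexible-load capacity to be distributed
    unit_kw: average unit size per customer
    weights: dict household -> allocation probability weight (e.g. higher
             for households in highly utilised LV grids, as in 'Worst')
    """
    rng = random.Random(seed)
    allocated = {}                       # household -> installed capacity (kW)
    candidates = list(households)
    total = 0.0
    while total + unit_kw <= forecast_kw and candidates:
        # draw a household, weighted according to the distribution scenario
        hh = rng.choices(candidates, weights=[weights[h] for h in candidates])[0]
        allocated[hh] = allocated.get(hh, 0.0) + unit_kw
        candidates.remove(hh)            # at most one unit per household here
        total += unit_kw
    return allocated, total
```

Varying only the `weights` dict then reproduces the three distribution scenarios on an otherwise identical grid model.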
Extraction of relevant intervals: In a first step, the whole year's data is reduced by extracting intervals from the total power consumption which potentially correspond to household appliances. To this end, the load profile needs to exceed both a defined minimal power and a minimal duration after the base load has been subtracted from the household's load profile. In order to take alternating load profiles, especially from dryers, into account, an additional time is specified. This variable defines how long the load profile may remain below the minimal power so that the interval is still extracted as one coherent sequence.
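The extraction step above can be sketched as follows. This is our own minimal implementation; the parameter names and the per-timestep representation are illustrative, not those of the original study.

```python
def extract_intervals(load, base_load, min_power, min_duration, max_gap):
    """Extract candidate appliance intervals from a total power series.

    load: per-timestep total power readings (e.g. one value per minute)
    base_load: constant base load subtracted from every reading
    min_power: threshold the residual profile must exceed
    min_duration: minimal number of consecutive timesteps for an interval
    max_gap: how many timesteps the residual may fall below min_power
             without splitting the interval (handles alternating profiles,
             e.g. a dryer switching its heating element on and off)
    """
    residual = [p - base_load for p in load]
    intervals = []
    start, gap = None, 0
    for i, p in enumerate(residual):
        if p >= min_power:
            if start is None:
                start = i
            gap = 0
        elif start is not None:
            gap += 1
            if gap > max_gap:                  # gap too long: close interval
                end = i - gap                  # last index above threshold
                if end - start + 1 >= min_duration:
                    intervals.append((start, end))
                start, gap = None, 0
    if start is not None:                      # close a still-open interval
        end = len(residual) - 1 - gap
        if end - start + 1 >= min_duration:
            intervals.append((start, end))
    return intervals
```

Short sub-threshold dips up to `max_gap` timesteps are bridged, so an alternating dryer cycle is extracted as one coherent sequence, while too-short spikes are discarded by `min_duration`.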
This line of argumentation has major limitations, however (Baumert et al. 2007). First, there are clear conceptual differences between domain-specific cognitive competencies and general, decontextualized cognitive dispositions such as intelligence (e.g., in processes of knowledge acquisition and information processing and in dependence on the quality of educational environments). Second, although there is a statistically significant correlation between intelligence and scores on domain-specific competence tests, the results of construct validation studies provide strong empirical support for the multidimensionality (i.e., empirical separability) of cognitive measures applied in large-scale educational assessments (see Baumert et al. 2007). Third, evaluations of the educational effectiveness of a specific school, state, or country differ across domains, as shown, for instance, by a recent study (Trautwein et al. 2007) comparing educational outcomes at the end of the academic track in two German states (Baden-Württemberg and Hamburg). Although the Baden-Württemberg students clearly outperformed the Hamburg students in mathematics, with an effect size of Cohen’s d = .98, the respective differences in English achievement (d = .16) and reasoning (d = .07) were negligible. Fourth, intelligence and domain-specific competencies differentially predict academic outcomes such as success at university (Nagy 2006).
5.1 Expansion costs
The expansion cost annuities amount to about 100,000–200,000 € for reference and minimum expansion and do not change significantly over the distribution scenarios (Fig. 9). Complete expansion, on the other hand, causes a multiple of these annual costs, with approximately 500,000 € in 2020 and over 1 million € in 2030. The large difference between reference and minimum expansion on the one hand and complete expansion on the other hand is caused by the 100% flexibility requirement in the case of complete expansion, which represents the benchmark for the theoretical maximum of grid expansion costs. This means that the grid has to be planned for a simultaneity factor of 100% for all flexible loads, which causes a significant need for grid expansion. The load groups used in the reference case reduce the simultaneity factor because the activation periods differ in the respective load groups. In the minimum expansion, activation times are even optimised to minimise the need for grid expansion.
Through the rapid expansion of load sensors and smart meters, the analysis of various loads continuously gains more attention. For example, there is great interest in decomposing an electrical grid's load, since grid operators need to estimate the demand but usually have no information about their consumers' behavior. Paisios proposes a method to decompose and profile the load for demand side management. This has recently led to the development of algorithms for non-intrusive load monitoring (NILM) techniques, which aim to assign a meter's load to the components connected to the meter. NILM often focuses on household meters to detect the loads of various common household equipment. The initial approach for NILM was proposed by Hart in 1992 and gained attention through the increasing digitalization over the last years. These algorithms aim to detect component states by analyzing the load profile. Often not just the load but also additional characteristics such as harmonics are used, as in Srinivasan's analysis of loads with neural networks. As stated by Saitoh, NILM can identify several components' states by analyzing the load. Usually NILM requires the installation of smart meters that provide detailed measurements of the load, so these concepts aim for the design phase of electrical systems. However, the previously mentioned research focuses on analyzing load in order to allocate it to components or to identify states via unsupervised learning methods. At the industrial level there is often no need to identify the states of components, since they are already recorded for other purposes such as production planning and controlling. Energy monitoring systems also usually record just the power demand and no additional load information. An approach to decompose energy data at an industrial scale using NILM was proposed by Holmegaard, along with an analysis of the challenges, however without combining energy data with other production information.
Abele presents an analysis of a milling machine whose load is allocated to its components using PLC signals. Eberspächer presents a similar approach but uses PLC signals as an input for a model to determine the load via simulations, and Gebbe presents a decomposition approach using machine states, processing this data through a linear regression model. The database of Gebbe's evaluation is quite similar to case study 2 in this paper, though with different levels of state detail.
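The state-based regression idea attributed to Gebbe can be sketched, in spirit, as an ordinary least-squares problem: given recorded on/off states of the components and the measured total load, estimate each component's power draw. This is our own minimal reconstruction under the simplifying assumption of a constant power draw per component state, not the published implementation.

```python
import numpy as np

def decompose_load(states, total_load):
    """Estimate per-component power from machine states and total load.

    states: (n_samples, n_components) 0/1 matrix of recorded component states
    total_load: (n_samples,) measured total power
    Returns the least-squares estimate of a constant base load (intercept)
    and each component's power draw.
    """
    X = np.column_stack([np.ones(len(total_load)), states])  # intercept = base load
    coeffs, *_ = np.linalg.lstsq(X, total_load, rcond=None)
    base_load, component_powers = coeffs[0], coeffs[1:]
    return base_load, component_powers
```

Since component states are recorded anyway for production planning and controlling, this decomposition needs no extra sub-metering, which is exactly the appeal of the approach at the industrial level.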
5.5 Starting wages and wage growth
In addition to looking at current wages, our data allow for an analysis of starting wages and thus also wage growth (the difference between current and starting wages). This is especially interesting when it comes to wage returns to skills, as employers cannot fully observe a worker's skillset at the time of hiring; instead, skills and productivity are only fully revealed over time ("employer learning"). Employers are assumed to use school attainment and other observable signals to predict productivity and set starting wages accordingly. Over time, wages are then adjusted to match observed productivity. Similarly, the different hiring channels offer different opportunities for employers to obtain information about a potential employee prior to hiring – either through information obtained in the formal hiring process (such as personal interviews or assessment centers) or by means of information received through networks (such as who recommended the applicant or a personal assessment of the applicant by the recommending person). All of our variables are measured at the same point in time, the present. However, some of them can be assumed to be constant over time and thus be important for wage setting. This holds especially true for levels of formal education (assuming that most workers do not engage in further formal education once they are employed and would only benefit from on-the-job training), but it also most likely holds for non-cognitive skills. Indeed, non-cognitive skills are assumed to be rather stable throughout adult life and have been shown to only react slightly to major life events (Cobb-Clark and Tan, 2011). As the mean age at hiring in our sample is 26 years, we assume that while the survey (and its skills assessment) was conducted after the worker had started his current job, skills have not changed significantly between hiring and the time of assessment.
For literacy and numeracy skills, the assumption seems to be a little more dubious: somebody who uses these skills a lot as part of his workday might improve on these measures, while somebody who does not use them might lose proficiency.