losing balance and falling. Another approach states that priority is given to the motor task over the cognitive task (i.e., the "posture first" strategy; Bloem et al., 2006; Yogev-Seligmann et al., 2012). Thus, there is no explicit limitation of motor performance during DT practice, and the risk of losing balance and falling is low. Our results are in line with neither the first nor the second approach. Task prioritization did not differentially favor task performance in the prioritized domain during DT balance performance in young adults. One might again argue that the central capacity of healthy young adults is not stressed enough by the demands of DT situations. Thus, their cognitive capacity in terms of attentional resources is able to adequately handle both the motor demand of the stabilometer task and the cognitive demand of serial three subtractions. More difficult and attention-demanding tasks under DT conditions might overstrain central capacity in young adults. For example, in older adults it has been shown that visual demands in particular cause greater reductions in DT motor performance than other (e.g., verbal or auditory) tasks (Bock, 2008; Beurskens and Bock, 2012). Shumway-Cook et al. proposed that the allocation of attention during the performance of concurrent tasks is complex, depending on many factors including the nature of the cognitive task, the postural task, the goal of the subject, and the instructions, implying that task prioritization is flexible and depends on a variety of individual, task, and environmental factors (Shumway-Cook et al., 1997; Kelly et al., 2013). This might explain our results, indicating that the motor and/or cognitive task itself did not affect prioritization. Consequently, other aspects, such as individual preconditions (e.g., physique, cognitive status, and motivation), might affect DT performance following prioritized practice.
However, this needs further clarification in terms of DT practice and task prioritization. Furthermore, other studies have shown immediate effects of task prioritization under DT conditions (Schaefer et al., 2008; Kelly et al., 2010; Yogev-Seligmann et al., 2010), indicating that prioritizing one task over another can be beneficial during task execution. Our results did not show such differences in motor and/or cognitive performance following specific test instructions, nor did the instructions lead to a specific learning effect during the retention/transfer phase of the experiment.
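Dual-task effects of the kind discussed above are commonly quantified as the relative performance decrement from single- to dual-task conditions. The excerpt itself does not state a formula, so the following is a generic sketch of that common convention (the function name and the example numbers are illustrative):

```python
def dual_task_cost(single, dual):
    """Relative dual-task cost in percent. Positive values mean
    performance declined under dual-task conditions (this assumes a
    'higher is better' performance measure)."""
    return (single - dual) / single * 100.0

# Illustrative example: a balance score drops from 50 to 42 points
# when the cognitive task is performed concurrently.
print(dual_task_cost(50.0, 42.0))  # 16.0
```

With an error-based measure ("lower is better"), the sign convention would be flipped; studies in this literature report costs separately per task and domain.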
processing to optimize task switching in terms of reduced switch costs and higher net multitasking efficiency compared to a strictly serial processing mode. A similar effect emerged, at least at a descriptive level, when comparing the dual-task efficiency of switchers and alternaters in the second experiment. However, in neither the first nor the second experiment were the effects of overlapping processing strong enough to actually turn dual-task costs into dual-task benefits, as would have been reflected in positive ODTPE scores. This was probably because, in both experiments, overlapping processing was applied in only a limited number of trials and hence could not fully compensate for other cost effects, which certainly prevented a more positive effect of this strategy on overall dual-task efficiency. One reason for this could be a lack of sufficient practice to fully establish the strategy. Another might be the high similarity of the two tasks with respect to the specific processing resources involved (Wickens, 2002), which may have made task interleaving and overlapping processing particularly difficult and prone to negative side effects. It is nevertheless remarkable that the subgroup of switchers in the second experiment was able to realize actual time benefits associated with task switches in up to 47% of switch trials. This shows the potential benefits of a strategy combining task
The aim of our study was to further investigate daytime off-line consolidation when a dual-task setting employing a strong distractor is implemented. Our results first show a significant effect of distraction on implicit motor learning in a dual-task setting. This can be seen from the comparison of the daytime-awake (DA) and the daytime-awake-subsequent-WPAT (DAs) groups. Indeed, performance improved from Training 2 to the Retrieval condition when the SRTT was performed alone, whereas it showed no change if the WPAT was performed concurrently with the SRTT. Similar results were obtained in the nighttime-awake (NA) group, who showed no gains in motor learning even though they did not sleep the first night after training and were allowed to sleep the second night before being retested in the retrieval session. This result rules out general effects of fatigue, which could otherwise have explained weaker performance in the DA group, and adds further support to the strong distracting effect of the concurrent declarative task on implicit motor learning (dual-task interference costs).
Implicit learning and explicit learning may both lead to a mostly implicit knowledge base over time, as many studies in motor learning have shown (Kal et al., 2018), but at the beginning of practice different knowledge bases can be identified. Via questionnaires, the absence of explicit knowledge can be established, which may indicate a largely implicit knowledge base (Schmidtke & Heuer, 1997). The absence of implicit knowledge is difficult to measure, so the degree to which explicitly instructed participants also build an implicit knowledge base cannot be determined, although it is often assumed that learning shown in the presence of a dual task is an indication of implicit learning (Curran & Keele, 1993; but see Shanks, Rowland, & Ranger, 2005). Explicitly instructed participants benefit directly from explicit knowledge about regularities in an SRT task, while implicitly learning participants are slower but achieve better performance on retention tests. Both implicit and explicit knowledge about regularities form a basis for anticipatory control of perception and action. Data from previous experiments confirmed the idea that both types of knowledge aid dual-task performance, since both unaware and aware participants benefited from repeating sequences (Ewolds et al., 2017). In contrast, many studies employing more complex movements (with tracking probably lying somewhere between the SRT task and most sports in terms of complexity) report that, for experts in particular, explicit knowledge hampers performance, presumably because it interferes with well-learned 'automatized' processes (Beilock et al., 2002; Koedijker et al., 2011; Masters, 1992; Poolton et al., 2006). It has been argued that the attentional demands during dual-task learning favor implicit learning and may even prohibit explicit learning, leading to better performance of implicitly instructed participants (Berry, 1993; Maxwell et al., 2003).
The current study examines both implicitly and explicitly instructed participants to test whether implicitly instructed participants have an advantage in dual-task situations.
As one would expect from this reasoning, Fischer and Hommel (2012) found a smaller RCE if participants were primed with a convergent-thinking rather than a divergent-thinking task. If we assume that engaging in divergent thinking leads to a more broadly distributed allocation of processing resources, and that this bias toward more flexibility was sufficiently inert to affect performance in the overlapping dual task, we can conclude that the size of the RCE reflects the relative bias toward persistence versus flexibility. If our hypothesis that high-frequency binaural beats bias the cognitive control style toward flexibility is correct, presenting participants with high-frequency beats should thus increase their RCE in a dual task that manipulates R1-R2 compatibility. We tested this prediction by adopting a task comparable to that used by Fischer and Hommel (2012) and having participants perform it after presenting them with either high-frequency binaural beats (the gamma group) or a continuous tone of 340 Hz (the control group). Given that binaural beats may affect mood (Chaieb et al., 2015), heart rate, and blood pressure (Carter, 2008), we also assessed participants' subjective affective states, heart rate, and blood pressure before and after dual-task performance.
The absence of any gender difference in task-switching and dual-task performance is not in line with the findings of Stoet and colleagues (2013) [17] and of Mäntylä (2013) [19], who observed better multitasking performance for women than for men, or vice versa. A major difference between the present study and the task-switching study by Stoet and colleagues (2013) [17] lies in "stimulus valence". In order to focus on the divided-attention and attention-shifting components of multitasking, we used univalent stimuli in the present study. Whereas bivalent stimuli activate both task sets (and hence induce substantial interference at the stimulus level), the univalent stimuli used in our study were associated with only one task (i.e., letters do not afford the digit categorization task and vice versa) (see [8] for a review) and require less selective attention because the relevant stimulus attribute is cued by the spatial location of the stimulus presentation. Thus, based on our data, we cannot completely exclude gender effects in specific aspects of selective attention when processing bivalent stimuli.
A central experimental task in executive control research is the Stop-signal task, which allows measuring the ability to inhibit dominant responses. A crucial aspect of this task consists of varying the delay between the Go- and Stop-signal. Since the time necessary to administer the task can be long, a method of optimal delay choice was recently proposed: the PSI method. In a behavioral experiment, we show a variant of this method, the PSI marginal method, to be unable to deal with the Go-response slowing often observed in the Stop-signal task. We propose the PSI adjusted method, which is able to deal with this response slowing by correcting the estimation process for the current reaction time. In several sets of simulations, as well as another behavioral experiment, we document and compare the statistical properties of the PSI marginal method, our PSI adjusted method, and the traditional staircase method, both when reaction times are constant and when they are linearly increasing. The results show that the performance of the PSI adjusted method is comparable to that of the PSI marginal method in the case of constant Go-response times, and that it outperforms both the PSI marginal method and the staircase method when there is response slowing. The PSI adjusted method thus offers the possibility of efficiently estimating Stop-signal reaction times in the face of response slowing.
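For context, the traditional staircase method that the PSI variants are compared against adjusts the stop-signal delay (SSD) trial by trial so that stopping succeeds on roughly half of the trials. A minimal sketch of that baseline (the 50 ms step size and 250 ms starting value are conventional illustrations, not taken from this abstract):

```python
def staircase_update(ssd, stopped, step=50):
    """Classic 1-up/1-down staircase for the stop-signal delay (SSD, ms):
    increase SSD after a successful stop (making the next stop harder),
    decrease it after a failed stop. This converges on ~50% stop
    success, around which the stop-signal reaction time is estimated."""
    return max(0, ssd + step if stopped else ssd - step)

# Simulated run: outcomes of five stop trials drive the SSD track.
ssd = 250
for stopped in [True, True, False, True, False]:
    ssd = staircase_update(ssd, stopped)
print(ssd)  # 300
```

Unlike the PSI methods discussed in the abstract, this rule ignores the Go-response time entirely, which is exactly why it degrades when responses slow over the session.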
Abstract—Learning everyday tasks from human demonstrations requires unsupervised segmentation of seamless demonstrations, which may result in highly fragmented and widely spread symbolic representations. Since the time needed to plan the task depends on the number of possible behaviors, it is preferable to keep this number as low as possible. In this work, we present an approach to simplify the symbolic representation of a learned task, leading to a reduction in the number of possible behaviors. The simplification is achieved by merging sequential behaviors, i.e., behaviors which are logically sequential and act on the same object. Assuming that the task at hand is encoded in a rooted tree, the approach traverses the tree searching for sequential nodes (behaviors) to merge. Using simple rules to assign pre- and post-conditions to each node, our approach significantly reduces the number of nodes while keeping the task flexibility unaltered and avoiding perceptual aliasing. Experiments on automatically generated and learned tasks show a significant reduction of the planning time.
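The merging idea can be illustrated with a toy sketch. The `Node` class, the action concatenation, and the same-object merging rule below are deliberate simplifications of the paper's pre-/post-condition machinery, assumed here only for illustration:

```python
class Node:
    """A behavior in a rooted task tree: an action applied to an object."""
    def __init__(self, action, obj, children=None):
        self.action, self.obj = action, obj
        self.children = children or []

def merge_sequential(node):
    """Collapse chains of strictly sequential behaviors (single child)
    that act on the same object into one composite behavior, then
    recurse into the remaining children."""
    while len(node.children) == 1 and node.children[0].obj == node.obj:
        child = node.children[0]
        node.action += "+" + child.action
        node.children = child.children
    for c in node.children:
        merge_sequential(c)
    return node

def count(node):
    """Total number of behaviors (nodes) in the tree."""
    return 1 + sum(count(c) for c in node.children)

# approach -> grasp -> lift on the same cup collapse into one behavior.
task = Node("approach", "cup", [Node("grasp", "cup", [Node("lift", "cup")])])
merge_sequential(task)
print(count(task), task.action)  # 1 approach+grasp+lift
```

A planner searching over this tree now considers one composite behavior instead of three, which is the source of the planning-time reduction the abstract reports.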
Because of the downsides of device-oriented tasks, they are usually avoided during the design of user interfaces (UIs). Where this is not possible, device-oriented tasks are often made obligatory, i.e., the application logic forces users to perform them. Examples of this practice are the login button that users have to press after entering their credentials, or teller machines that return the card before delivering money. Making a step obligatory is quite effective: in a previous study, the expected error increase for device-oriented steps occurred only if the respective step was also non-obligatory (Halbrügge et al., 2015). Task necessity is therefore an important factor in the genesis of errors.
Abstract: The performance of choice-reaction tasks during athletic movement has been demonstrated to evoke unfavorable biomechanics in the lower limb. However, the mechanism of this observation is unknown. We conducted a systematic review examining the association between (1) the biomechanical and functional safety of unplanned sports-related movements (e.g., jumps/runs with a spontaneously indicated landing leg/cutting direction) and (2) markers of perceptual–cognitive function (PCF). A literature search in three databases (PubMed, ScienceDirect and Google Scholar) identified five relevant articles. The study quality, rated by means of a modified Downs and Black checklist, was moderate to high (average: 13/16 points). Four of five papers, in at least one parameter, found either an association of PCF with task safety or significantly reduced task safety in low vs. high PCF performers. However, as (a) the outcomes, populations and statistical methods of the included trials were highly heterogeneous and (b) only two out of five studies had an adequate control condition (pre-planned movement task), the evidence was classified as conflicting. In summary, PCF may represent a factor affecting injury risk and performance during unplanned sports-related movements, but future research strengthening the evidence for this association is warranted.
The objective of this work is to analyze the impact of Galileo, through the use of a combined constellation, on the performance of GBAS under severe ionospheric gradients. The errors experienced by users at spatial separations of 5 km and 20 NM, respectively, from the GBAS ground station are evaluated. The simulation scenario considers an ionospheric anomaly with a gradient of 420 mm per km between the ground station and the user, a value which several publications have shown to be a worst-case assumption. The dual-frequency smoothing techniques mentioned above are applied.
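To first order, the differential ionospheric delay between station and user scales linearly with the gradient and the spatial separation. A back-of-the-envelope sketch of that scaling for the two baselines mentioned above (it deliberately ignores smoothing-filter dynamics, satellite geometry, and gradient front motion):

```python
def iono_differential_error_m(gradient_mm_per_km, baseline_km):
    """First-order differential ionospheric delay (in meters) between
    the GBAS ground station and a user: gradient times separation."""
    return gradient_mm_per_km * baseline_km / 1000.0

NM_TO_KM = 1.852  # one nautical mile in kilometers

print(iono_differential_error_m(420, 5))              # 2.1 m at 5 km
print(iono_differential_error_m(420, 20 * NM_TO_KM))  # ~15.56 m at 20 NM
```

Even this crude estimate shows why a 420 mm/km gradient is treated as a worst case: the raw differential error at either baseline far exceeds typical Category I vertical alert limits unless it is detected and mitigated.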
Unfortunately, there seems to be considerable confusion about which task performance indices should be reported and how they can be interpreted. Some studies have reported indices based on signal detection theory, such as the discrimination index d' or the decision bias C, and it has been argued that C is a better indicator of disinhibition than commission errors alone, as it takes both hits and false alarms into account (Noël et al., 2005; Mobbs et al., 2008). Yet it is not clear whether these should actually be preferred over simpler measures. For example, the same authors switched to reporting the more straightforward reaction times, omission errors, and commission errors in their later works (Noël et al., 2007; Mobbs et al., 2011). While omission errors may indicate lapses of attention, reaction
Based on the differentiation between upward and downward comparison, the question arises of how the direction of social comparison influences motivation. As past research has found, there is no clear answer, because the affective outcome of a comparison does not depend on its direction but rather on its salient implication. More precisely, a comparison can imply an assimilative or a contrastive evaluation of the comparison target. Assimilation refers to the belief that one could achieve the same status as the comparison target, whereas a contrast in social comparison emphasizes the separation between oneself and the target.27 Therefore, an assimilation is likely to inspire an individual and to increase motivation in an upward comparison, but can, on the other hand, cause demotivation in the case of a downward comparison. A contrast, however, has the opposite effect in upward comparisons because it may foster negative feelings of one's own inferiority and lack of ability. In downward comparison, it engenders positive feelings of relief, which can spur the motivation for further performance improvement. Overall, assimilation effects are expected to be predominant in comparisons because large differences between the self and the other typically lead to the cessation of social comparison.28 Nevertheless, social comparison often entails moderate differences, and prior research has identified various factors that influence whether assimilative or contrastive evaluations are implied by a comparison.29 Commonly, it is stated that assimilation is facilitated by feelings of 'we-ness' through identification and psychological closeness to the comparison target. Consequently, contrast is promoted by contrary feelings of 'I-ness', which are determined by factors stimulating the salience of a personal self in opposition to a social self.30
As Foerstner already pointed out, performance evaluation of computer vision algorithms is a challenging task. There are two main approaches. One is to split the overall algorithm into several sub-tasks which can be evaluated analytically; a typical example is Courtney. The second approach is to perform an exhaustive evaluation against some ground truth, with the challenge of managing the required evaluation effort. For example, Appenzeller and Crowley focused on parameter control for fair evaluation. Vogel and Schiele present a method for optimal adaptation using performance prediction on simulated data. From a technological point of view, Everingham, Muller and Thomas developed the most similar approach: they also use a genetic algorithm to enable a comprehensive evaluation. The main difference is that we additionally integrate the analysis of context dependencies.
The wireless propagation channel is inherently non-stationary, i.e., the statistical description of the channel changes over time. These time-variant channel statistics lead to a temporal variation in the performance of wireless communication systems. To understand the time evolution of the performance of such systems, the non-stationarity of the channel must be investigated. This thesis introduces a methodology that provides a case-specific definition of regions within which the channel can be treated as a stationary process. The resulting "local quasi-stationarity" (LQS) regions allow the determination of a maximum performance degradation of selected algorithms caused by outdated channel statistics. Based on this methodology and on existing methods, an extensive measurement-based analysis of the non-stationarity of single-polarized (SP) and dual-polarized (DP) multi-antenna channels is carried out. Channel estimation and beamforming algorithms are considered as examples. The analysis shows that the size of the LQS regions depends strongly on the algorithm under consideration.
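A widely used measure of drift in multi-antenna channel statistics is the correlation matrix distance (CMD) of Herdin et al.; a sketch of using it to delimit an LQS-like region follows. This is a generic illustration, not necessarily the exact metric or threshold used in the thesis:

```python
import numpy as np

def cmd(R1, R2):
    """Correlation matrix distance (Herdin et al.): 0 when the two
    spatial correlation matrices have identical structure, approaching
    1 when their eigenstructures become orthogonal."""
    num = np.trace(R1.conj().T @ R2).real
    return 1.0 - num / (np.linalg.norm(R1, 'fro') * np.linalg.norm(R2, 'fro'))

def lqs_length(Rs, threshold=0.2):
    """Number of consecutive snapshots (starting from the first) whose
    correlation matrix stays within `threshold` CMD of the first one;
    the threshold here is purely illustrative."""
    n = 1
    for R in Rs[1:]:
        if cmd(Rs[0], R) > threshold:
            break
        n += 1
    return n
```

Applied per algorithm, such a threshold would be chosen from the tolerable performance degradation, which is consistent with the thesis's finding that LQS region size depends strongly on the algorithm.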
Another way of expressing this is to say that when a primal corresponds to the data-generating process, additional moment conditions are superfluous: they will in the limit attract Lagrange multiplier values of zero and consequently affect neither the value of the program nor the solution. In a sense, this is obvious: the parameters of the primal can typically be identified and estimated through an M-estimation problem that will generate K equations to be solved for the K unknown parameters. Nonetheless, the recognition that the only moment conditions that contribute to enforcing the independence requirement are those whose imposition simultaneously reduces the objective function while providing multipliers that are coefficients in the stochastic representation of Y suggests the futility of portmanteau approaches (e.g., those based on characteristic functions) to imposing independence. The dual formulation reveals that to specify the binding moment conditions is to specify an approximating data-generating-process representation, which then can be extrapolated to
In this study, we test this theory by means of a set of specific hypotheses that can be summarized as follows: First, the formation of dual-earner couples strongly depends on the wife's level of schooling and the position of her occupation within social stratification. Second, family events, namely, childbirth and child-rearing, represent the main barrier to the continuity of the labour market participation of Italian wives; married working women who are highly educated and at the top of the occupational ladder are in a far better position to elude this risk than are all other working wives. Third, the work histories of Italian wives are almost completely independent of those of their husbands. Fourth, the formation of dual-career couples is not a process that starts after marriage, but before it. In other words, we hypothesize that it is the high level of homogamy with respect to education, occupation, and, even more, social origin that drives the formation of dual-career couples, and not specific joint strategies followed by wives and husbands after marriage. Before testing these hypotheses, we provide some descriptive statistics intended to give basic information on the numbers and characteristics of dual-earner and dual-career couples in contemporary Italy.
As an extension, it is also possible to include signals from a second constellation on the same frequency, e.g., the Galileo E1 signals. The processing of the signals (e.g., 100-second / 30-second smoothing) and the integrity monitoring (inflating sigmas for ionospheric threat mitigation) would be accomplished in the same way as with current systems, provided that ground and airborne GBAS equipment can use the same second constellation. These modes would benefit from improved geometry, typically yielding much smaller protection levels and thus improved availability of the GBAS landing system. This is especially beneficial at airports with significant masking angles for the ground station due to suboptimal sites for the reference antennas. It could potentially also bring benefits during increased ionospheric activity, which often affects only part of the sky: even when monitors exclude individual satellites, a good geometry could in many cases still be available. As the single- and dual-constellation cases are similar in processing and integrity monitoring (though not necessarily in performance), we regard them as a single mode for the rest of the paper.
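The 100-second / 30-second smoothing mentioned above refers to carrier smoothing of the noisy code pseudorange with the precise carrier phase. A textbook Hatch-filter sketch, assuming a 1 Hz update rate so the time constant equals the sample count (the function name and structure are illustrative, not the actual GBAS implementation):

```python
def hatch_smooth(code, carrier, time_constant=100):
    """Carrier-smoothing (Hatch) filter: each smoothed pseudorange
    blends the current code measurement with a prediction formed by
    propagating the previous estimate with the carrier-phase delta.
    The averaging window grows until it reaches `time_constant`
    samples (e.g. 100 or 30 at a 1 Hz update rate)."""
    smoothed = [code[0]]
    for k in range(1, len(code)):
        n = min(k + 1, time_constant)
        predicted = smoothed[-1] + (carrier[k] - carrier[k - 1])
        smoothed.append(code[k] / n + predicted * (n - 1) / n)
    return smoothed
```

The shorter 30-second constant reduces the filter's sensitivity to ionospheric divergence between code and carrier at the cost of less noise suppression, which is why both settings appear in GBAS processing.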
"step in" at a point at which agents already hold some initially signed contracts. An interesting avenue for future research is to generalize this and also model the interaction with the task providers. An intermediate step toward this may be to study a model in which agents may choose not to complete some tasks.
Thus far, we have not discussed who chooses the mechanism but rather argued in favor of a particular mechanism. Again, it is intrinsic to the open, blockchain-based economy that anyone can set up a smart contract. This leaves little room for rent extraction for the contract creator as anyone can copy the contract, make modifications, and invite the service providers to settle their allocation on the alternative platform. Thus, we expect that whether a mechanism will thrive in practice will depend on the fairness of the solution it provides. The reasoning is entirely analogous to what has been observed for centralized matching mechanisms, which, as noted for instance by Roth (1991), are most often successful when the outcomes they produce are perceived as fair (“stable”). Hence, similar to the evolution of norms in society, we expect to converge towards a mechanism that ensures cost-effective allocations with fair sharing of the cost reductions.
5.3. Limitations and future implications
One limitation of our study is the cross-sectional nature of the data; thus, care should be taken in drawing causal inferences (i.e., one may argue that positive experiences of leader effectiveness lead to a rise in the political skill of leaders). However, theory would predict that our hypotheses accurately place leader political skill as an antecedent to follower outcomes. Moreover, in order to reduce other biases, the constructs under observation were assessed by two sources: the performance of the teams was assessed by their corresponding team leaders, and the individual- and group-focused LPS was evaluated via the followers' perceptions of their leader's political skill, aggregated at the team level. Although we had different raters for the constructs used in this study, the fact that leaders rated their own team's performance might have introduced a self-serving bias; that is, leaders might have inflated the ratings of their own teams to come across as effective leaders. Thus, future research could benefit from using more sources for rating performance, for instance, asking the supervisor of the leader to rate team performance instead of relying only on the leader's ratings. On the other hand, to reduce selection bias, the followers' participation was voluntary (i.e., they were neither nominated nor pressured by their leaders, but were contacted by the researchers themselves), and they were fully informed about the purpose of the study. Therefore, in contrast to leader-nominated followers, such a sample carries less selection bias and is expected to give more accurate responses.