In all the applications, four chains were run from overdispersed initial values. These values were chosen from prior knowledge of the likely regions for the hyperparameters; they were taken as the endpoints of intervals for components that were uniformly distributed and as the 0.01 and 0.99 quantiles for components that were normally distributed. Chains were run until convergence was diagnosed according to the Gelman and Rubin (23) and Geweke (24) diagnostics applied to the posterior density. After convergence, samples of size 2000 were stored for inference from all the chains. The pace of the moves between iterations was set by increasing or decreasing the proposal random-walk variance so as to keep the acceptance rates between 30% and 60%. These proposals have been used extensively in the literature with good results reported, which was confirmed in our simulations. Therefore, the alternative forms outlined above were not used in the applications.
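The step-size tuning described above can be sketched as follows. This is a minimal illustration of a random-walk Metropolis sampler whose proposal standard deviation is scaled during burn-in until the acceptance rate falls in the 30-60% window; the function name, the adaptation factors (1.1 and 0.9), and the toy target are assumptions, not the authors' exact scheme.

```python
import math
import random

def metropolis_tuned(logpost, x0, n_tune=2000, n_keep=2000, step=1.0, seed=0):
    """Random-walk Metropolis with simple acceptance-rate-based step tuning.

    During tuning, the proposal standard deviation is enlarged when the
    acceptance rate exceeds 60% and shrunk when it falls below 30%,
    mirroring the 30-60% target window described in the text.
    (Illustrative sketch; the adaptation rule is an assumption.)
    """
    rng = random.Random(seed)
    x, lp = x0, logpost(x0)

    def run(n, adapt):
        nonlocal x, lp, step
        accepted, samples = 0, []
        for i in range(1, n + 1):
            prop = x + rng.gauss(0.0, step)
            lp_prop = logpost(prop)
            if rng.random() < math.exp(min(0.0, lp_prop - lp)):
                x, lp = prop, lp_prop
                accepted += 1
            samples.append(x)
            if adapt and i % 100 == 0:
                rate = accepted / i
                if rate > 0.60:
                    step *= 1.1   # moves too timid: lengthen steps
                elif rate < 0.30:
                    step *= 0.9   # too many rejections: shorten steps
        return samples, accepted / n

    run(n_tune, adapt=True)                    # burn-in with adaptation
    samples, rate = run(n_keep, adapt=False)   # fixed kernel for inference
    return samples, rate, step
```

Freezing the step size after burn-in keeps the stored samples from a valid, non-adaptive Markov chain, which is why adaptation is restricted to the tuning phase.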
DLR’s Institute of Flight Systems (FT) has a long tradition in flight research and in the simulation of various flight vehicles. Currently, AVES, a modern research simulator facility, is operated at DLR Braunschweig. AVES is designed such that interchangeable cockpits of rotorcraft (EC135) and airplanes (A320) can be operated on motion and fixed-base platforms according to particular needs. 2Simulate is the enabling real-time simulation infrastructure of AVES, over which all simulator software components are integrated. This effort adopts best practices from both the aerospace and automotive industries. It tackles the model integration problem of research flight simulators by developing a model integration workflow for the indigenous simulator infrastructure, namely 2Simulate. The motivation is to contribute to flight simulator development by introducing a model integration workflow for institutionalizing MBDSD.
Sola et al. employed a multicriteria evaluation of ten widely used topographic correction methods, applied to SPOT-5 scenes in the Pyrenees. The best ranking was obtained for the SE method. In an earlier study using Landsat imagery, Hantson and Chuvieco came to a similar conclusion. Recently, Ma et al. analyzed the uncertainty propagation chain of two semiempirical topographic correction models. A simplified terrain/BRDF model restricted to vegetation canopies was published for Landsat-8 OLI data. Even this simplified 1-D model is complex, requiring input data about the canopy structure and optical parameters, which are difficult to obtain on a global scale. In addition, a homogeneous canopy cover per scene is assumed, and the influence of the surrounding terrain is neglected. Therefore, this article concentrates on general-purpose topographic correction models suitable for the operational processing of large data volumes.
The hierarchical predictive coding theory is supported by a number of experimental studies. For example, functional magnetic resonance imaging (fMRI) data have shown that illusory contours persist in the early visual cortex even when the contours disappear from the sensory input [Muckli et al., 2005]; this can be explained by encoding through local lateral interactions within V1 as well as by top-down predictive influences mediated by higher-level motion-sensitive areas such as MT/V5. The predictive coding theory can also explain other low-level adaptation phenomena, such as the relative attenuation of neural signals (a.k.a. repetition suppression) [Summerfield et al., 2008], i.e., a reduction of the neural response when stimuli are presented repeatedly. To further place predictive coding in a neuroanatomical context for visual processing, Bar proposed that the medial frontal regions encode predictive templates (constructed from individual objects) that are learnt associatively in contextual scenes at a higher visual cortex, which acts as a low-frequency adaptive filter. Similar effects have also been observed in electrophysiological recordings of auditory cortices: while participants listened to auditory stimuli of varying pitch strength, the neural activity between the adjacent and primary auditory cortical areas could be explained by the principle of predictive coding [Kumar et al., 2011].
The film thickness evolves in a step-like manner, corresponding to the formation of an integer number of layers in the lubricant. The fluid can reach two possible configurations: ordered structures, characterized by well-defined layers with little to no bridging, and transition zones, where the central layers merge into a disordered, entangled state. A stepped velocity profile is observed under shearing, with velocity jumps occurring between layers. The standard notion of viscosity is therefore no longer applicable during local film breakdown, and the fluid undergoes a solid-like transition. In the absence of wall slip for smooth wettable surfaces, inter-layer bridging and the degree of organization within the lubricant determine the fluid's resistance to shearing. Hence, shear stress values oscillate as the thickness is reduced, featuring local minima for well-ordered configurations and maxima for entangled states. A friction maximum is reached for a surface separation of one single lubricant layer, where large wall slip takes place. High shear rates and friction in the system lead to significant heat generation, as well as a rapid temperature increase in the lubricant and the surfaces. On the other hand, friction and heat generation are reduced in well-ordered states, and no significant heating of the lubricant is observed.
In order to investigate the transferability of parameters under changed climatic conditions, the transfer performance has been normalized with respect to the model calibration performance for the data period. The percentage difference between the normalized model performance and the maximum possible value (1) denotes the accuracy of the simulated model performance: the smaller this value, the better the transfer, and a value of zero indicates that the simulation is as good as the model calibration. Figure 6.8 shows the percentage reduction in efficiency NS for all sub-periods, and Figure 6.9 shows the absolute value of the simulated biases. Here the percentage reduction in model performance is plotted against the percentage difference in average annual precipitation in the validation period relative to the calibration period, with the average annual precipitation of the calibration period taken as the denominator [Vaze et al., 2010]. All the points on the left side of the y-axis represent simulations in which the precipitation in the simulation sub-period is lower than in the calibration sub-period, and vice versa. As expected, the simulation results are very similar for HBV and HYMOD because of their similar model structure. Figure 6.8 shows that in a small number of simulations (HBV: 5% and HYMOD: 2%) the simulated NS values are close to the individual calibration results, which demonstrates that the calibrated model parameters transfer well to the receiver sub-periods. The lines of best fit in this figure show a clear correlation between the difference in weather conditions between the calibration and validation sub-periods and the reduction in model efficiency NS. As the difference in rainfall conditions between calibration and validation sub-periods increases, the reduction in model performance generally becomes larger.
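The two quantities plotted in Figures 6.8 and 6.9 follow directly from the normalization described above. The small sketch below makes the formulas explicit; the function names are hypothetical, but the arithmetic matches the text (calibration performance and calibration-period precipitation as denominators).

```python
def pct_reduction_ns(ns_validation, ns_calibration):
    """Percentage reduction in NS efficiency relative to calibration.

    0 means the transferred parameters perform as well as in calibration;
    larger values indicate a poorer transfer. (Hypothetical helper name;
    the formula follows the normalization described in the text.)
    """
    return (1.0 - ns_validation / ns_calibration) * 100.0

def pct_precip_difference(p_validation, p_calibration):
    """Percentage difference in average annual precipitation, with the
    calibration period as the denominator (cf. Vaze et al., 2010)."""
    return (p_validation - p_calibration) / p_calibration * 100.0
```

A point on the left side of the y-axis in Figure 6.8 thus corresponds to a negative `pct_precip_difference`, i.e. a validation sub-period drier than the calibration sub-period.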
Figure 6.9 indicates that the mean value of the absolute biases is significantly higher when there is a larger difference between precipitation in the calibration and validation sub-periods. It also indicates that the absolute biases are usually greater when model parameters calibrated in a high-precipitation sub-period are used to model the discharge in a low-precipitation sub-period than in the opposite case. The figure further shows that some simulations of relatively wet sub-periods using parameters transferred from dry ones lead to significant water balance errors.
Control variables. We included several control variables to account for potential alternative explanations. To control for industry fixed effects, we included industry dummy variables. As explained above, five different industries were represented in our sample (high tech, automotive & assembly, chemicals, energy, and consulting); hence, we included four industry dummy variables. For each BU we coded the respective industry dummy as 1 when the BU belonged to that industry and 0 otherwise. Controlling for industry is important, as different industries represent different levels of environmental uncertainty, which prior research has shown to influence the level of differentiation and integration (e.g. Burgers and Covin, 2013). Furthermore, we controlled for country fixed effects, as different countries might influence organizational control mechanisms in BUs; thus, we included a dummy for BUs headquartered in Switzerland. We also controlled for BU size, which has likewise been shown to influence the balance between differentiation and integration (e.g. Burgers and Covin, 2013), and for BU age, as older BUs might face more problems in developing innovative business models than younger BUs. We controlled for BU R&D intensity, as R&D intensity is often used as a proxy for innovativeness (Hitt, Hoskisson, and Kim, 1997) and may in turn influence a BU's ability to develop innovative business models. Furthermore, at the initiative level, we controlled for initiative size, the number of employees working on the initiative on average over the last two years; this number was provided by the initiative members themselves and checked for correctness by top management. Finally, we controlled for initiative duration, as longer initiatives may show different novelty levels because they had more time for experimentation.
BIM for issues such as quantity take-off, clash detection and avoidance, etc.” “BIM is an excellent tool for coordination and visualization, allowing better decisions to be made, but only if everything is clear”. “We also use the models for energy, thermal light, smoke, fire evacuation simulations and so on, however unfortunately not in the same software tool”. The lack of compatibility “requires the BIM building geometry to be rebuilt because of the various software platforms”. With respect to FM they feel “the FM industry needs to define its needs and specifications and think about interoperability between the BIM models and existing FM systems as the difficulty is to keep the virtual model up to date”. “I think in the future, digital recordings of existing projects will become more important. In the future, planning using BIM will become a very significant tool for renovation projects”. “If the information content in the virtual and real buildings is consistent, a variety of benefits can be generated, e.g. managing life cycle costs for components, risk assessments, etc.” For us “energy and lighting simulations are at the forefront of today's applications. We can imagine that buildings in the future can be much better reviewed, and even the movement of goods and people flow can be simulated digitally”.
students and tourists. They also include the development of specific institutional structures and policies concerning the incorporation of the new arrivals. These directly affect the arrivals’ access to services and resources, as well as their ability to participate in the larger society. Ultimately, they play a critical role in how individual migrants relate to that society in terms of intentions to reside, their commitment to the country and their relations with other countries and societies. While the modes of incorporation favoured by migrants (as opposed to those encouraged by policy-makers) are typically extremely diverse and characterised by flexibility and mutation, their actual outcomes need to be considered as a reflection of policy influences at the national level which are rooted in developments within local communities, as well as in the daily lives and encounters of individuals. Despite this diversity in policies and migrant strategies throughout the region, there are also certain similarities. The first is that migration is acknowledged as a major factor in economic development. It can supply labour market needs, whether these be for skilled or unskilled labour. It provides the consumers for local manufacturing as was the case for post-World War II Australia and, more recently, the tourists, international students and the increasing number of health tourists who support the local service industries in countries like Australia, Singapore and Thailand. Secondly, governments in the region have become major actors in the control and promotion of migrant entry and determining how they can become incorporated into society. Since the beginning of the twenty-first century, new public and governmental concerns about cross-border criminal activities, terrorism and the spread of disease have given further impetus to government involvement. 
Thus it is important to consider whether there are now increasing convergences in the policies and patterns relating to migration in the region and, if so, what factors contribute to this convergence.
Wind, solar and demand fluctuate on even shorter time scales than hours. As REMIX has an hourly resolution and does not additionally represent sub-hourly phenomena, it cannot cover the very-short-term effects of managing a power system. However, the representation of hourly variability already results in substantial deployment of flexible technologies like gas turbines, hydro power and battery storage at higher VRE shares. These technologies can also provide flexibility on a sub-hourly scale. Additionally, advanced VRE generators can increasingly supply active power control (Ela et al., 2014). We therefore do not expect that including sub-hourly details would have a large effect on the results at the aggregation level used for the current analysis. This view is supported by a power sector study that varied the modeling resolution between 1 hour and 5 minutes. It found that modeling sub-hourly features has an impact on cycling/ramping values, but is of low importance to the aggregated investment behaviour (Deane et al., 2014).
This Ph.D. thesis is an attempt to provide accessible, reproducible and efficient spatial assessments of the magnitude, and the human and ecological impacts, of freshwater degradation on large scales, and thus to contribute to integrated freshwater management. Spatial-ecological approaches for large-scale freshwater degradation analyses are still constrained by many issues that should be addressed by future research. Stressor data scarcity is certainly one of them, not only for resource-constrained developing countries but also globally for many relevant variables. For example, stream temperature data are substantially lacking and hence air temperatures are used as surrogates (Domisch et al., 2013; Li et al., 2014). Data for organic chemicals have only recently become available for a few regions and hence large-scale risk assessments are lacking (Malaj et al., 2014). Besides space-time integration, the incorporation of local variability and the usage of large-scale secondary datasets as predictors, crowd-sourced data arising from citizen science have shown considerable potential to fill spatial data gaps (Dickinson et al., 2010). However, low quality standards and the lack of a common data structure impede the usage of crowd-sourced data for the analysis of large-scale freshwater degradation (Hochachka et al., 2012). Hence, future studies should develop appropriate quality standards and adaptable data structures to include crowd-sourced data in large-scale human and ecological impact analyses. Coverage of spatial risk assessments for many stressors, especially contaminants such as pesticides, is also lacking for many regions due to data scarcity and resource constraints; for example, only 10% of the major stream catchments are covered by risk assessments in south Asia (UNEP). Hence, considerable expansion is required in risk assessments as well as in water quality monitoring in developing regions through an extensive mobilization of resources.
In addition, a comprehensive and robust risk assessment from contaminants requires development of an exhaustive toxicity database based on laboratory and field experiments.
For the certification of a series of structurally identical rotor blades, static and cyclic full-scale tests on only one or at most two rotor blades are required. A drawback, besides the high costs, is that the test blades can only be considered representative, owing to the uniqueness of each blade and to the more complex environmental conditions in operation. Furthermore, since structural collapse is caused by local failure, the testing of structural details is prescribed in international standards. With regard to adhesive bonds, mechanical tests at coupon level with thicknesses of t = 0.5 mm and t = 3.0 mm are required for material certification, even though this is not realistic for practical rotor blade design [19, 23]. To close the gap between full-scale and coupon tests, the Fraunhofer IWES has developed the Henkel beam as a representative sub-component for detailed investigations of adhesive bonds, see Fig. 2.
As the countries of Europe have successfully managed to move the region’s integration forward step by step, the European experience offers three possible models for regional integration with different depths: a free trade arrangement, a single market, and a common currency area. In this paper, we examine the effect of these three different models of regional integration on total factor productivity (TFP) to assess the long-run growth implication of each model. Our findings suggest that joining a regional grouping changes the way participating economies grow, no matter which model of regional integration is used: domestically powered growth becomes less important, and regionally powered growth becomes the new source of growth. As existing theory identifies knowledge creation and its spillovers as key drivers of economic growth, regionally powered growth is expected to become relatively more important with a higher level of intra-regional dependence on research and development (R&D) spillovers. Of the three models for regional integration, the free trade arrangement is found to be the most effective in promoting intra-regional dependence on R&D spillovers. We find that largely negative windfall effects on TFP are associated with the other two models.
promoters could not entirely replace a chromatin-state-based approach because (i) the use of expression data in FANTOM5 limits the identified regulatory regions to transcriptionally active elements and (ii) CAGE was shown to be less sensitive to rapidly degraded transcripts than GRO-cap and therefore might miss regulatory enhancers with unstable transcripts. Nonetheless, FANTOM5 provides an annotation of enhancers and promoters based on independent data that is well suited to assess how well the models distinguish promoters from enhancers. We filtered the FANTOM5 annotation of promoters and enhancers for activity in K562 by overlapping them with DHS and GRO-cap TSSs. We considered that a promoter state performed well when the recall of FANTOM5 promoters was high and the recall of FANTOM5 enhancers was low, and vice versa for enhancer states. GenoSTAN-Poilog-K562 and ChromHMM-nature enhancer states recall most FANTOM5 enhancers (60%, Figure 15D, Appendix Table A2). For enhancer states, the recall of FANTOM5 promoters was around 10%, except for those of Segway-ENCODE, which recalls almost 35% of FANTOM5 promoters, and EpicSeg, which recalls 21% of FANTOM5 promoters. In accordance with this, many promoter regions were erroneously classified as enhancer regions in this segmentation (e.g. the TAL1 promoter in Figure 14A). The recall of FANTOM5 enhancers by promoter states was generally higher (17%-37%). GenoSTAN-Poilog-K562 and -nb-K562 recalled more than 90% of FANTOM5 promoters and around 20% of FANTOM5 enhancers, which is comparable to other studies (Segway-nmeth, ChromHMM-nature, EpicSeg). ChromHMM-ENCODE promoter states had a comparable recall of FANTOM5 promoters (92%), but a higher recall of FANTOM5 enhancers (37%) (Figure 15D). This strong overlap of ChromHMM-ENCODE promoters with FANTOM5-labeled enhancers is in accordance with our observation that some enhancer regions were erroneously classified as promoters in ChromHMM-ENCODE (Figure 14A).
These results show that GenoSTAN segmentations distinguish promoters from enhancers with similar or better accuracy than other segmentations.
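The recall numbers above come down to interval overlap: the fraction of FANTOM5-annotated elements that are hit by at least one interval of a given chromatin state. A toy version of that computation, assuming half-open `(start, end)` intervals on a single chromosome (a simplification of the genome-wide overlap actually used):

```python
def recall(annotated, predicted):
    """Fraction of annotated intervals overlapped by >= 1 predicted interval.

    Intervals are (start, end) pairs on one chromosome; any non-empty
    overlap counts as a hit. (Toy stand-in for scoring state segmentations
    against FANTOM5 promoters/enhancers.)
    """
    def overlaps(a, b):
        return a[0] < b[1] and b[0] < a[1]

    hits = sum(any(overlaps(a, p) for p in predicted) for a in annotated)
    return hits / len(annotated)
```

A promoter state then scores well when `recall(fantom5_promoters, state_intervals)` is high while `recall(fantom5_enhancers, state_intervals)` is low, and vice versa for enhancer states.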
Explicit methods for solving differential equations are methods that use only already known values of the function at earlier grid points to determine the value at the next grid point. The efficiency and accuracy of explicit methods is typically sufficient for the systems of ODEs used to model neuronal behavior. Popular examples of such methods are the explicit 4th-order classical Runge-Kutta method and the explicit embedded Runge-Kutta-Fehlberg method (Dahmen and Reusken, 2005) for the approximate solution of ODEs. Most neuron model implementations currently use explicit stepping algorithms and still achieve satisfactory results in terms of accuracy and simulation time (Morrison et al., 2007; Hanuschkin et al., 2010). However, some published models involve possibly stiff differential equations (e.g., Brette and Gerstner, 2005), which potentially require a different class of solvers.
On the level of the spiking activity, the integrate-and-fire neuron is one of the most commonly used descriptions of neural activity. A multitude of variants has been proposed to cope with the huge diversity of behaviors observed in biological nerve cells. The main appeal of this class of model is that it can be defined in terms of a hybrid model, where a set of mathematical equations describes the sub-threshold dynamics of the membrane potential and the generation of action potentials is often only added algorithmically, without the shape of spikes being part of the equations. In contrast to more detailed biophysical models, this simple description of neuron models allows the routine simulation of large biological neuronal networks on standard hardware widely available in most laboratories these days. The time evolution of the relevant state variables is usually defined by a small set of ordinary differential equations (ODEs). A small number of evolution schemes for the corresponding systems of ODEs are commonly used for many neuron models, and form the basis of the neuron model implementations built into commonly used simulators like Brian, NEST and NEURON. However, an often neglected problem is that the implemented evolution schemes are only rarely selected through a structured process based on numerical criteria. This practice cannot guarantee accurate and stable solutions for the equations, and the actual quality of the solution depends largely on the parametrization of the model. In this article, we give an overview of typical equations and state descriptions for the dynamics of the relevant variables in integrate-and-fire models. We then describe a formal mathematical process to automate the design or selection of a suitable evolution scheme for this large class of models. Finally, we present the reference implementation of our symbolic analysis toolbox for ODEs that can guide modelers during the implementation of custom neuron models.
electrostatic and effective vdW interactions only when choosing the appropriate order of introducing the fine-grained interaction components. In simulations of the CG system, the new approach ensures that the vdW part of the interaction free energy has converged at the cutoff. Furthermore, methods for long-range electrostatics can be used for a proper treatment of the electrostatic interactions of the model. The CRW-CG interaction potentials are derived from two fine-grained molecules in vacuo and are not parametrized to fit any structural or thermodynamic target quantity. Still, coarse-grained simulations reproduce liquid-state properties in satisfactory agreement with the fine-grained model. In particular, the bulk density and the thermal expansion coefficient are reproduced quite accurately. This shows that CRW models with explicit electrostatics exhibit a high degree of transferability, as has been observed for CRW models of apolar molecules [3, 12, 14]. Furthermore, both CG models presented in this work predict the vapor-liquid surface tension in good agreement with reference simulations, an observation that can serve as further evidence for the state-point transferability of CRW-CG potentials. Unfortunately, the CRW models do not reproduce the relative dielectric permittivity of the FG systems. This quantity depends on a number of factors, and it is unclear whether modifying one of these factors independently will result in a better reproduction of the dielectric properties. However, if a correct reproduction of this quantity is desired, one could attempt to tune the charges on the CG sites in order to change the molecular dipole moment in systems where the orientational correlations between dipole moments are well reproduced (like EPP). We advise, however, that such tuning be performed carefully, because whenever such a modification of the model is made, the interaction free energy G_eff will no longer be independent
However, deciding on appropriate actions to be performed in response to a detected fault also belongs to the SHM tasks. Depending on the fault type and the assigned fault criticality, these actions may include corrective maintenance, which aims to restore the system to an undamaged state, or emergency maintenance, which aims to prevent failures with catastrophic consequences. Both corrective and emergency maintenance rely on a decision made at the point of fault detection, but not before the fault is detected. A completely different approach is preventive maintenance, where the decision about an action to be carried out is required before the failure happens; it may be based on a pre-established maintenance interval, an elapsed predefined service time, or similar [KHV09]. More advanced approaches that have attracted more attention in recent years concern condition-based and reliability-based maintenance. In condition-based maintenance, the decision to conduct a maintenance action is based on continuous observation/inspection of the system state, and as such is typically system specific. Here, the decision to pursue a maintenance action is conditioned on the fulfillment of predefined condition(s), such as a system variable exceeding a predefined limit, a high vibration index, or a temperature beyond acceptable boundaries [ZTBF11] [GMPP12] [ZDEA15]. Reliability-based maintenance, as opposed to condition-based maintenance, involves observing not only the current system state but also previous system states in order to estimate the probability of failure [LCC+14]. Accordingly, the estimation of system reliability as well as the prediction of the system health state are issues to be solved in order to decide whether a maintenance action should be performed or postponed [FBB12].
System health state prediction and the estimation of the deterioration level of system components require both continuous structural health monitoring and the establishment of lifetime models. Besides the definition of SHM introduced above, there are numerous other definitions in the literature [LLEB09] [SFH+04]. However, most of them are
proaches and are based on neurophysiological and neuroanatomical data. Already one of the earlier contour integration models following this approach suggested a nonlinear coupling of different synaptic inputs, which was termed the 'bipole property' (41). The main idea of this mechanism is that a neuronal column which does not receive direct afferent input from a visual stimulus becomes activated only when simultaneously receiving lateral input from neighboring columns on both sides. This bipole property is also part of a later model developed in Grossberg's lab (106; 42), which focuses more on the laminar organization of the visual cortex and interlaminar cortical circuits, and which also includes area V2. In contrast to this multi-layered network, several other models based on physiological findings and known neuronal components concentrate on contour integration in V1 (77; 137; 131) and show that local processing in V1 is already sufficient to extract contours even without feedback from higher cortical areas. In the models of Li (77) and Yen and Finkel (137), contours are represented by oscillatory neurons which synchronize on fast time scales. The main difference between the two models is that Li assumes inhibition for cells whose receptive fields are displaced orthogonally to their preferred orientation, while Yen and Finkel assume excitation. Ursino and La Cara (131) especially investigated the cooperation between feed-forward and feedback connections in their model and suggest that the interplay of these two connections helps to avoid overly long processing times when extracting contours from noise.