Zugspitze (Schneefernerhaus) is part of the global WMO-GAW station Zugspitze/Hohenpeissenberg and is jointly operated by the German Federal Environment Agency (UBA) and the German Meteorological Service (DWD). The observatory is located at 2670 m a.s.l., about 300 m below the Zugspitze summit and on the southern slope of the mountain massif. The high altitude leads to a significant annual cycle in the observed aerosol particle number and mass concentrations, caused by different boundary layer heights in summer and winter (Birmili et al., 2009b). The station's elevated position allows us to sample air masses that have had only little contact with the local boundary layer, especially in the cold season from October to March. Zugspitze occasionally receives lofted aerosol layers from remote source regions, such as North America (Birmili et al., 2010b) or the Eyjafjallajökull volcanic eruption (Schäfer et al., 2011). Besides the GUAN measurements, the Karlsruhe Institute of Technology conducts long-term measurements of the total particle number concentration (condensation particle counter, CPC), the particle number size distribution (optical particle counter, OPC; aerodynamic particle sizer, APS), and primary biological aerosol particles. Particle number size distribution measurements started in 2004 and eBC measurements in 2008.
As a consequence of the studies concerning particle emissions, several governments decided to limit particle number emissions for combustion engines. In Europe, the JRC guided a research group to define both the measurement technique and the emission limits for passenger cars in the European Union for the exhaust gas legislation Euro 6. It was shown in this inter-laboratory exercise that the particulate mass measurement had an unsatisfactory repeatability (≈ 55 %). Furthermore, even using high-efficiency filters in the dilution tunnel, the background levels were found to be similar to the emissions of the vehicles (≈ 0.4 mg/km). Most of the mass collected on the filters was volatile, and less than 10 % was soot. A new method measuring particle number was therefore approved, focusing on non-volatile particles using a volatile particle remover (VPR) 7 and a CPC with 50 % cut-off efficiency at 23 nm (90 % at 40 nm). This new method showed good intra-lab and inter-lab reproducibility of ≈ 40 % and ≈ 25 %, respectively. These values are similar to those for other gaseous emissions, such as carbon monoxide and hydrocarbons. Even though no difference in the repeatability variabilities was found when comparing the reference particle measurement system and the labs' own systems, a 15 % difference in the mean number emission levels of the vehicle was found. The authors state that the calibration procedure for the particle number measurement systems should be better defined in order to ensure the measurement of the absolute value of the vehicle emissions. Based on these findings, the Economic Commission for Europe of the United Nations (UNECE) published the Uniform pro-
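The quoted CPC cut-off specification (50 % efficiency at 23 nm, 90 % at 40 nm) can be illustrated with a simple logistic counting-efficiency curve pinned to those two points. The functional form is purely an assumption for illustration, not the calibration procedure defined in the legislation:

```python
import math

def counting_efficiency(d_nm, d50=23.0, d90=40.0):
    """Illustrative logistic CPC counting efficiency.

    Pinned to 50 % efficiency at d50 and 90 % at d90, matching the
    quoted 23 nm / 40 nm specification. A sketch only, not the
    regulatory definition of the cut-off curve.
    """
    s = (d90 - d50) / math.log(9.0)   # scale chosen so that eta(d90) = 0.9
    return 1.0 / (1.0 + math.exp(-(d_nm - d50) / s))
```

By construction, `counting_efficiency(23.0)` returns 0.5 and `counting_efficiency(40.0)` returns 0.9; any real instrument's curve would have to be measured, which is exactly the calibration issue the authors raise.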
A key phase of the EUCAARI Intensive Observation Period (IOP) was the Long Range Experiment (LONGREX), during which in-situ and remote sensing aerosol measurements were performed by the DLR Falcon 20 research aircraft, operating between 6 and 24 May 2008. Particle number concentrations with diameter (D_p) > 4 nm (N_4) and > 10 nm (N_10) were measured onboard the Falcon aircraft using two condensation particle counters (CPC, TSI models 3760A and 3010). The number concentration of non-volatile particles (D_p > 14 nm) was measured using an additional CPC with a thermodenuder inlet set to a temperature of 250 °C (Burtscher et al., 2001). The total particle and non-volatile residual size distributions were measured in the dry size range D_p ≈ 0.16–6 µm using a Passive Cavity Aerosol Spectrometer Probe-100X (PCASP; e.g. Liu et al., 1992) and a Grimm Optical Particle Counter (OPC), respectively. CPC and PCASP measurements were used to calculate particle number concentrations in three size ranges, 4–10 nm, 10–160 nm and 160–1040 nm, that are roughly representative of the nucleation, Aitken and accumulation mode size classes, respectively. Measurements from 15 flights have been used in this study; the tracks of these flights are shown in Fig. 1 (flight sections where the altitude of the aircraft was at or below 2000 m a.s.l. are shown in bold).
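The mode partition described above amounts to differencing the cumulative CPC counts against the optically sized concentration. A minimal sketch, assuming the 160–1040 nm concentration is obtained by integrating the PCASP channels (all function and variable names are illustrative, not from the study):

```python
def mode_concentrations(n4, n10, n_accum):
    """Split cumulative counts into rough mode number concentrations.

    n4, n10 -- cumulative concentrations for D_p > 4 nm and > 10 nm (CPC)
    n_accum -- concentration in 160-1040 nm (integrated PCASP channels)
    Assumes n_accum approximates N(D_p > 160 nm), i.e. negligible
    concentration above 1040 nm.
    """
    nucleation = n4 - n10      # 4-10 nm
    aitken = n10 - n_accum     # 10-160 nm
    accumulation = n_accum     # 160-1040 nm
    return nucleation, aitken, accumulation
```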
Since the parallel TASEP is equivalent to the Nagel–Schreckenberg model with maximal velocity 1, the results also apply to a bottleneck situation in traffic flow. The average density is 1/2; however, the density fluctuates around this value. With the help of the generating function obtained in this paper one finds the probability that the road section contains any given density of cars. Further investigations shall consider the fluctuations of the particle number as in , using the findings of the present paper. This is planned to be done in the near future. Additionally, it would be interesting to compare the joint current-density distribution with the one for the random-sequential update . To finally generalize the present results to arbitrary α and β one proceeds as in : Denote by H_{N,M} the sum of all possible products of N
In addition to the statistical fluctuations, the complicated dynamics of A+A collisions generates dynamical fluctuations. The fluctuations in the initial energy deposited inelastically in the statistical system yield dynamical fluctuations of all macroscopic parameters, like the total entropy or strangeness content. The observable consequences of the initial energy density fluctuations are sensitive to the equation of state of the matter, and can therefore be useful as signals for phase transitions . Even when the data are obtained with a centrality trigger, the number of nucleons participating in inelastic collisions still fluctuates considerably. In the language of statistical mechanics, these fluctuations in participant nucleon number correspond to volume fluctuations. Secondary particle multiplicities scale linearly with the volume; hence, volume fluctuations translate directly to particle number fluctuations.
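The last step can be made quantitative: if the multiplicity scales linearly with the fluctuating volume, N = ρV with a fixed density ρ, the volume variance adds directly to the statistical (fixed-volume) variance of the particle number. A minimal sketch of this standard decomposition (the notation is assumed, not taken from the source):

```latex
\langle (\Delta N)^2 \rangle
  = \underbrace{\langle (\Delta N)^2 \rangle_{\mathrm{stat}}}_{\text{fixed volume}}
  + \rho^2 \, \langle (\Delta V)^2 \rangle ,
\qquad
\frac{\langle (\Delta N)^2 \rangle}{\langle N \rangle^2}
  \supset \frac{\langle (\Delta V)^2 \rangle}{\langle V \rangle^2} .
```

Thus even a perfect centrality trigger leaves a residual relative variance set by the participant (volume) fluctuations.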
One common measure to reduce these errors is to sub-cycle the dynamical time step Δt and treat nucleation with a smaller time step Δt_NUC = Δt / l_NUC, l_NUC being the integer number of sub-cycles. The stronger the updraught is, the smaller the Δt_NUC that should be chosen. Kärcher (2003) recommends Δt_NUC := 5 cm / w_syn as an empirical rule. Unlike for bulk or spectral bin microphysical models, a further threshold parameter n_min is introduced in our Lagrangian model to keep the number of SIPs treatable. If the concentration of newly formed ice crystals is below n_min, the formation event is ignored and it may or may not happen in the subsequent time step. The total numbers of SIPs and of real ice crystals depend on n_min. Thus statistical convergence depends on the choice of n_min, and values around 10–100 m⁻³ were found to be reasonable for a specific case (Sölch and Kärcher, 2010).
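The sub-cycling logic described above can be sketched as follows. The function names and the interface of the nucleation routine are assumptions for illustration, not the authors' implementation:

```python
import math

def subcycled_nucleation(dt, w_syn, n_min, nucleate):
    """Sub-cycle ice nucleation within one dynamical time step.

    dt       -- dynamical time step [s]
    w_syn    -- synoptic updraught speed [m/s]
    n_min    -- threshold concentration below which a formation event
                is ignored (keeps the number of SIPs treatable)
    nucleate -- callable returning the concentration of newly formed
                ice crystals for one sub-step (hypothetical interface)
    """
    # Empirical rule (Kärcher, 2003): dt_NUC := 5 cm / w_syn,
    # i.e. the stronger the updraught, the smaller the sub-step.
    dt_nuc_target = 0.05 / w_syn
    l_nuc = max(1, math.ceil(dt / dt_nuc_target))  # integer number of sub-cycles
    dt_nuc = dt / l_nuc
    new_sips = []
    for _ in range(l_nuc):
        n_new = nucleate(dt_nuc)
        # Formation events below n_min are ignored; they may or may
        # not occur again in a subsequent step.
        if n_new >= n_min:
            new_sips.append(n_new)
    return new_sips
```

Raising `n_min` suppresses weak formation events, which is why statistical convergence depends on its choice.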
investigation after charged-particle exposure, in tumor cells as well as healthy cells. There is also a huge knowledge gap between these signaling events occurring within the first hours up to 1 day after exposure and the sustained signaling changes observed in irradiated animals weeks or months later, when complex damage should be repaired. The underlying mechanisms for those long-term changes are still under investigation; mitochondrial damage and sustained ROS production, for example, have been suggested. With hypofractionation (1-3 fractions with a very high dose [up to 25-30 Gy]), more or other pathways might be modulated compared with lower doses per fraction. Therefore, molecular profiling studies with relevant tumors in the high dose range are required to personalize radiotherapy in cases of therapy resistance.
of the static correlation of Sanjeevi and Padding , particle configurations with an even distribution of attack angles were used. In a dynamic system, however, the particle orientations can generally not be assumed to be evenly distributed. As the Reynolds number increases, particles are increasingly oriented perpendicular to the mean flow velocity. This observation is consistent with the results of the unresolved DEM-CFD fluidization simulations of Mema et al. , who found that spherocylinders are preferably aligned horizontally in the upper region of the vertical fluidization column, where particles are more mobile and no longer densely packed. This flow situation is similar to the flow situation in the simulations of the present study. Since these DEM-CFD simulations also considered gas-solid fluidization, the Reynolds numbers can be expected to lie outside the Stokes regime and in the range of Re = 1000, where an orientation perpendicular to the mean flow direction is also obtained. The frequent occurrence of particles with attack angles between 60° and 90° supports the idea that lift forces are not strongly correlated with the angle of attack (relative to the mean flow velocity) in dynamic gas-solid flows,
solutions, but once a certain threshold is exceeded the accuracy of the simulation significantly drops due to the numerical breakup of the fluid structure. Therefore, we explore the origin of the instability, which is shown to depend on and increase with the Reynolds number. The novelty in comparison with previous work (Sigalotti et al. 2003; Basa et al. 2009) lies in a more detailed discussion which, for example, first demonstrates the analogy of the instability for the Couette flow example and also explores the difficulties that arise in the presence of a low physical viscosity. Moreover, a novel mode of instability, which occurs in pipe flow with expanding diameter, is presented and a qualitative explanation for its initiation is given. We further address the implications of the observed stability problems for open channel flow applications. In particular, it is shown that for low-viscosity free surface flows with an initially uniform particle distribution the simulated velocity profile deviates shortly after imposing the analytical solution. The second section of the paper is then devoted to demonstrating two examples which are not affected by the mode of instability: it is shown that for flow over a sill and a sharp-crested weir the energy balance based on Bernoulli's principle is accounted for correctly even at high Reynolds numbers. In the final section the relevance of the instability is addressed from a theoretical, computational and practical point of view.
Participants were instructed to press the upper response button on a button box when the upper number was larger and the lower response button when the lower number was larger. In half of the items the upper number was larger; in the other half, the lower number was larger. Among the items presented to the left and right hemifield respectively, 20 comprised so-called within-decade (WD) items. In WD items both numbers contain the same decade digit (e.g. 64 < 68), i.e. those items can only be solved by focusing on the unit digits. These items were included as filler items to avoid participants forming a strategy of focusing only on decade digits. Among the remaining 80 items presented to each hemifield, 40 were compatible (higher number has higher unit digit, e.g. 78 vs. 21) and 40 were incompatible (higher number has lower unit digit, e.g. 71 vs. 28). All items consisted of 4 different digits and the numbers involved ranged from 21 to 98. Problem size, decade distance, unit distance and parity were matched between stimulus categories (for details, see Harris et al., 2018). At the beginning of each trial, a fixation control was implemented to ensure that participants fixated the center of the screen. Once fixation was confirmed, number comparison trials were presented either to the left or right hemifield for 200 ms. The short presentation time ensures processing in the respective hemispheres, as participants do not have the possibility to shift their gaze and look directly at the numbers. Participants had three seconds to decide which number was larger before the next trial started. Fixations and eye movements were recorded with an EyeLink 1000 eye tracker (SR Research, Ontario, Canada). The eye tracker was calibrated and validated by a 5-point calibration routine. The criterion for successful calibration was an average error of < 0.5° (M = 0.36°, SD = 0.08). Reaction time and accuracy were recorded for each item. 2.4. Hormone analysis
• Either the electron (or the photon…) is a "particle". But what is a particle? A small body, a small individual whose location is described by the coordinates of a single point. This is not simply the position of the center of mass; it is the "position" of the whole particle. So there is no other choice than to imagine the particle as point-like.
Thirty years after the CP 1 – 3 was discovered, researchers at McGill University made a breakthrough in the analysis of molten metal quality by applying the ESZ approach to such an aggressive fluid. By applying this technique via what is now known as a liquid metal cleanliness analyzer (LiMCA), 4 the number of pulses can be related to the passage of insulating particles, although the amplitude of the measured voltage is millions of times smaller than that delivered by saline aqueous solutions with the same electric current. Using high amperages and high amplifications, micro-voltages were successfully converted into clear signals above the background noise level. Since then, LiMCA has been successfully used at moderate temperatures to monitor the quality of molten metals such as gallium, 5 lead solders, magnesium, 6 – 8 zinc, and aluminum. 9 – 11 However, the application of LiMCA to the detection of inclusions in melts such as liquid steel remains challenging due to the high temperatures that are involved (1500–1700 °C) and to material problems, such as
Clearly, it cannot in general be stated which model is best. It depends on the application and on the level of accuracy that has to be accomplished. For processes with fixed bed height and small particles (< 1 mm), such as heterogeneous catalysis, the quasi-continuous approach will most likely remain the predominant model of choice. However, processes where the particle size changes during operation (e.g. combustion, gasification, and pyrolysis of biomass or waste) are probably better represented by the RPM and DPM. It may be allowed to generalize that the larger the particles, the more appealing the RPM and DPM, respectively, and the less recommendable the quasi-continuous approach.
The overlay of holographic frames with alternating signs influences the visibility of closer grains. The particles outside along the trajectory appear brighter because of the superposition with only one "self-originated" neighbour. This destructive interference results in a gap between these two unique particle images (Fig. 4, left picture, red marks).
In this study, the replacement number as it is implicitly defined in the framework of the CEM is derived and then compared with the phenomenological approach that was applied by the Robert Koch Institute (RKI). Section 2 presents a brief history of the official R-number, the methodological basis of which has been revised several times, but without changing the principles of calculating the reproduction number. Section 3 introduces some basic concepts and mathematical approaches of quantitative epidemiology. Section 4 recapitulates the core equations of the CEM and explains their logical foundations. Section 5 deals with the definition of the replacement number and discusses its operationalization when applied to real data. In addition, it derives some equations that can be used to statistically determine, for example, the average duration of infectivity. Section 6 confronts the correct replacement number, theoretically and empirically, with the R-number published by the RKI. Section 7 draws some conclusions.
The test particle approach is surely not as accurate as would be preferable, but it is reasonable to assume that the feedback of the non-thermal synchrotron particles on the magnetic field is negligible, as the magnetic and dynamic evolution of PWNe is dominated by the thermal plasma components. Several other simulations have shown that a thermal lepton and ion component is naturally produced by the termination shock (Gallant et al., 1992; Hoshino et al., 1992; Arons & Tavani, 1994). A self-consistent approach, as in particle-in-cell simulations, would be much more desirable; however, this imposes a great constraint on the size of the computational domain when performed in full 3D. The high Lorentz factors demand the use of very small simulation cells in order to cover the high-energy radiation accurately, which enormously increases the computational effort. Hence, currently available computational power offers no way to simulate the presented turbulent problem in a fully self-consistent manner. This makes our hybrid approach a valuable tool to analyse large-scale structures.
The second model considered was the modified shallow water model. It is based on the shallow water equations, to which a small stochastic noise is added in order to trigger convection, together with an equation for the evolution of rain. It contains spatial correlations and is represented by three dynamical variables, wind speed, water height and rain concentration, which are linked together. A reduction of the observation coverage and of the number of updated variables leads to a relative error reduction of the particle filter compared to an ensemble of particles that are only continuously pulled (nudged) toward the observations, for a certain range of nudging parameters. Not surprisingly, however, reducing data coverage increases the absolute error of the filter. We found that the standard deviation of the error density exponents is the quantity responsible for the relative success of the filter with respect to nudging-only. In the case where only one variable is assimilated, we formulated a criterion that determines whether the particle filter outperforms the nudged ensemble. A theoretical estimate is derived for this criterion. The theoretical values of this estimate, which depends on the parameters involved in the assimilation setup (nudging intensity, model and observation error covariances, grid size, ensemble size, ...), are roughly in accordance with the numerical results. In addition, comparing two different nudging matrices that regulate the magnitude of relaxation of the state vectors towards the observations showed that a diagonally based nudging matrix leads to smaller errors, in the case of assimilating three variables, than a nudging matrix based on stochastic errors added in each integration time step.
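The "nudging-only" ensemble referred to above continuously relaxes each member toward the observations. One explicit relaxation step might be sketched as follows; the matrix shapes and names are assumptions for illustration, not the study's actual configuration:

```python
import numpy as np

def nudge_step(x, y_obs, H, K, dt):
    """One explicit nudging step for a model state vector.

    x     -- model state, shape (n,)
    y_obs -- observations, shape (m,)
    H     -- observation operator, shape (m, n)
    K     -- nudging matrix, shape (n, m), regulating the magnitude
             of relaxation toward the observations
    dt    -- integration time step
    """
    innovation = y_obs - H @ x          # observation-minus-model mismatch
    return x + dt * (K @ innovation)    # pull the state toward the observations
```

A diagonal choice of `K` corresponds to the diagonally based nudging matrix that, for three assimilated variables, was found to give smaller errors than a stochastic-error-based one.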
The first project of my thesis deals with a stochastic many-particle model with which we explained a generic class of condensation phenomena that arise in both classical and quantum systems. Interestingly, this condensation does not proceed into a single state but into multiple states in a single realization of the stochastic process. Only recently has it been shown that this stochastic many-particle process governs the coarse-grained dynamics of a periodically driven and dissipative system of non-interacting bosons. This process can be understood as a generalization of Bose-Einstein condensation to nonequilibrium. We showed that the condensation of bosons in such a driven-dissipative, nonequilibrium system corresponds to the selection of strategies in evolutionary game theory. By combining analytical concepts from the theory of stochastic processes, nonlinear dynamics, and linear programming theory applied to antisymmetric matrices, we explained how multiple condensates form in the stochastic process. Furthermore, we proposed the possibility of condensates with oscillating occupations. We found that the condensation dynamics follow a simple physical guiding principle: the vanishing of relative entropy production guides the selection of condensates. How this condensation of bosons in nonequilibrium and the design of oscillating condensates may be realized with an open quantum system is an interesting challenge for future experiments.