The concept of resilience has been applied across a wide spectrum of disciplines and has become increasingly relevant for interdisciplinary research. This interdisciplinary use can be linked to the fact that the conceptualization of resilience is tied less to a particular discipline than to the concrete way a research question is posed. Against this background, the article builds on both conceptual and empirical arguments. It explores a collaborative interdisciplinary research network of 13 projects with regard to the theoretical models used, applying a quantitative empirical investigation. The results show that the projects of this network differ in the theoretical models they use. Four different theoretical models could be identified with respect to their basic concept (structural vs. process-oriented) and context relation (open vs. closed). Moreover, the construction of a theoretical model is strongly related to the specific way the research question is posed, while remaining rooted in a specific discipline.
The theoretical models suggest that a positive electoral outcome from austerity reforms could be expected if: (1) the voters act according to the subjective/subordinate political model and follow the expected rules, expressing their opinion in elections and tending to support government parties, and the country has only limited experience of democracy; (2) the government succeeds in convincing the electorate that it was not an active decision-maker, or that austerity was not its free political choice among alternatives but the one and only rational option left because of a financial or institutional force majeure; (3) the most active groups of voters in elections are not particularly affected by the austerity measures; (4) the negative image of the crisis is amplified, producing strongly negative public expectations about the crisis solutions; (5) neighbouring countries in a comparable situation have decided to take even more stressful measures; (6) there are no ideologically acceptable alternatives to the government; and (7) individuals are more focused on public welfare than on their own welfare, or the degree of economic voting is low.
In Chapters 4 and 5, we will concern ourselves with the canonical space-time structure. The construction of gauge theories on non-commutative spaces [13, 14] will be reviewed in Chapter 4. These ideas will be crucial in Chapter 5, where we will formulate the Standard Model of particle physics on canonical space-time. Some phenomenological and experimental implications will also be discussed. These results were obtained in collaboration with Xavier Calmet, Branislav Jurčo, Peter Schupp and Julius Wess. A special quantum deformation, κ-deformation [16, 17], will be under scrutiny in Chapter 6. First of all, we will study the algebra of κ-Euclidean space and the κ-deformed rotation algebra very carefully, and construct invariants and wave equations. The most important result is the generalisation of the ideas developed in Chapter 4 to spaces symmetric under a quantum group. We will thus develop a model symmetric under both κ-Poincaré (rotation) symmetry and (arbitrary) gauge symmetry. So far, only scalar field theory has been considered, and no gauge theory has been tackled on κ-deformed spaces. The work is still in progress and is conducted in collaboration with Marija Dimitrijević, Lutz Möller, Efrossini Tschouchnika and Julius Wess.
In conclusion, it must be admitted that these models are rather imprecise, as both have been developed from studies that are controversial. For instance, horizontal structuring can never be complete and include all disciplines, whereas vertical structuring would benefit from measuring the degree of knowledge of the relevant discipline or subject, which is not a realistic approach. A degree of imprecision therefore has to be accepted when applying these concepts. It is, of course, possible that they will become more concrete, and in this way more reliably applicable, as they are developed further. The dichotomy between experts and laymen is a frequently discussed topic; given that research is a global enterprise in which scientists have been working together in inter- and transdisciplinary projects for a long time, it needs to be revised. Such researchers cannot generally be considered laymen. Taking the emic and etic approach (Pike 1982) into account, whether someone can be considered an expert depends on their current position. We shall discuss these issues further in the present paper.
The emergence of a wavelength (or “color”) continuity constraint, which was absent in old circuit-switching theory, motivated some authors to revisit the theory in the 1990s in order to evaluate the resulting performance degradation in terms of blocking probability [1, 2, 3]. This early work succeeded in incorporating the wavelength continuity constraint into the classical blocking models, which assume an equilibrium between the activation and deactivation of connections. This assumption was reasonable in old telephone networks, where voice traffic used to grow only 10% a year while voice connections lasted only a few minutes. Current optical path dynamics are quite different, as Internet traffic may grow 50-100% a year while optical connections may last for months. In other words, traffic intensity may grow considerably during the lifetime of a connection, so the system can hardly approach equilibrium. This issue has motivated some authors to propose new blocking models for non-stationary conditions under exponential traffic growth [4, 5]. Next-generation optical networks, though, will deal with much more dynamic traffic. If the average connection duration is reduced to minutes or a few hours, so that traffic intensity may be regarded as constant over this time, blocking models based on equilibrium will again be useful for network dimensioning and planning; looking for equilibrium conditions therefore remains valid.
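For dimensioning under such quasi-stationary conditions, the classical equilibrium result is the Erlang B formula. The following Python sketch is purely illustrative (not taken from the cited works): it computes the blocking probability of a single link carrying a given number of wavelengths, and it ignores the wavelength continuity constraint discussed above, which would degrade these figures further.

def erlang_b(servers: int, intensity_erlangs: float) -> float:
    # Blocking probability for `servers` circuits (e.g. wavelengths on a link)
    # offered Poisson traffic of `intensity_erlangs`, assuming equilibrium
    # between connection set-up and tear-down (standard Erlang B recursion).
    b = 1.0
    for n in range(1, servers + 1):
        b = intensity_erlangs * b / (n + intensity_erlangs * b)
    return b

# Example: a link with 40 wavelengths offered 30 Erlangs of traffic.
print(f"Blocking probability: {erlang_b(40, 30.0):.4%}")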
hallucinations anymore. Now it is time to go beyond the current dogma and skip the rotten art system, it is the right time to take the next step in the evolution of art and leave all these fucking objects, files, happenings, ideas and concepts behind and form new markets and build theoretical models, diagrammatical universes and new forms of aesthetic abstractions - or unknown levels of concreteness - that fit into the current overall theory…
This carbon isotopic abundance ratio has been used to trace the FDU mixing event. 12C is present in the photospheres of MS stars, but 13C is only present after products of the CN cycle of nuclear reactions reach the surface via convective mixing. The presumed primordial value (i.e. the solar value) is about 90, but a decrease to values of 20-40 is expected due to the FDU. There seems to be agreement between theoretical models and observations on the value of this ratio for intermediate-mass stars (e.g. Boothroyd & Sackmann 1999) but not for low-mass stars (e.g. Charbonnel et al. 1998). Furthermore, values as low as 3-4, which is the nuclear CN-cycle equilibrium value, have also been measured by other groups (e.g. Smith et al. 2002, Pilachowski et al. 2003). Standard models of RGB stars do not predict such low ratios after the FDU but rather values that only go as low as ≈ 20. In order to explain these differences, an “excess” mixing mechanism that would penetrate deeper into the star has been invoked. One theory proposed for this extra mixing at the base of the convective envelope is meridional circulation caused by rotation on the RGB. More recently, Eggleton and collaborators advanced the following explanation: a molecular weight inversion created by the 3He(3He, 2p)4He reaction is responsible for the extra mixing (Eggleton et al. 2006, 2007); they attribute it to the Rayleigh-Taylor instability. Finally, Charbonnel & Zahn (2007) argue that the non-canonical mixing process is instead due to a double diffusive instability called thermohaline convection.
5. If mental states are theoretical entities they must be intersubjective in that there is no difference between third and first person access to them. But doesn’t this mean that if mental states are theoretical entities we lose our grip on the idea that they are private in that each of us has a privileged access to her own? If “privacy” here means “absolute privacy” – i.e. a privileged access that is independent of context – then the intersubjectivity of our mental states does indeed entail that they are not private (107). However, there is a weaker sense of privacy, which is compatible with intersubjectivity; indeed, privacy in this sense presupposes intersubjectivity. Thus, people who have been taught a theory that applies to their behavior “can be trained to give reasonably reliable self-descriptions, using the language of the theory, without having to observe [their own] overt behavior. [This may be brought] about, roughly, by applauding utterances by [the trainee] of [e.g.] 'I am thinking that p' when the behavioral evidence strongly supports the theoretical statement '[The trainee] is thinking that p'; and by frowning on utterances of [e.g.] 'I am thinking that p' when the evidence does not support this theoretical statement” (106-7). But once one has been trained in this way, one has gained a sort of privileged access to one’s mental states. One may then reliably report on one’s mental states without relying on any behavioral evidence, while others cannot do this. “What began as a language with a purely theoretical use has gained a reporting role” (107).
Many of these early feasibility questions have been answered, either experimentally or theoretically. In addition to the theoretical developments discussed in this thesis, there have been considerable experimental efforts to characterise the behaviour of dielectric haloscopes. For example, a device of five disks has already been intensely studied, and there exists a prototype capable of holding and positioning 20 sapphire disks, 20 cm in diameter. Such a device would be capable of searching for hidden photons in novel parameter space, though at the time of writing this thesis the main experimental focus is on understanding dielectric haloscopes themselves. Magnet design studies for a roughly 10 T, 1 m aperture dipole magnet are currently ongoing with two world-leading institutions (CEA Saclay and Bilfinger Noell). This is not to say that all engineering problems have been solved. How to move and support 1 m² dielectric disks inside such a strong magnetic field, how to create such disks in the first place, and the influence of 3D effects are still being studied. Fortunately, the growing size of the collaboration increases our capacity to look into these effects in detail. While there is always the possibility of an engineering or financial showstopper, the MADMAX Collaboration seems likely to realise a full-scale experiment.
another one on ground [191, 192, 193]. With the ACES experiment [22, 91, 92], a test of the equivalence principle with clocks will be conducted and the general relativistic prediction for the redshift will be tested. Furthermore, missions such as RELAGAL [90], in which the clock data of the errant Galileo satellites are analyzed, examine the validity of the gravitational redshift and aim to beat the GPA accuracy; see also Ref. [37]. Another independent mission in this respect is the RadioAstron experiment [121]. In a nutshell, chronometric geodesy in space and on Earth will become one (and probably the most important) cornerstone of high-precision geodetic research in the future. Therefore, a major part of this work is devoted to its theoretical framework, and we shall base our definitions on concepts that are accessible by chronometric measurements.
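For reference, the general relativistic prediction under test is, to first order, the standard gravitational redshift relation between two clocks at rest at different potentials,
\[
  \frac{\Delta \nu}{\nu} \;\approx\; \frac{\Delta U}{c^{2}},
\]
where \(\Delta U\) is the difference in Newtonian gravitational potential between the clock locations and \(c\) the speed of light; the missions listed above aim to verify this relation at ever higher accuracy.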
Having at our disposal a reliable approach for 31P NMR chemical shift prediction of large organophosphorus compounds in solution, it was decided to apply it to the already mentioned Morita-Baylis-Hillman (MBH) reaction. This is one of the most important processes in modern organocatalysis and, despite the many experimental and computational studies performed in this field (a point that will arise again in the following chapters), it still poses a number of mechanistic questions. Obviously, the detection of MBH reaction intermediates would be the best way to clarify the mechanism. Since phosphanes are very popular catalysts for MBH reactions, 31P NMR spectroscopy could serve as a suitable analytical method, and theoretical support could be helpful for assigning the measured chemical shifts. Recently, a series of 31P NMR experiments to monitor MBH reaction intermediates was carried out by Dr. Yinghao Liu. Three types of signals were found for the mixture of PPh3 (catalyst) with methyl vinyl ketone (MVK) dissolved in CDCl3:
NMR chemical shifts were calculated with the NMR program module of ADF 2013.01.3,4 The contributions of relativistic spin-orbit effects to the nuclear magnetic shielding constants were included with the two-component zeroth-order regular approximation (ZORA)5,6 formalism, as implemented in ADF. All applied Slater-type basis sets, optimized for relativistic ZORA calculations, were taken from the internal basis set library of ADF 2013.01.7 A series of representative hybrid and GGA exchange-correlation functionals in common use was included in our study: the hybrid functionals mPW1K,8 B1PW91,9 PBE0,10-12 and B3LYP,13 as well as the GGA models PBE,14 OPBE15 and OP86.16,17 29Si NMR chemical shifts are reported relative to tetramethylsilane (δ 29Si(TMS) = 0). Solvent effects were taken into account with the conductor-like screening model (COSMO)18 as implemented in ADF 2013.01, with the parameters epsilon = 4.8 and radius = 3.17 (both for the solvent CHCl3), cav0 = 1.321,19 and cav1 = 0.0067639.19
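As a small illustration of how the reported shifts follow from the computed shieldings (a generic sketch, not part of the ADF workflow itself; the numbers are placeholders rather than values from this work):

def chemical_shift_ppm(sigma_sample: float, sigma_reference: float) -> float:
    # First-order relation between isotropic shielding constants (ppm) and the
    # chemical shift relative to the reference: delta = sigma_ref - sigma_sample.
    return sigma_reference - sigma_sample

sigma_tms = 340.0      # hypothetical 29Si shielding of TMS at some level of theory
sigma_compound = 355.0 # hypothetical shielding of the compound of interest
print(chemical_shift_ppm(sigma_compound, sigma_tms))   # -> -15.0 ppm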
The MAD value can be lowered further by using the larger G3large basis set (instead of 6-31+G(2d)), giving MAD = 3.3 kJ/mol at the B2-PLYP-M2/G3large//mPW1K/6-31+G(d) level. Rescaling the PT2 correlation energies is, in principle, rather similar to the strategy pursued in "scaling all correlation" (SAC) methods such as SAC/3 or PCI-X, in which it is assumed that correlation-energy calculations such as MP2 recover only a limited fraction of the overall correlation energy.113,114 The best results are indeed obtained here for combinations with b + c > 1, which effectively corresponds to scaling up the correlation energies in absolute terms. This can also be verified by combining the optimized c = 0.43 parameter with b = 0.57. This B2-PLYP-M3 variant gives significantly inferior results compared to B2-PLYP-M2. The similarity of the results obtained with B2-PLYP-M2 and B2K-PLYP, and the inferior results obtained with B2-PLYP-M3, also illustrate that rescaling the correlation energies (as in B2-PLYP-M2) and rebalancing the mixture of exchange energies (as in B2K-PLYP) have rather similar consequences for the dataset chosen here. Given the limited size of this data set, this conclusion cannot yet be drawn in general terms, but it certainly suggests that departing from the b + c = 1 recipe pursued in developing the B2-PLYP and B2K-PLYP models offers one more opportunity for optimizing the performance of double-hybrid functionals.
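To make the role of the two parameters explicit, a toy sketch of the correlation-energy combination is given below, assuming the usual double-hybrid partitioning in which b weights the (semi-)local DFT correlation and c the PT2 term; the energies are made up for illustration, and the b + c = 1 pair shown is only indicative of the original constrained recipe.

def double_hybrid_correlation(e_c_dft: float, e_c_pt2: float, b: float, c: float) -> float:
    # Correlation energy of a double hybrid, E_c = b * E_c[DFT] + c * E_c[PT2].
    # b + c = 1 corresponds to the original constrained recipe; b + c > 1
    # effectively scales up the correlation energy in absolute terms.
    return b * e_c_dft + c * e_c_pt2

e_gga, e_pt2 = -1.20, -0.90   # illustrative values in Hartree
print(double_hybrid_correlation(e_gga, e_pt2, b=0.73, c=0.27))  # constrained, b + c = 1
print(double_hybrid_correlation(e_gga, e_pt2, b=0.57, c=0.43))  # B2-PLYP-M3-like combination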
Recently, Constant, Gataullina, and Zimmermann (2009) established a new method to measure ethnic identity, which they called the “ethnosizer”. Using information on an individual’s language, culture, social interactions, history of migration, and ethnic self-identification, the method classifies that individual into one of four states: assimilation, integration, separation, or marginalization. A large body of literature has emerged examining the effects of immigrants’ characteristics (age, gender, education, religion, etc.) on their ethnic identity using the ethnosizer. This note presents a basic theoretical framework to shed light on the vast collection of empirical results obtained on this topic.
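To make the four states concrete, the following toy sketch illustrates the two-dimensional logic behind such a classification; the scores and the cut-off are hypothetical, whereas the actual ethnosizer is constructed from the survey items listed above.

def ethnosizer_state(origin_commitment: float, host_commitment: float, cutoff: float = 0.5) -> str:
    # Classify an individual by whether commitment to the culture of origin and
    # to the host-country culture each lie above or below a cut-off (toy version).
    strong_origin = origin_commitment >= cutoff
    strong_host = host_commitment >= cutoff
    if strong_origin and strong_host:
        return "integration"
    if strong_host:
        return "assimilation"
    if strong_origin:
        return "separation"
    return "marginalization"

print(ethnosizer_state(0.8, 0.7))   # -> "integration"
print(ethnosizer_state(0.2, 0.7))   # -> "assimilation"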
describes how this influences the precision of the model and its predictive power. The algorithm is stochastically exact in the sense that it accounts for the random nature of the sequence of reaction events as well as the random nature of their occurrence in time [52]. Furthermore, it performs Monte Carlo simulations in a rejection-free manner, since every step produces a system change with a defined rate. As such it provides the connection between artificial discrete time steps and the natural continuous time scale, without making any assumptions about possible reactions during the execution. The original algorithm of Gillespie has been enhanced for performance, see Section 2.2.
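A minimal sketch of Gillespie's direct method is given below (the function names and the toy reaction are illustrative; the performance-enhanced variants referred to in Section 2.2 are not shown). Each iteration draws an exponentially distributed waiting time from the total propensity and then selects exactly one reaction to fire, which is what makes the scheme rejection-free.

import random

def gillespie_direct(propensities, updates, state, t_end, rng=random.Random(0)):
    # Direct-method SSA: exponential waiting times from the total rate a0,
    # and one reaction chosen with probability proportional to its propensity,
    # so every step yields exactly one state change (rejection-free).
    t, trajectory = 0.0, [(0.0, dict(state))]
    while t < t_end:
        a = [f(state) for f in propensities]
        a0 = sum(a)
        if a0 == 0.0:                 # no reaction can fire any more
            break
        t += rng.expovariate(a0)      # continuous, stochastically exact time step
        if t > t_end:                 # stop without firing past the time horizon
            break
        r, acc = rng.uniform(0.0, a0), 0.0
        for j, aj in enumerate(a):    # pick reaction j with probability a_j / a0
            acc += aj
            if r <= acc:
                updates[j](state)
                break
        trajectory.append((t, dict(state)))
    return trajectory

# Toy example with a made-up rate constant: irreversible dimerisation 2A -> B.
state = {"A": 100, "B": 0}
props = [lambda s: 0.01 * s["A"] * (s["A"] - 1) / 2]
def dimerise(s):
    s["A"] -= 2
    s["B"] += 1
print(gillespie_direct(props, [dimerise], state, t_end=10.0)[-1])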
the genetic algorithm method, which is used to obtain the globally optimized structures for our systems, in detail afterwards. In Chapter 3 we have discussed the geometric structures obtained from the global optimization of semiconductor clusters, including both elemental and binary clusters of silicon and germanium. In Chapter 4 we have presented various tools for analysing our semiconductor cluster structures, which include similarity functions, common-neighbour analysis, radial distribution functions, stability functions, HOMO-LUMO gap analysis and shape analysis. All these tools enable us to study the structural and electronic properties of the investigated clusters in detail. In Chapter 5 we have presented the results of theoretical studies of Cu_n clusters, and their structural and electronic properties are discussed.
The estimated maximum uncertainty in the position of the defect state within the bandgap induced by the slab model is 0.2 eV, which still suffices to categorize the different (001) diamond surfaces with respect to the charge control of NV defects proximate to the surface. In this study, the (001) diamond surface is investigated with the aim of finding the best termination for nanoscale sensing with an NV center close to the surface. My typified surface models are “ideal” in the sense that they are atomically smooth and do not contain any trivial defects such as dangling bonds. This approach aims to focus on the effect of the various terminators. The (001) diamond surface reconstructs to (2×1) under ambient conditions. This leads to long carbon-carbon bonds (C-C bridges) at the surface, reducing the number of dangling bonds per surface C atom from two to one. With the remaining dangling bonds saturated by hydrogen atoms, one obtains the (2×1):H diamond surface (see Fig. 5.2a). After oxidation, hydroxyl (-OH) groups may replace the H-terminators; this is a typical termination of nanodiamonds in a biological environment. These -OH groups are placed relative to each other in the energetically most favorable configuration (see Fig. 5.2b). A planar array of “closely packed” ether-like groups (C-O-C bridges, with oxygen inserted between any two carbon atoms on the surface after removing the H or OH terminators) is also considered as a simplified model to study the role of C-O-C bridges (see Fig. 5.2c) in making a PEA diamond surface [209, 216]. The (2×1):F (001) diamond surface (see Fig. 5.2d), which is very similar to the (2×1):H surface except that the hydrogen atoms are replaced by strongly electronegative fluorine atoms, is also studied. We also consider a partially oxidized model surface, allowing for alternating termination with -H and -OH groups as well as ether-like C-O-C bridges; I call this surface type the H/O/OH termination (see Fig. 5.2e). I investigate both NV(0) and NV(-) defects near these diamond surfaces.