the facade of building part 1. Likewise, some of the localized PSs are found beneath building part 1 (right of Figure 8). However, the points are fewer in number than in the simulated data and are irregularly distributed. A major factor influencing the deviation from the vertical plane is the limited accuracy of the localization in elevation due to the narrow orbital tube of TerraSAR-X. Moreover, the simulation is idealized, as the ground level is represented by a flat plane in the 3D model. In reality, the ground in front of the facade varies slightly in height and is partly covered with objects. Hence, the majority of the simulated fivefold-bounce signal is unlikely to occur. Nonetheless, regular patterns of PSs below the ground level are found for other isolated buildings in the center of Berlin. Hence, the simulation case study gives a strong hint that specular fivefold-bounce signatures also correspond to PSs.
With the increasing use of civil helicopters, the problem of noise emission has become increasingly important within the last decades. Blade vortex interactions (BVI) have been identified as a major source of impulsive noise. As BVI noise is governed by the induced velocities of tip vortices, it depends on vortex strength and miss-distance, which itself depends on vortex location, orientation, and convection speed relative to the path of the advancing blade. Blade vortex interaction can occur at different locations inside the rotor plane depending on flight velocity and orientation of the blade tip path plane. Numerical simulations of the acoustic near and far field (e.g. Ehrenfried et al. 1991, Burley et al. 1991) are based on the interaction of the blades with vortices described by mathematical models. Due to the complexity of the flow field under investigation, more sophisticated aerodynamic simulations, such as RANS computations, cannot be included. Therefore, aero-acoustic rotor simulations depend on experimentally obtained flow field data in order to derive appropriate vortex models and to validate the simulation of the vortex flight path and aging. Since the vortex structure depends considerably on the Reynolds number (see e.g. Splettstößer et al. 1984, Caradonna et al. 1988, McCroskey 1995, Leishman and Bagai 1996), remarkable efforts are taken to obtain flow field data at large scales (e.g. Splettstößer et al. 1995, Raffel et al. 1998). It is understood that the study of these phenomena is of particular interest for progress towards quieter helicopters. The measurement technique described here has been applied during the international HART II project undertaken by NASA, US Army, ONERA, DNW, and DLR (Yu et al. 2002).
Simulation of artificial reflectivity maps supports the interpretation of high resolution SAR images in the azimuth-range plane. Nonetheless, simulation in 3D is mandatory in order to localize and identify salient point scatterers which, for instance, are analyzed for detecting deformation signals by Persistent Scatterer Interferometry. This paper presents first results of a 3D SAR simulation approach based on ray tracing methods. Within an urban scene, two different kinds of trihedrals are detected and analyzed in order to identify their backscattering surfaces and the corresponding focused intensity contributions. Besides corner reflectors related to real building corners, the appearance of artificial corners inside buildings is confirmed by simulation methods.
The bulk of the NEG models are based on general equilibrium approaches. Their subject is the description of economic states where markets clear, no changes in the variables involved occur, and perfect predictions concerning the future are made. Situations outside of these perfect world states are not only unaddressed, but cannot even be characterized using the given equations. Fowler tackles this issue by converting the classical core-periphery (CP) model to an agent-based framework. This enables him to analyze the dynamic behavior of the model outside of equilibrium. That the construction of an agent-based model of this kind, which is very close to the original model, is not an easy task is shown by the fact that the agent-based model has some severe weaknesses, including the independence of workers' location and firms' labor demand. This is why Fowler improved his work with an enhanced version of the agent-based framework. Some other attempts, such as Baldwin, have been made to at least define the NEG models outside of equilibrium, most of them building on the core-periphery model. Fujita and Mori have pointed out that, because there has been great progress in computational power and in the relevant software instruments, the obvious next step is numerically computable NEG models. Indeed, there has been a persistent trade-off between the realism and tractability of NEG models and, while simplified, analytically solvable scenarios will not lose their importance, a numerical analysis can open the door for understanding more about distance between economic regions.
based on the temporal stability of their phase and/or amplitude scattering patterns and relies on the availability of large time series of acquisitions. Their introduction in SAR interferometry enabled the development of millimetric terrain motion monitoring, by measuring line-of-sight displacement, as well as the possibility of accurate DEM refinement. More recently, another class of point-like scatterers has been introduced, namely the Coherent Scatterers (CSs). Their detection is based on their spectral correlation properties and has been addressed in terms of image sub-look (i.e. spectral) correlation. Thus, it can be performed on the basis of single SAR images as long as the system bandwidth is large enough. The deterministic nature of the CSs is linked to their spectral correlation. Indeed, high values of sub-look coherence are related to CSs that represent more closely point-like scatterers, while low values of sub-look coherence indicate CSs with a (partially) developed speckle pattern and hence a stochastic scattering behavior. CSs have been explored mainly in terms of their polarimetric properties, which have been used to characterize the individual scatterers and to extract information about dielectric properties and physical orientation.
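As a rough illustration of the sub-look correlation idea, the following Python sketch (not from the cited works; the SLC array, the half-spectrum split, and the window size are assumptions) forms two sub-looks from the halves of the spectrum of a single complex image and estimates their local coherence:

```python
import numpy as np
from scipy.ndimage import uniform_filter

def sublook_coherence(slc, axis=1, win=9):
    """Estimate sub-look coherence of a complex SLC image (sketch)."""
    n = slc.shape[axis]
    spec = np.fft.fft(slc, axis=axis)
    # Split the spectrum along `axis` into two non-overlapping halves.
    idx_lo = [slice(None)] * slc.ndim
    idx_hi = [slice(None)] * slc.ndim
    idx_lo[axis] = slice(0, n // 2)
    idx_hi[axis] = slice(n // 2, n)
    lo = np.zeros_like(spec)
    hi = np.zeros_like(spec)
    lo[tuple(idx_lo)] = spec[tuple(idx_lo)]
    hi[tuple(idx_hi)] = spec[tuple(idx_hi)]
    s1 = np.fft.ifft(lo, axis=axis)
    s2 = np.fft.ifft(hi, axis=axis)

    # Local (boxcar) estimate of the complex correlation magnitude.
    cross = s1 * np.conj(s2)
    num = uniform_filter(cross.real, win) + 1j * uniform_filter(cross.imag, win)
    den = np.sqrt(uniform_filter(np.abs(s1) ** 2, win) *
                  uniform_filter(np.abs(s2) ** 2, win))
    return np.abs(num) / np.maximum(den, 1e-12)
```

Pixels whose sub-look coherence approaches one behave as deterministic point-like scatterers; a threshold on this map (e.g. 0.8, an illustrative value) yields CS candidates.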
Current prediction techniques use branch execution history collected during the runtime of a program. Such dynamic branch prediction techniques started with simple one-bit predictors that use a single bit to determine the speculation direction. Speculation is always determined by the outcome of the last execution of the branch. Two-bit predictors distinguish four states stored in two bits: predict strongly taken (11), weakly taken (10), weakly not taken (01), and strongly not taken (00). If the prediction is in a strong state, then the prediction must miss two times to reverse speculation to the opposite direction. A single deviation from the habitual execution pattern of the branch should not reverse the prediction. The saturating counter scheme simply decrements or increments the predictor value on a misprediction or a prediction hit, respectively, until the saturation value is reached. Two-bit predictors improve prediction accuracy over one-bit predictors in the case of nested loops. Multiple-bit predictors need more storage space, and evaluations showed that branch prediction accuracy is not improved. In principle, multiple-bit predictors change a learned habit more slowly, and the number of used bits determines how slowly the habit change occurs. Multiple-bit predictors can be seen as a kind of Markov predictor if the multiple-bit representation is replaced by the frequencies of taken or not taken. One-, two-, and multiple-bit predictors are first candidates for a transfer to context prediction.
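To make the saturating-counter scheme concrete, here is a minimal Python sketch following the common textbook formulation (class name, initial state, and the test pattern are illustrative, not taken from the cited evaluations):

```python
# An n-bit saturating-counter branch predictor; n = 2 gives the classic
# two-bit scheme described above (states 00..11).

class SaturatingPredictor:
    def __init__(self, bits=2):
        self.max_state = (1 << bits) - 1       # e.g. 3 == 0b11, strongly taken
        self.state = self.max_state // 2 + 1   # start weakly taken (0b10)

    def predict(self):
        # The upper half of the counter range predicts "taken".
        return self.state > self.max_state // 2

    def update(self, taken):
        # Move one step toward the observed outcome, saturating at the ends.
        if taken:
            self.state = min(self.state + 1, self.max_state)
        else:
            self.state = max(self.state - 1, 0)

# Example: a nested-loop pattern T T T N, repeated. After warm-up the
# two-bit predictor mispredicts only once per period, where a one-bit
# predictor would mispredict twice (at the loop exit and re-entry).
p = SaturatingPredictor(bits=2)
hits = 0
pattern = [True, True, True, False] * 8
for outcome in pattern:
    hits += (p.predict() == outcome)
    p.update(outcome)
print(f"accuracy: {hits / len(pattern):.2f}")
```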
samples were cooled down immediately after the first TG step, and the isolated residues were investigated by X-ray powder diffraction and elemental analysis. As can be seen in Fig. 4, the experimental patterns for the residues obtained from the decomposition of compounds II and III are in good agreement with those calculated for the corresponding 2:1 compounds V and VI from single crystal data (Fig. 4). In addition, the results of an elemental analysis of these residues are in good agreement with calculated data. Because the 2:1 compound with CuCl is unknown, no authentic powder pattern is available. However, the elemental analysis as well as the results of the DTA-TG investigations strongly suggest that this compound has formed during the first reaction, and therefore the diffraction pattern shown in Fig. 4 should belong to this compound.
The fluorescence decay curves of 1–3 were measured with a single-photon counting apparatus. Simulating these curves with exponential functions yields the mean lifetimes τ̄ given in Table 2. Since, according to the formula scheme, three rotamers can in principle occur for 1 and 2, and two rotamers for 3, a corresponding number of decay functions must be expected. If a rotamer is barely populated, if two rotamers show practically the same decay behavior, or if the equilibration between excited rotamers is fast compared to the lifetime of the S1 state, the number of exponential functions is reduced. At room temperature, a single exponential function already suffices for a good fit for 1 and 3. These compounds thus stand in contrast to 2, 4, or 5, whose room-temperature fluorescence decays very clearly non-monoexponentially. Only in the frozen solvent does 1 gradually require two, and at 15 K even three, independent exponential functions to obtain a satisfactory fit of the decay curves. Fig. 3 shows a typical example of mono- and non-monoexponential decay behavior. The solvent, too, has a decisive influence on the mean lifetimes τ̄. In the last column of Table 2, an attempt is made to infer the percentage rotamer distribution from the products A_i·τ_i. However, the numerical values obtained do not give a reliable picture of the ground-state population. That would only be the case under very restrictive preconditions. It must be assumed that the individual rotamers of a compound possess different fluorescence quantum yields
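As an illustration of this kind of analysis, the following Python sketch (entirely synthetic data and invented parameter values, not the authors' procedure) fits a decay curve with a sum of exponentials and derives the products A_i·τ_i and an amplitude-weighted mean lifetime:

```python
import numpy as np
from scipy.optimize import curve_fit

def multi_exp(t, *params):
    # params = (A1, tau1, A2, tau2, ...)
    amps, taus = params[0::2], params[1::2]
    return sum(A * np.exp(-t / tau) for A, tau in zip(amps, taus))

# Synthetic two-component decay as stand-in data (arbitrary units).
rng = np.random.default_rng(42)
t = np.linspace(0.0, 20.0, 400)
y = multi_exp(t, 0.7, 1.5, 0.3, 6.0) + rng.normal(0.0, 0.005, t.size)

# Fit; the number of exponentials is fixed by the length of p0.
popt, _ = curve_fit(multi_exp, t, y, p0=[1.0, 1.0, 0.5, 5.0])
amps, taus = popt[0::2], popt[1::2]

contrib = amps * taus                  # products A_i * tau_i
print("relative contributions:", contrib / contrib.sum())
print("mean lifetime:", contrib.sum() / amps.sum())
```

As the text notes, the relative contributions A_i·τ_i reflect the ground-state rotamer populations only under restrictive assumptions, e.g. equal fluorescence quantum yields for all rotamers.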
mark, and partly to avoid measurement problems for p_t at the end of the sample.
5.1 Model and Rank Determination
A third-order vector autoregression with a restricted constant is fitted to the data from 1991:1 to 1994:1, giving a sample size of T = 37 − 3 = 34. The lag length is chosen so as to ensure that the mis-specification tests pass. As the explosive component will now be eliminated, two lags would perhaps have been preferable on grounds of parsimony. Modelling the data right until the end of the hyper-inflation in 1994:1 resolves the first puzzle set out in §2.1. While this only represents a modest gain in degrees of freedom, the importance lies in the ability to analyse the hyper-inflation to the end. This is where Cagan's theory is meant to work best. Mis-specification tests supporting the model are reported in Table 7. Graphical tests, not reported here, include recursive tests, and they are likewise supportive of the model. This shows that a well-specified joint model with time-invariant parameters can be established.
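A sketch of how such a cointegration rank determination and a VECM with restricted constant could be reproduced with statsmodels is given below; the synthetic two-variable data, the significance level, and the det_order choice are placeholders, and the original model is not claimed to have been estimated this way:

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.vector_ar.vecm import VECM, select_coint_rank

# Stand-in for the actual series: 37 monthly observations of two
# cointegrated random walks (the real data are not reproduced here).
rng = np.random.default_rng(0)
common = np.cumsum(rng.normal(size=37))
data = pd.DataFrame({"m": common + rng.normal(size=37),
                     "p": common + rng.normal(size=37)})

# A VAR(3) in levels corresponds to k_ar_diff = 2 lagged differences
# in the VECM representation.
rank_test = select_coint_rank(data, det_order=0, k_ar_diff=2,
                              method="trace", signif=0.05)
print(rank_test.summary())

# Restricted constant: deterministic="ci" places the constant inside
# the cointegration relations.
res = VECM(data, k_ar_diff=2, coint_rank=rank_test.rank,
           deterministic="ci").fit()
print(res.summary())
```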
demand. With further analysis, it became apparent that a form of empirical relationship, typically represented within SD models using a table function, would be required for part of this model. Despite a general consensus on the likely shape of the table function, however, there was no hard data available to confirm this. This was overcome by developing a 'micro-scale' agent-based model to replicate the workforce and using this to determine the nature of the table function required in the SD 'macro' model. The paper describes in detail how this was achieved. In conclusion, Homer (1999, p. 159) reflects that: "SD model parameters should be estimated using data below the level of aggregation of model variables wherever possible." However, this is not always straightforward, especially when historical data at a lower level of aggregation are used in an SD model designed to predict future outcomes beyond the range of past experience. Here, the process of developing a micro-level model of service queuing and task allocation to individual engineers in order to develop a key table function for service readiness was considered successful. Homer reflects that, while it was time-consuming to complete, the process produced greater insight into the issues around cross-training and achieved much more stakeholder buy-in than if the table function had been based purely on assumption or judgement.
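For concreteness, an SD table function of the kind discussed is essentially a piecewise-linear lookup; in the sketch below the support points stand in for values estimated from the micro-scale agent-based model (all numbers are invented for illustration):

```python
import numpy as np

# Support points as they might be estimated from the micro-level model.
x_pts = np.array([0.0, 0.25, 0.5, 0.75, 1.0])   # e.g. engineer utilisation
y_pts = np.array([1.0, 0.97, 0.9, 0.7, 0.4])    # e.g. service readiness

def table_function(x):
    # Linear interpolation between table points, clamped at the ends,
    # mirroring how most SD tools evaluate table functions.
    return np.interp(x, x_pts, y_pts)

print(table_function(0.6))  # readiness at 60% utilisation
```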
Modelling the dynamics of physical or technical processes usually leads to differential equations. If the states of a system modelled in this way are restricted, additional constraints, represented by algebraic equations, must be included. Such constraints may arise from conservation laws or geometric considerations, e.g., the wheel of a car, which should preferably stay connected to the ground. It is possible to incorporate these constraints into the system variables and transform the system algebraically to a so-called ordinary differential equation (ODE) in minimal coordinates. In this way the constraints are always perfectly fulfilled, but the effort necessary for these transformations may be considerable and, especially for large or nonlinear systems, barely manageable. Due to changes of basis, the variables in the system may lose their physical meaning. An alternative approach is to differentiate the whole system until, by algebraic means only, it can be transformed into an ODE, the so-called underlying ordinary differential equation or uODE. The drawback of this method is that the constraints no longer appear explicitly. Under some strong assumptions, local bijections exist between the solution set of a differential equation with constraints, the solution set of the ODE in minimal coordinates, and the solution set of the underlying ODE. Analytically, in the first two cases, the constraints are always fulfilled. Due to roundoff and approximation errors, the numerical solution of the underlying ODE almost inevitably drifts away from the set defined by the constraints. In order to prevent this phenomenon, the algebraic constraints have to be kept and integrated together with the differential equations. The resulting systems are called differential-algebraic equations (DAEs). For further reading, we refer to, e.g., [7, 19, 70, 72, 92, 128].
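The drift phenomenon is easy to reproduce. The following Python sketch (a standard textbook example, not taken from the cited references) integrates the underlying ODE of a planar pendulum, obtained by differentiating the constraint x² + y² = L² twice to eliminate the Lagrange multiplier, and monitors how far the numerical solution leaves the constraint set; g, L, and the tolerances are illustrative:

```python
import numpy as np
from scipy.integrate import solve_ivp

g, L = 9.81, 1.0

def underlying_ode(t, s):
    x, y, vx, vy = s
    # Multiplier from the twice-differentiated constraint
    # x*x'' + y*y'' + x'^2 + y'^2 = 0 with x'' = -lam*x, y'' = -lam*y - g.
    lam = (vx**2 + vy**2 - g * y) / L**2
    return [vx, vy, -lam * x, -lam * y - g]

s0 = [L, 0.0, 0.0, 0.0]                 # start horizontal, at rest
sol = solve_ivp(underlying_ode, (0.0, 50.0), s0, rtol=1e-6, atol=1e-9)

x, y = sol.y[0], sol.y[1]
drift = np.abs(x**2 + y**2 - L**2)      # should be exactly zero analytically
print(f"max constraint drift over 50 s: {drift.max():.2e}")
```

The drift grows with integration time; keeping the algebraic constraint in the system, i.e. a DAE formulation, avoids this.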
Clouds form due to movements of air in the atmosphere, and it is therefore the most important task of any physics-based cloud simulation to model these airflows. The physics discipline that deals with the flow of liquids and gases (fluids) is fluid dynamics. A scientifically accepted model for fluid flow is the Navier-Stokes equations, a set of nonlinear partial differential equations. More precisely, the following equations model incompressible fluid flow. This might at first appear to be an odd choice, since air is obviously a compressible fluid: its density changes with pressure and temperature. In the atmosphere, however, this change is very small because the air can move freely, and it is therefore a common simplification to consider air as an incompressible fluid. This also has technical reasons, since the models for compressible flow only allow small time steps in order to maintain accuracy and stability.
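For reference, the incompressible Navier-Stokes equations referred to above read, in a standard form (with velocity u, pressure p, density ρ, kinematic viscosity ν, and external force f; the source's exact notation may differ):

\[
\frac{\partial \mathbf{u}}{\partial t} + (\mathbf{u}\cdot\nabla)\,\mathbf{u}
  = -\frac{1}{\rho}\,\nabla p + \nu\,\nabla^{2}\mathbf{u} + \mathbf{f},
\qquad
\nabla\cdot\mathbf{u} = 0 .
\]

The first equation expresses momentum conservation; the second enforces incompressibility, i.e. a divergence-free velocity field.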
In this case, the task for an inventory manager is to choose the optimum values for the reorder point and the reorder quantity. There are classical analytical methods for this which are widely used. However, as will be shown below, these methods have severe drawbacks and often lead to suboptimal solutions. The problems arise in two respects: first, the usual assumptions on the stochastic structure of the customer demand are often not adequate and lead to wrong results. Second, the cost structure of the analytical models is very simple and often does not coincide with the real situation. For these reasons, inventories are often managed suboptimally: either costs are larger than needed, or the service level is lower than necessary.
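To illustrate the alternative, a simple Monte Carlo evaluation of a reorder-point/reorder-quantity (r, Q) policy can replace the analytical formulas; in the sketch below the demand distribution, the lead time, and all cost parameters are invented placeholders, not values from the text:

```python
import numpy as np

rng = np.random.default_rng(1)

def simulate_rq(r, Q, days=10_000, lead=5,
                hold=0.1, order=50.0, stockout=2.0):
    """Average daily cost of an (r, Q) policy under simulated demand."""
    stock, cost, pipeline = r + Q, 0.0, []   # pipeline: (arrival_day, qty)
    for day in range(days):
        stock += sum(q for d, q in pipeline if d == day)   # receive orders
        pipeline = [(d, q) for d, q in pipeline if d > day]
        demand = rng.poisson(4)              # assumed demand distribution
        sold = min(stock, demand)
        cost += stockout * (demand - sold)   # penalty for unmet demand
        stock -= sold
        cost += hold * stock                 # holding cost on remaining stock
        position = stock + sum(q for _, q in pipeline)
        if position <= r:                    # reorder when position hits r
            pipeline.append((day + lead, Q))
            cost += order
    return cost / days

# Grid search over candidate policies; the simulated cost surface can be
# compared with the classical analytical optimum.
best = min(((r, Q) for r in range(10, 40, 5) for Q in range(20, 80, 10)),
           key=lambda p: simulate_rq(*p))
print("best (r, Q):", best)
```

Because the demand model and the cost structure are explicit inputs here, both objections raised above can be addressed directly by substituting more realistic assumptions.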
I am pleased to acknowledge the rapid support of the Institut für Holztechnologie Dresden gemeinnützige GmbH (IHD), Dresden, Germany, where the competent and experienced local staff offered open-minded cooperation and willingness for any discussion. I am deeply grateful for the opportunity to have been able to perform some research at the Institute of Wood and Paper Technology, Technische Universität Dresden, Germany, with outstanding support by the local staff offering obliging and straightforward cooperation. Many thanks are also due to the Bundesanstalt für Materialforschung und -prüfung (BAM), Department of Non-destructive Testing (DEPT 8), Radiological Methods, for providing measuring equipment. Particularly the discussions with the former member Dr. Kurt Osterloh and his advice on some topics of the thesis are highly appreciated. I am very thankful to the staff members and management of the Electronic Wood Systems GmbH, Hameln, Germany, who made numerous measuring series possible and contributed to several results within a cooperation project and beyond. Susan J. Ortloff did a great job proofreading this comprehensive thesis. Part of the performed X-ray investigations are based upon experiments within the scope of the research project "Erforschung und Adaptierung von radiometrischen Verfahren zur Messung von Materialdichte und -feuchte an Holzwerkstoffen unter Berücksichtigung des strukturellen Aufbaus", funded by the German Federal Ministry of Economics and Technology on the basis of a decision by the German Bundestag, with the lead partner AiF Projekt GmbH. A travel grant from the Stiftung Holzwirtschaft, Hamburg, Germany, served as a valuable contribution enabling a conference visit in Brazil. A completion scholarship from the presidential board of the Ostwestfalen-Lippe University of Applied Sciences (now OWL University of Applied Sciences and Arts), Lemgo, Germany, offered a good deal of freedom in the final stages of my research work. All funding is gratefully acknowledged.
The expected performance of WALES has been simulated through the application of an end-to-end model. In this work we provide an assessment of WALES daytime performance in clear-sky and cloudy conditions. Expected performance is expressed in terms of systematic and noise errors as functions of altitude and SNR for three selected reference atmospheric models (tropical, sub-Arctic winter, and US Standard Atmosphere). An estimate of the major components of the systematic error is also provided. Real atmospheric data from existing lidar systems have also been considered to estimate the performance of WALES in variable atmospheric conditions, as well as to determine the effects on system performance associated with atmospheric inhomogeneities and variable cloud scenes.
Experimental investigations can be separated into indoor testing under laboratory conditions and outdoor testing. Laboratory investigations have the goal of varying only one parameter while keeping all other parameters as constant as possible. This allows precise parameter studies, which can lead to a better basic understanding. Another benefit of laboratory investigations is the possibility to verify analytical calculations on the one hand and to provide input parameters to model-based simulations on the other hand. Outdoor testing faces realistic testing conditions concerning the texture and condition of the road, environmental and weather influences, etc. Particularly for accident research, realistic testing conditions are of great interest. In what follows, the description of test methods is limited to outdoor testing.
TP materials are a very large polymer class, and this allows one to find a suitable polymer for nearly every application. Among the TP polymers, polycarbonate (PC) is an important engineering polymer that has been widely used since its development in 1953 and first production in 1960. 61 Its main features are transparency, toughness, and high temperature stability. It is applied in car parts (e.g. dashboards, headlights), glazing, lighting, housings for electrical equipment, packaging (e.g. milk bottles), or as compact discs (CDs). Apart from these uses, this material has some drawbacks which limit its applications. 62-65 In order to overcome the limitations, PC is often blended with another engineering polymer, like acrylonitrile-butadiene-styrene (ABS). 66 PC/ABS is one of the most successful commercial polymer blends since the first patent, dating from 1964. 67 This blend combines the good mechanical and thermal properties of PC with the ease of processability of ABS. PC/ABS is actually a ternary blend, since ABS itself usually consists of styrene-acrylonitrile random copolymers (SAN) and a dispersion of grafted polybutadiene (PB). The properties of such a ternary blend will depend on the structural properties of the components. Sometimes a reinforcing material like glass fiber is added to the blending mixture in order to increase the impact strength. Glass fibers also increase the surface roughness, which aids adhesion. Poly(styrene-co-maleic anhydride) (SMA) is another engineering material that has a wide range of uses, but this material is often brittle at temperatures as low as … °C. 68
Most generally, empirical studies on trade impacts can be divided into ex-post and ex-ante analyses. Whereas ex-post studies focus on the effects of trade policies that have already been implemented, ex-ante analyses simulate the effects of potential future (and actual) trade policies (Piermartini and Teh 2005). In other words, ex-post studies have both pre- and post-reform data at their disposal, while ex-ante studies rely exclusively on pre-liberalization data. Ex-post analyses have the advantage of being grounded in real-world observations; however, their difficulty lies in applying appropriate statistical methods to separate the impact of a given trade-policy reform from any other shock affecting the economy in the observation period (Hertel and Reimer 2005; Piermartini and Teh 2005). This identification problem is absent in ex-ante studies, which conduct counterfactual analyses, as they allow the trade-policy shock to be simulated explicitly and exclusively (Hertel and Reimer 2005). However, simulation studies encounter other challenges, namely to verify the assumptions concerning the model specification (e.g., parameters and functional forms) and, thus, to ensure the quality of the results (Piermartini and Teh 2005; Winters et al. 2004). Their strength, in turn, is to reveal possible orders of magnitude of a policy impact, to identify relative winners and losers, and to give insights into the quantitative importance of the mechanisms behind the effects of a given trade-policy reform on poverty (Winters 2003; Winters et al. 2004; Bourguignon et al. 1991).
The feature-based matching approaches use only some distinctive features instead of the entirety of the image; the data processing can therefore be more computationally efficient. On the other hand, FBM normally requires sophisticated image processing for feature extraction, and reliable matching consequently depends on the robustness of feature detection. In view of these facts, feature-based approaches are typically applied when the pattern patch has "strong" features, i.e. when the local structural information is more significant than the information carried by the intensity levels of the image, as in face recognition and panorama generation. Our pattern patches are obviously not of that kind. Accordingly, the FBM approaches may only be applied as a supplement in special cases of radar target detection.
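For illustration, a typical FBM pipeline of the kind contrasted here can be sketched with OpenCV's ORB features (file names and parameters are placeholders; this is not the system described in the text):

```python
import cv2

img1 = cv2.imread("pattern_patch.png", cv2.IMREAD_GRAYSCALE)
img2 = cv2.imread("search_image.png", cv2.IMREAD_GRAYSCALE)

# Detect keypoints and compute binary descriptors.
orb = cv2.ORB_create(nfeatures=500)
kp1, des1 = orb.detectAndCompute(img1, None)
kp2, des2 = orb.detectAndCompute(img2, None)

# Brute-force Hamming matching with cross-check for reliability. On
# patches without "strong" local structure, few keypoints are found and
# the match quality collapses, which is the limitation noted above.
matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)
print(f"{len(matches)} matches, best distance {matches[0].distance}")
```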
The fate of the contaminant metals is, however, not sufficiently described by complex stability, since the mobilising effect is also determined by the solid-liquid partitioning of the carrier colloids, which is subject to the geochemical circumstances and will not be invariant in the course of migration over long distances. For the purpose of comprehensive modelling, it is therefore necessary to specify the respective conditions leading to an enhancement or confinement of migration. Within the scope of the project, systematic investigations on the fundamental partitioning processes (complexation, adsorption, precipitation) were performed by static and dynamic experiments employing various humic materials and geogenic solids. Altogether, the investigations focused on the implications of colloid charge compensation induced by protons (influence of pH) and trivalent metal ions. Due to their high binding affinity, multivalent metals are expected to exhibit an especially high competitive potential with respect to actinide-humate complexation. Additionally, these metals are known to be most effective as coagulants. In view of the humic colloid-borne transport, the coagulation-flocculation process may be more important than the competitive effect in that it results in complete immobilisation. For this reason, both aspects were considered jointly in the present study. Most experiments were conducted with aluminium on account of its abundance. Comparative measurements with gallium, indium, scandium, yttrium, and lanthanum were performed in order to determine the scope of specific properties at equal ion charge.