In section 3.2.2, we already mentioned that Mesos is capable of using different container technologies as containerizers for tasks as well as executors. That is, frameworks define tasks that run an arbitrary executable like /bin/echo, and Mesos takes care of launching the command within a container. If desired, it is also possible to instruct Mesos to put executors, which act as supervisor processes for tasks, in containers. That makes rolling out new custom executors and switching between existing ones much more convenient. Mesos is able to leverage third-party container technologies like Docker to implement containerizers, but meanwhile also brings its own container tools, which are built upon the same set of kernel features as Docker or LXC, albeit with a smaller feature set. From a Mesos perspective, integrating a custom solution for containerization is a reasonable goal, as this approach removes the need for external dependencies. If, for example, Docker is used as a container provider, it must be ensured that each host running a Mesos agent has the Docker engine installed. Apart from leaving containerization of tasks to Mesos, frameworks may, as an alternative, launch containers explicitly by themselves. For instance, Marathon supplies a JSON-based abstraction for the definition of tasks as Docker containers. Behind the scenes, these task definitions are translated into plain shell commands using the Docker CLI, which are then passed to Mesos and launched by the default command executor.
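To illustrate the last step, the following is a minimal sketch of how a container task definition could be translated into a plain `docker run` shell command. The field names (`container`, `image`, `portMappings`, `cmd`) are illustrative assumptions, not the actual Marathon JSON schema.

```python
def task_to_shell(task):
    """Build a docker CLI command string from a task definition dict.

    Hypothetical translation, mimicking the idea of turning a
    JSON-based container task definition into a shell command.
    """
    container = task["container"]
    parts = ["docker", "run"]
    # Map host ports to container ports, if any are declared.
    for mapping in container.get("portMappings", []):
        parts.append("-p")
        parts.append(f"{mapping['hostPort']}:{mapping['containerPort']}")
    parts.append(container["image"])
    parts.extend(task.get("cmd", "").split())
    return " ".join(parts)

task = {
    "container": {
        "image": "busybox",
        "portMappings": [{"hostPort": 8080, "containerPort": 80}],
    },
    "cmd": "echo hello",
}
print(task_to_shell(task))
# docker run -p 8080:80 busybox echo hello
```

The resulting command string is what would then be handed to the default command executor as an ordinary shell command.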
Abstract—Communication, Navigation and Surveillance (CNS) infrastructure in civil aviation must evolve as fast as possible to cope with the challenges posed by the growth of the worldwide population, globalization and the ever-increasing demand for mobility. Analogue systems are being replaced by digital means, automation is becoming much more important for handling new entrants in the air traffic system, spectrum saturation must be solved by introducing digital systems, and the safety and security of the safety-critical infrastructure surrounding civil aviation must be constantly updated to support the ever-growing complexity of the system. As one of the Future Communication Infrastructure (FCI) candidates, we introduce LDACS as the very first true integrated CNS system worldwide. In previous works we have already analyzed its cybersecurity, developed an architecture with corresponding algorithms, and proved the improvement in cybersecurity due to our security additions. Here we implement the LDACS cybersecurity architecture and evaluate the impact of the introduced security overhead on the LDACS system. We conclude that the proposed protection mechanisms successfully mitigate previously identified risks and add only minor time, data and computation overhead on top of the LDACS protocol stack, making the security solutions a good candidate for inclusion in the SESAR wave 2 updated LDACS specification.
The paper will present an analysis of relevant measurement results of hybrid wood joist floor constructions from laboratory and field objects. The paper will discuss possible improvements of such floor constructions. From the "Silent Timber Build" project, some relevant laboratory measurement data exist. Due to the focus on impact sound insulation, some apartments with hybrid joist floors have been built and data collected. This paper is part of a project at SINTEF Building & Infrastructure aiming to develop robust solutions also involving HVAC components inside the partition structure. Further progress in this project will include new laboratory
This master’s thesis evaluated different ORBs to determine whether CORBA is a suitable technology for the adaptation of the existing image processing system at voestalpine Stahl. An ORB should fulfill certain criteria that are of significance for the company, but also personal requirements to simplify the development of CORBA applications. An additional requirement on an ORB is that it should offer performance that can compete with a low-level implementation, e.g. C++ sockets. For the performance evaluation, the computational latency of the invocation of a function on a CORBA object was measured. It determines the time needed from calling a servant function until the results are returned to the client. For that purpose, a list of various CORBA implementations that seemed to be of interest was compiled. A criteria evaluation was conducted on them regarding requirements like programming language, real-time support, quality of documentation and tutorials, kind of license (commercial or open-source), etc. Quite a few CORBA implementations passed the criteria evaluation. However, within the scope of this master’s thesis, not all of them could be evaluated further to measure their performance. Therefore, only a few were selected from the list of suitable ORBs: (1) The ACE ORB (TAO), (2) omniORB, (3) MICO Is CORBA (MICO), and (4) VisiBroker.
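The latency measurement described above can be sketched as a simple timing loop: invoke the servant function repeatedly and average the round-trip time. The sketch below is a minimal stand-in in Python, assuming a local callable in place of an actual CORBA object invocation through an ORB; it is not the thesis's measurement harness.

```python
import time

def measure_latency(invoke, repetitions=1000):
    """Return the mean round-trip time in seconds of calling `invoke`.

    Averaging over many repetitions smooths out timer resolution
    and scheduling noise.
    """
    start = time.perf_counter()
    for _ in range(repetitions):
        invoke()
    return (time.perf_counter() - start) / repetitions

# Stand-in for a remote servant function; in the thesis setting this
# would be an invocation on a CORBA object via the ORB under test.
def servant_echo():
    return "echo"

mean = measure_latency(servant_echo)
print(f"mean latency: {mean * 1e6:.2f} µs")
```

For a real ORB comparison, `invoke` would be replaced by a stub call that crosses the ORB's marshalling and transport layers, so the measured time captures the full client-to-servant round trip.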
because the large absolute deviations would otherwise distort the overall evaluation. With the exception of the somewhat poorer performance of the LDA functional SVWN for bond angles, all MAEs for bond lengths and bond angles are essentially on par. EXX admixture in the GHs TPSSh and PBE0 tends to shorten bond lengths compared to their non-hybrid counterparts, leading to left-shifted error histograms (cf. Figure A.2 in Appendix A.8). Among the LHs, no clear trends are observed, i.e., neither the underlying LMF model nor the LMF prefactor significantly alters the averaged results. Moderate prefactors as in LH07t-SVWN tend to deliver somewhat more uniform performance for bond lengths, as can be seen from Figure A.2. Out-of-plane angles show significantly larger MAEs than bond angles for all functionals, but the low number of values in the test set prevents a more definite analysis. No major differences between singlet and triplet ESs are observed for bond lengths, indicating that structures of the considered triplet ESs are not particularly more challenging than the singlet ESs included in the test set. The distinctly larger MAEs for bond angles of triplet ESs are presumably a statistical artifact of the small number of values for the triplet case.
hybrid optimum ranking criterion (Bonnafous and Jensen, 2005): the ‘output’, defined as the ratio of the socioeconomic NPV to the amount of subsidy it needs. In the case of infrastructure financed exclusively by public subsidies, the public objective function has traditionally been the NPVse provided by the program of scheduled projects, the question of their optimal ranking being solved by the decreasing order of their IRRse's, with the rhythm of their implementation depending on the available budget. We have shown that in the case of a PPP, and more generally when the projects are partially financed by the users, with the objective function of the public authority still being the total NPVse of the program, the decreasing order of the IRRse's does not provide the optimal ranking: the pure financial IRR is a better ranking criterion, and this is the truer the tighter the budget constraint. The ratio of the socioeconomic NPV to the amount of subsidy required is a still better criterion, in fact the best. We can conclude, therefore, that both the tyranny of financial profitability and the error of ranking by the IRRse become issues as soon as the user becomes involved in the financing of water infrastructure.
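A small numerical illustration of how the two ranking criteria can diverge: the sketch below ranks three hypothetical projects (all figures invented for illustration) once by their socioeconomic IRR and once by the NPVse-to-subsidy ratio advocated above.

```python
# Made-up project data: socioeconomic NPV, socioeconomic IRR,
# and the public subsidy each project requires.
projects = [
    {"name": "A", "npv_se": 120.0, "irr_se": 0.08, "subsidy": 100.0},
    {"name": "B", "npv_se": 90.0,  "irr_se": 0.12, "subsidy": 30.0},
    {"name": "C", "npv_se": 60.0,  "irr_se": 0.10, "subsidy": 15.0},
]

# Traditional criterion: rank by decreasing socioeconomic IRR.
by_irr_se = sorted(projects, key=lambda p: p["irr_se"], reverse=True)

# Ratio criterion: socioeconomic NPV per unit of subsidy required.
by_ratio = sorted(projects, key=lambda p: p["npv_se"] / p["subsidy"],
                  reverse=True)

print([p["name"] for p in by_irr_se])  # ['B', 'C', 'A']
print([p["name"] for p in by_ratio])   # ['C', 'B', 'A']
```

Under a tight budget constraint, the ratio ranking maximizes the total NPVse delivered per unit of scarce subsidy, which is why the two orderings differ whenever subsidy intensity varies across projects.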
The microservice architecture pattern is a paradigm for building applications through the composition of small independent services, called microservices. Each microservice runs in its own process and communicates with other services via network calls. To establish the exchange of information between microservices, each one must expose an Application Programming Interface (API). The microservice pattern builds on the concepts of Service-Oriented Architecture (SOA), which puts an emphasis on the design and development of highly maintainable and scalable software. Microservices manage growing complexity by decomposing large systems into a set of services. This approach focuses on loose coupling and high cohesion, and it is beneficial in terms of modularity, maintainability and scalability. Companies such as Netflix and Amazon have joined the trend of decoupling large monolithic systems into sets of independent services. Figure 1 shows a taxi-hailing company’s infrastructure represented as a monolithic application and a refactored version, which uses microservices.
Use Case 3: CAD-Based Three-Dimensional Assembly Instruction
One use case provides three-dimensional assembly instructions based on CAD drawings; it represents one of the final steps of the production process and exemplifies the implementation of support systems in the assembly line. Participants step in for manufacturing personnel and are put in charge of assembling the still uninstalled parts of the produced good. While they initially have to perform the assembly steps with a printed two-dimensional manual, they subsequently experience various enhancements using the three-dimensional instructions given on an installed touch screen, where further information on tools, materials provisioning and a description of the next work steps is interactively displayed. The relevant three-dimensional instructions may be loaded directly from the engineering department’s database by scanning an RFID tag which was attached to the respective kart in advance. This ensures an up-to-date data set in the sense of a single source of truth. If they were to rerun the assembly steps, participants could use individualized and stripped-down instructions depending on their skill level and thereby save further non-value-added time. In the end, the participants have learned how improved and individualized instructions can boost the learning curve, prevent mix-ups and provide a data foundation for better process time scheduling. This case does not yet aim at improving picking times, which could be further enhanced by installing a pick-by-light solution.
In the acoustic simulation, receivers can be considered as virtual microphones, i.e., receivers return the output signals of the acoustic simulation. Two receiver types are used: A spatial receiver encodes the direction of the direct and reflected sound sources in 3rd order ambisonics, and adds the diffuse sound sources to the first order components of the receiver output. Omnidirectional receivers with a single output are used to return the signal of the direct and reflected sources to the external diffuse reverberation generators. Both receiver types apply the distance and air-absorption model to the source signals. The omnidirectional receivers can have a finite range box. If a source is within the range box of a receiver, the distance law is only applied to the delay and not to the gain and air-absorption model. In this way, all sources within a simulated room can contribute with the same gain to the diffuse reverberation. Both receiver types can be restricted to render only sources within a range box, and they can also be restricted to render only direct point sources, mirrored point sources or diffuse sources.
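The range-box rule above can be sketched as follows: the propagation delay always follows the source-receiver distance, but the distance-dependent gain is bypassed for sources inside the receiver's range box. The sketch below is a simplified model assuming a 1/r gain law and omitting the air-absorption filter; it is not the actual simulation code.

```python
SPEED_OF_SOUND = 343.0  # m/s

def in_range_box(pos, box_min, box_max):
    """True if a 3D position lies inside the axis-aligned range box."""
    return all(lo <= x <= hi for x, lo, hi in zip(pos, box_min, box_max))

def receiver_response(distance, inside_box):
    """Return (delay, gain) for a source at `distance` metres.

    The delay always follows the distance law; the 1/r gain (and, in
    the full model, air absorption) is skipped inside the range box,
    so all in-room sources feed the diffuse reverberation equally.
    """
    delay = distance / SPEED_OF_SOUND
    gain = 1.0 if inside_box else 1.0 / max(distance, 1.0)
    return delay, gain

print(receiver_response(34.3, True))    # delay 0.1 s, gain 1.0
print(receiver_response(2.0, False))    # delay ~5.8 ms, gain 0.5
```

Clamping the distance to at least 1 m in the gain term is a common guard against divergence for very close sources and is an assumption here, not part of the described system.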
The partitioning stage consists of domain decomposition and functional decomposition and designates separate tasks for the parallel processing. There are some recommendations on the number of tasks (e.g., at least an order of magnitude more than processing elements), the granularity of tasks, and the scalability of the partition. In the communication stage, locality and structure of the communication are considered. For instance, we might consider reordering the data so that one-to-one communications happen more often between neighbouring processing elements (PEs). Furthermore, it is important that one-to-many (many-to-one) schemes do not exceed the bottleneck of the single sender (receiver). Another point of consideration is dynamic communication. Here the communication channels are not established beforehand, but are created while the computation is already in progress. Finally, asynchronous communication is possible. Foster underlines its importance in a distributed memory setting. Agglomeration stands for merging small tasks from the partitioning stage into larger tasks. An important aspect of agglomeration is to merge interdependent tasks together and to reduce the need for communication. Then, mapping designates where each task has to be executed. In this phase the tasks are assigned to processing elements; this is the so-called ‘process placement’. The optimal mapping problem is NP-complete [Bokhari, ], but there are some specialised heuristics and strategies for particular cases. One of the approaches to mapping is the use of various methods of load balancing, see, e.g., [Tantawi and Towsley, , Foster, , Kwok and Ahmad, a,b]. The load balancing methods include both partitioning approaches, like graph partitioning and round-robin balancing, and task scheduling approaches, like various master-worker schemes, including more sophisticated hierarchical and distributed master-worker implementations [Hamdi and Lee, , Shao et al., , Shao, , Aida et al., , Grama et al., ].
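Two of the mapping strategies mentioned above can be contrasted in a few lines. The sketch below (a toy model with invented task costs, not tied to any of the cited implementations) compares round-robin placement with a greedy load-balancing heuristic that assigns each task to the currently least-loaded PE.

```python
def round_robin(task_costs, num_pes):
    """Assign task i to PE i mod num_pes; return the per-PE loads."""
    loads = [0.0] * num_pes
    for i, cost in enumerate(task_costs):
        loads[i % num_pes] += cost
    return loads

def greedy_balance(task_costs, num_pes):
    """Longest-processing-time heuristic: sort tasks by decreasing
    cost, then always assign to the least-loaded PE."""
    loads = [0.0] * num_pes
    for cost in sorted(task_costs, reverse=True):
        loads[loads.index(min(loads))] += cost
    return loads

costs = [5, 1, 1, 1, 4, 4]
print(round_robin(costs, 2))     # [10.0, 6.0] -- imbalanced
print(greedy_balance(costs, 2))  # [8.0, 8.0]  -- balanced
```

Round-robin ignores task cost and can leave PEs imbalanced, while the greedy heuristic approximates the NP-complete optimal mapping well in practice; master-worker schemes achieve a similar effect dynamically by handing tasks out as workers become free.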
Eden-based master-worker schemes include [Peña and Rubio, , Loogen et al., , Priebe, , Berthold et al., , Dieterle et al., a].
The EFB is funded by the European Union (EU) (2015 – 2019). Sport and gymnastic organizations as well as scientific partners from ten institutions across eight European countries were involved in the development of the EFB (Austria, Sportunion; Belgium, Artevelde University of Applied Sciences; Bulgaria, BG Be Active; Denmark, Danish Gymnastic and Sports Federation; Germany, German Gymnastic Federation, Karlsruhe Institute of Technology; Slovenia, Sports Union Slovenia; Spain, UBAE; Europe-wide, International Sports and Culture Association). Throughout the entire year, the EFB can be accessed by all interested organizations in Europe. Data are gathered online by these partner institutions. Organizations in participating countries offer completion of the EFB in different settings such as sports club training sessions, community activities or public events such as the European Week of Sport, according to their abilities and outreach. As the overarching goal is to make the EFB available to any adult residing in Europe, the only exclusion criteria of the EFB are age < 18 years and answering “yes” to one or more items on the physical activity readiness questionnaire (PAR-Q) (Bös et al., 2017). The EFB is administered to participants by licensed instructors who successfully completed a one-day EFB instructor workshop. After completion of the EFB, each participant receives an individual certificate and seven pages of additional feedback on how to improve his/her fitness from those trained instructors. Collected data are saved on an online data platform accessible by all EFB instructors via an individualized access code. Data can be exported to Excel and SPSS through an anonymized output. The study procedures were approved by the ethics committee of the Karlsruhe Institute of Technology (Tittlbach et al., 2017). Participation in the EFB is voluntary and all participants provided written informed consent.
tax classifications or prevent the application of any tax classification at all.
Internally inconsistent features of the hybrid instrument result in it being subjected to different tax treatments in the various legislations or tax regimes in which it is used. For company financing purposes, the use of a hybrid instrument with characteristics that do not clearly indicate whether it is a debt or capital instrument may lead to scenarios in which one country considers the instrument in question to be debt, whereas another nation treats the same financing method as a capital instrument. In this type of situation, for companies operating in the EU, the payment of remuneration from the issue of hybrid instruments can become a tax-deductible interest expense, whereas remuneration received from the same instrument may be treated as dividends received from capital funding, which are subject to the participation exemption for the beneficiary of the dividend in question (Finnerty, Merks, Petriccione & Russo, 2007).
This has two effects. First, if for an angle lower than 90° more slices and, thus, more discrete lattice points are needed to achieve the same high-cSNR performance, these points have to be closer together in the discrete part. Therefore, a higher channel quality is needed for successful decoding (Figure 5.13). It may be noted that the base vectors of the lattice exhibit equal lengths, but their absolute length is not given during the design and is derived from the required high-cSNR performance. The second effect comes from the shape of the slices. An AMBC may be interpreted as a concatenation of a vector quantizer, a mapping of the quantizer indices to lattice points, and a transmission of the resulting quantization error as the analog part, while the lattice points and the analog part are additionally rotated in space. Here, each slice can be regarded as the Voronoi region of the vector quantizer. From quantizer theory it is known that spherical Voronoi regions are optimal [LG89] [Krü10], since the symbols in the “spikes” of the slices contribute significantly to the quantizer distortion, i.e., to the required transmission power in this context. Thus, the slice which is closest to a sphere is the square of the 90°
isolated. In theory, this isolation works without much effort thanks to the loose coupling of the individual services. In practice, however, it can quickly become challenging due to domain-level design decisions or incorrectly mapped distributions of responsibilities. The interfaces of the services form the communication foundation of the system and must therefore be tested intensively. Testing the interfaces is complex, since there are often a great many different call paths and isolation can be difficult here as well. A single microservice has as little complexity as possible and only as many lines of code as are necessary to fulfil a single encapsulated task within the overall system. A developer should be able to understand this task and the code implemented for it without much effort. As a consequence, service tests, which verify the communication and the correct interplay with external systems, are of particular importance, whereas the individual service with a large number of unit tests is less of a focus.
to express my sincere gratitude to my supervisor Prof. Dr.-Ing. Peter Vary. His continuous support and interest, numerous ideas and suggestions, as well as his outstanding ability to create an inspiring, co-operative, quality-driven working environment combined with a great amount of flexibility and friendliness make time at the IND exceptional. I am also indebted to my co-supervisor Prof. Dr.-Ing. Anke Schmeink for showing her interest in my work. Furthermore, I would like to thank my current and former colleagues at the IND for providing a very enjoyable working atmosphere, for fruitful discussions and collaboration, as well as very valuable proof readings. Thank you, Andreas Heitzig, Andreas Welbers, Annika Böttcher, Aulis Telle, Bastian Sauert, Benedikt Eschbach, Bernd Geiser, Birgit Schotsch, Christiane Antweiler, Christoph Nelke, Daniel Haupt, Florian Heese, Hauke Krüger, Heiner Löllmann, Helge Lüders, Kim Nguyen, Laurent Schmalen, Magnus Schäfer, Marc Adrat, Marco Jeub, Markus Niermann, Matthias Pawig, Max Mascheraux, Meik Dörpinghaus, Moritz Beermann, Roswitha Fröhlich, Simone Sedgwick, Stefan Liebich, Sylvia Sieprath, Thomas Esch, Thomas Schlien, Tim Schmitz, and Tobias Breddermann. I would also like to express my appreciation to all the students who made significant contributions to my work.
Instead of vector arithmetic, the SIMT model uses threads to work on a large number of elements at once. The cores for the threads are relatively simplistic, at least compared to a CPU core, and are bundled into groups. In each group, the cores share a local memory that effectively acts as a cache, and an instruction unit. While all cores share the same instruction unit, the code can branch into different cases. This is possible because a core can decide whether an instruction is carried out or not. When a branch is executed, a thread ignores the commands if the condition for entering the branch is not met. Since there is only a single instruction unit, the branches have to be executed sequentially. While a branch is executed, only the threads in the branch perform the instruction. Excessive branching can therefore have a significant impact on performance, because only a few threads are working while the rest are idling. [9, 15, 1]
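The serialization of a branch can be modelled with an explicit thread mask: the taken path executes while the other threads are masked off, then the mask flips for the alternative path. The following is a pure-Python model of this behaviour (with an arbitrary even/odd condition and arbitrary branch bodies for illustration), not real GPU code.

```python
def simt_branch(values):
    """Model one if/else over a group of SIMT threads.

    `values` holds one element per thread; the two passes below mimic
    the sequential execution of the two branch paths under a mask.
    """
    mask = [v % 2 == 0 for v in values]  # per-thread branch condition
    out = list(values)
    # Pass 1: only threads with mask=True execute the "if" body;
    # the others sit idle, exactly as on the shared instruction unit.
    for i, active in enumerate(mask):
        if active:
            out[i] = out[i] // 2
    # Pass 2: the mask flips and the remaining threads execute "else".
    for i, active in enumerate(mask):
        if not active:
            out[i] = out[i] * 3 + 1
    return out

print(simt_branch([1, 2, 3, 4]))  # [4, 1, 10, 2]
```

The two sequential passes make the performance cost visible: with heavy divergence, each pass keeps only part of the group busy, so the group takes the combined time of both paths.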
The present work has to be seen in the context of real-time on-board image evaluation of optical satellite data. With on-board image evaluation, more useful data can be acquired, the time to obtain requested information can be decreased, and new real-time applications become possible. Because of its relatively high processing power in comparison to its low power consumption, Field Programmable Gate Array (FPGA) technology has been chosen as an adequate hardware platform for image processing tasks. One fundamental part of image evaluation is image segmentation. It is a basic tool to extract spatial image information, which is very important for many applications such as object detection. Therefore, a special segmentation algorithm using the advantages of FPGA technology has been developed. The aim of this work is the evaluation of this algorithm. Segmentation evaluation is a difficult task. The most common way of evaluating the performance of a segmentation method is still subjective evaluation, in which human experts determine the quality of a segmentation. This approach does not meet our needs. The evaluation process has to provide a reasonable quality assessment, should be objective, easy to interpret and simple to execute. To meet these requirements, a so-called Segmentation Accuracy Equality norm (SA EQ) was created, which compares the difference of two segmentation results. It can be shown that this norm is suitable as a first quality measure. Due to its objectivity and simplicity, the algorithm has been tested on a specially chosen synthetic test model. In this work the most important results of the quality assessment are presented.
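The exact definition of the SA EQ norm is not given here; as a hypothetical stand-in for the general idea of comparing two segmentation results, the sketch below measures the fraction of pixels on which two segmentation label maps disagree (0 means identical segmentations).

```python
def segmentation_difference(seg_a, seg_b):
    """Fraction of pixels where two label maps disagree.

    `seg_a` and `seg_b` are 2D lists of integer region labels of
    identical shape. This is an illustrative measure, not the SA EQ
    norm itself.
    """
    flat_a = [label for row in seg_a for label in row]
    flat_b = [label for row in seg_b for label in row]
    assert len(flat_a) == len(flat_b), "segmentations must match in size"
    differing = sum(a != b for a, b in zip(flat_a, flat_b))
    return differing / len(flat_a)

reference = [[0, 0, 1],
             [0, 1, 1]]
result    = [[0, 0, 1],
             [1, 1, 1]]
print(segmentation_difference(reference, result))  # 1 of 6 pixels differs
```

On a synthetic test model, the reference segmentation is known by construction, which is what makes such an objective pixel-wise comparison possible in the first place.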
A microservice architecture, by contrast, defines a concrete software product as a collection of individual services, each of which communicates via simple protocols ("dumb pipes"). It is crucial here that the following can be determined individually for each service: the technological basis, the way the service is deployed, and individual scaling strategies. Furthermore, properties such as the automated building and testing of services can, according to Fowler and Lewis among others, also be counted among the characteristics of a microservice architecture.
R245fa is an HFC also known as pentafluoropropane and by its chemical name 1,1,1,3,3-pentafluoropropane. From the corresponding T-s diagram, shown in Figure 3.4, it can be considered an isentropic fluid. Unlike CFCs and HCFCs, it has no ODP and is nearly non-toxic. Its thermophysical properties and environmental characteristics make it suitable for different applications, such as centrifugal chillers, ORCs for energy recovery and power generation, sensible heat transfer in low-temperature refrigeration, heat pumps and passive cooling devices. It also has a broad range of applications as a foam blowing agent, solvent and aerosol. Although it is intended to remain trapped within foam insulation, it is practically non-biodegradable, with an atmospheric lifetime of 8.8 years when it eventually does escape. It has a high GWP of 930, but Honeywell refers to this as acceptable in their literature. Chemical analysis confirmed that the fluid can be considered stable at 300°C. One of the main disadvantages of R245fa, however, is its cost. Its price is continuously rising, since the fluid is subject to GWP taxes depending on the country's regulations.
cial and non-financial components that can promote deeply rooted institutional change towards transparency and democratic accountability. While the common intervention logic assumes that both goals can be mutually reinforcing, this parallel achievement is not a given. Especially in times of crisis, CPs will have to rank the relative importance of these two goals in order to decide whether to continue with disbursements or not. As the case of Zambia shows, this potential conflict of interest has existed but has not been made explicit. The relative priority given to governance promotion as opposed to financing poverty alleviation has differed among the CPs and has complicated a joint approach of the CPs in the dialogue process. The differences in goal priorities also partly explain why, for instance, Sweden temporarily suspended BS in 2009 while the EU increased its disbursements significantly through its V-Flex mechanism around the same time. It also explains why different CPs have attached different importance to the UPs. For some, these underlying principles – referring to the general reform process and governance context – are the most important part of a de facto conditionality. For those CPs that give high priority to governance promotion, every disbursement has at least implicitly to be made against an assessment of these principles. For those CPs, indicators of the PAF play only a supplementary role, while the fulfilment of the Underlying Principles is a key factor in triggering disbursements. For others, the relative priority of Underlying Principles and PAF indicators is just the opposite. For instance, the EU also regards BS as a “dynamic” instrument that can be implemented in countries with relatively weak governance structures because the instrument is assumed to be apt for improving those weaknesses. This interpretation can easily conflict with that of several bilateral donors such as Germany and Sweden.
The latter CPs perceive budget support as a suitable aid instrument only for countries which have already reached a certain level of political transparency and accountability. These different interpretations of the instrument’s intervention logic also partly explain the still fragmented analytical mechanisms. Individual CPs prefer to define the concrete terms for analysis according to their specific interpretations of the instrument’s intervention logic, which is not necessarily identical with that of other CPs. As one representative of a bilateral aid agency has put it, the core problem is straightforward (as was confirmed in several other interviews):