As a solution approach, this thesis introduces the Palladio Component Model (PCM) as a modelling language to specify component-based software systems. The PCM is a meta-model for performance-aware component-based software modelling which explicitly tackles the introduced issues of early design-time performance prediction in a component-based context. It supports parameterised component specifications, including parameterisation over the externally connected components, the hardware execution environment, and usage dependencies, including abstractions of the inter-component data flow. The PCM provides distinct modelling languages for each developer role participating in a component-based development process. It allows performance specifications with arbitrarily distributed stochastic performance annotations, thus lowering the risk of violating model assumptions when composing components from different sources. Its initial development started in the context of this work, but meanwhile extensions by other PhD theses exist, e.g., for parameterisations based on component usage (Koziolek, 2008).
Component-based software engineering aims at developing software systems by assembling pre-existing components to build applications. Advantages gained from this include a distribution of the development effort among various, independent developer roles, and the predictability of properties, e.g., performance, of the resulting assembly based on the properties of its constituting components. Especially during software design, system models abstract from system implementation details. These abstract models are the input of automatic, tool-supported architecture-based performance evaluation methods. However, as performance is a run-time attribute, abstracting from implementation details might remove performance-relevant aspects, resulting in a loss of prediction accuracy. Existing approaches in this area have two drawbacks: first, they insufficiently support the specifics of a component-based development process like distributed developer roles, and second, they disregard implementation details by focusing on design-time models only. The solution presented in this thesis introduces the Palladio Component Model, a meta-model specifically designed to support component-based software development with predictable performance attributes. Transformations map instances of this model into implementations, resulting in a deterministic relationship between the model and its implementation. The introduced Coupled Transformations method uses this relationship to significantly increase prediction accuracy by automatically including implementation details in predictions. The approach is validated in several case studies showing the increased accuracy as well as the applicability of the overall approach by third parties for making performance-related design decisions.
On the other hand, with current deductive verification methods, the price for component-based development is full re-verification on every composition step [75]. In other words, deductive verification results over components do not transfer to systems composed of such verified components. This can be particularly painful if verification of the whole system is strikingly more challenging than verification of individual components, and even more so if parts of the system (i.e., some of the components) have previously been verified, but must be re-verified as part of the system under consideration. For instance, a traffic network consisting of multiple intersections can become huge and hard to verify as a whole. Also, if an intersection is part of multiple models, it has to be re-verified repeatedly. Furthermore, even if only small parts of a system evolve, repeated re-verification of the whole system model is necessary. For instance, a collision avoidance system, where a robot tries to avoid colliding with an adversarial obstacle, might at first be verified using a stationary obstacle and subsequently be extended to an obstacle moving in straight lines and an obstacle moving in curves. In such a setting, even though the robot—probably representing the major part of the model—does not change, the whole system, including the robot, needs to be re-verified on every step. As hybrid system models are often developed in such an incremental fashion [73]—which entails that only small parts of a system evolve—and since deductive verification often requires human guidance, this re-verification can be very costly.
This definition already mentions most of the key aspects of components and component-based development. Composition is the principle by which individual components are composed to form a larger component assembly. The specification of components is given by the specifications of the component interfaces, which are the access points of the functionality provided by that component. In particular, these interface specifications should be seen as contracts between the user and the implementor of the component, and may include assumptions on the context (i.e. on the user of the component). The second sentence says that a component should be "deployed independently". This suggests that the component is an independent, encapsulated entity with its own local memory that works correctly as long as the contracts at its interfaces are respected by the environment. Finally, the importance of independent components also manifests in the phrase "(a software component) is subject to composition by third parties". Components are often delivered as compiled software units and later integrated by a system architect into a larger system, and therefore any context dependencies must be explicitly specified in the interface specifications to admit the construction of correct component assemblies.
defect, and it was treated the same way. Lymphocytes were obtained from the patients and transduced with a retroviral vector outside of the patients. The retroviral vector integrated the correct version of the γc cytokine receptor into the genome of the lymphocytes, and reinfusion alleviated the symptoms of the gene defect. The supposedly random gene integration, however, placed the repaired gene preferentially near the LMO2 proto-oncogene promoter, which led to the development of leukemia in three of ten patients (Hacein-Bey-Abina et al., 2003; Hacein-Bey-Abina, 2003). Two of the three patients with leukemia responded well to chemotherapy. Sadly, the third child died in October 2004. Another problem with viral vectors surfaced when Jesse Gelsinger, an 18-year-old patient suffering from a mild form of ornithine transcarbamylase deficiency, was enrolled in a phase 1 clinical trial to test the safety of an adenoviral vector against his disease. He developed a massive immune reaction to this vector with subsequent multi-organ failure and died after four days. He is considered to be the first person to have died from a gene therapy product (Lehrman, 1999). These tragic events sparked a global debate about the safety and risk/benefit ratios of gene therapy and severely lowered the number of approved trials in the following years (Gansbacher, 2003).
still a demand for a simple and more accessible method to produce gold nanoparticle assemblies with long-term stability.
In this chapter, a versatile liquid-phase method based on microemulsion has been developed for assembling gold nanoparticles to form dimers and immobilizing the assemblies with a silica shell. The phenomenon of aggregate micelles containing two or three gold nanoparticles has been reported in a study on synthesizing gold nanoparticles in block copolymer microemulsion during a certain stage of the reaction [30]. The intention of this study is to fuse the microemulsion droplets to achieve controlled assembly of gold nanoparticles and to use the microemulsion as a template to encapsulate the gold nanoparticle assemblies into silica to realize the immobilization. This stabilizes the gold particle dimers and prevents them from further agglomeration. The introduction of a silica coating also allows attaching and modifying the particles with other functional groups, as developed in silica nanoparticle chemistry. Furthermore, the introduction of other groups on the surface of gold-loaded silica nanoparticles could control the assembly and the distance to the gold particles.
This paper shows how the recently developed Functional Mockup Interface (FMI) standard for model exchange can be utilized in the context of AUTOSAR software component (SW-C) development. Automatic transformations between the XML schemas of the two standards are utilized to convert FMI models to AUTOSAR. An application example is demonstrated, where a Modelica controller is exported through FMI, converted to an AUTOSAR SW-C and then imported into an AUTOSAR tool. The presented approach, with FMI as an intermediate format, should be an attractive alternative to providing full-fledged AUTOSAR SW-C export.
In a first step the maximum design space and connecting interfaces to other parts in the assembly were defined (Fig. 5.12) to guarantee the fit of the optimisation result to the seat assembly. Interfacing regions are determined as frozen regions, which are not part of the design space for optimisation. The kinematics lever is dynamically loaded if the passenger takes the sleeping position. Current topology optimisation software is limited to static load cases. Therefore the dynamic load case is simplified to five static load cases, which consider the maximum forces at different times of the dynamic seat movement. Material input for the optimisation is based on an aluminium alloy (7075) which is commonly used in the aerospace industry: material density: 2810 kg/m³, E modulus: 70,000 MPa, yield strength: 410 MPa, ultimate tensile strength: 583 MPa and Poisson's ratio: 0.33. The objective criterion of the optimisation is a volume fraction of 15 % of the design space. The part is optimised regarding stiffness. In Fig. 5.13 the optimisation result as a mesh structure and an FEM analysis for verification of the structure are shown. The maximum stress is approx. 300 MPa, which is below the yield strength limit of 410 MPa. Before the manufacturing of the optimisation result via SLM, the surfaces are smoothed to improve the optical appearance of the part. Compared to the conventional part (90 g), a weight reduction of approx. 15 % (final weight 77 g, Fig. 5.14) was achieved. For a series production of this part, further improvements to increase the productivity of the process are needed.
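The reported figures can be checked with a quick back-of-the-envelope computation (an illustrative Python sketch; all values are taken from the paragraph above):

```python
# Sanity check of the reported topology-optimisation figures
# (all numbers taken from the text; this is not part of the original study).
yield_strength_mpa = 410.0   # aluminium alloy 7075 yield strength
max_stress_mpa = 300.0       # peak stress from the FEM verification run
weight_before_g = 90.0       # conventional part
weight_after_g = 77.0        # SLM-built, topology-optimised part

safety_factor = yield_strength_mpa / max_stress_mpa
weight_reduction = (weight_before_g - weight_after_g) / weight_before_g

print(f"stress safety factor: {safety_factor:.2f}")
print(f"weight reduction: {weight_reduction:.1%}")
```

The exact reduction is about 14.4 %, consistent with the "approx. 15 %" stated above, and the peak stress leaves a safety factor of roughly 1.37 against yield.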
Modia utilizes Julia's metaprogramming features to integrate an equation-based modeling language with a programming language (e.g. a Modia model can be stored in a dictionary that in turn is queried in another Modia model to select and use a submodel from this dictionary). Modia3D is designed to model 3D systems, initially only mechanical systems, but it shall be expanded into other domains in the future. It is implemented in Julia and utilizes ideas of multi-body programs and game engines. In the near future, Modia and Modia3D shall be closely integrated, e.g. using a Modia3D model in Modia or using Modia models in Modia3D. Up to now, Modia3D is implemented for functionality and not tuned for efficiency. Therefore, there are no benchmarks yet and in particular no comparison with Modelica models. For animation, the free community edition as well as the professional edition of the DLR Visualization library (Bellmann, 2009; Hellerer et al., 2014) are used. The overall goal is to apply the results of the Modia/Modia3D prototyping to the design of the next Modelica language generation.
• At the moment, the software architect has to decide when to terminate the process. Section 4.2 suggests that the process ends when the architect “is satisfied with the reconstructed architecture”. Kazman et al. point out that it is a general problem of architecture reconstruction processes that “there are no universal completion criteria” [KOV03]. This is also true for Archimetrix. For now, the architect's satisfaction with the architecture can only be based on a 'gut feeling' or simple metrics like 'the system contains no more bad smells'. Thus, the architect obviously runs the “risk of stopping too soon”, as Parnas puts it [Par11]. This situation could be improved by employing metrics to measure the modularisation quality of the system [SKR08]. In addition, design heuristics like the ones presented by Riel [Rie96] and by Cho et al. [CKK01] could be taken into account. For example, the architect could follow the process until a number of metrics reaches a certain threshold.
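Such a metric-based termination criterion could be sketched as follows (illustrative Python, not part of Archimetrix; the metric names and threshold values are hypothetical):

```python
# Hypothetical sketch: stop the reconstruction loop once every tracked
# metric has reached its target threshold. Metric names are made up.

def should_stop(metrics: dict, thresholds: dict) -> bool:
    """True when every tracked metric meets or exceeds its threshold."""
    return all(metrics[name] >= limit for name, limit in thresholds.items())

thresholds = {"modularisation_quality": 0.8, "smell_free_ratio": 1.0}

iteration_1 = {"modularisation_quality": 0.60, "smell_free_ratio": 0.7}
iteration_2 = {"modularisation_quality": 0.85, "smell_free_ratio": 1.0}

print(should_stop(iteration_1, thresholds))  # False: keep reconstructing
print(should_stop(iteration_2, thresholds))  # True: terminate the process
```

This replaces the 'gut feeling' with an explicit, repeatable stopping rule, though choosing meaningful thresholds remains a judgment call.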
Using Equinox-Jetty, a framework which supports the deployment of JSF-based dynamic modular web applications was successfully designed and realized. Its dynamic module handling capabilities were demonstrated in chapter 7, where modules in the form of OSGi bundles were swapped in and out of an application during runtime. However, user session state information contained in bundles is lost when a bundle is swapped out of an application. Therefore, a solution was required to preserve a user's session state when a bundle swap occurs, so that user information in a swapped-out bundle can be transferred to a newly swapped-in bundle. This problem was solved using the Serializable API, which demonstrated (in chapter 7) its capabilities of successfully preserving and restoring user session state during bundle swaps. According to the tests conducted in chapter 7, it was recommended that user session state should be preserved in an application's memory (RAM) so that the preservation and restoration process can be done faster as compared to storing the session state on disk. Writing the information to the disk incurred long delays, which contradicts an important requirement stating that the bundle swapping process should be done as fast as possible in order not to delay user requests for too long.
by communities, by the local administrations, non-government agencies and organizations (NGOs). Community-Based Tourism (CBT) is probably one of the oldest forms of tourism. A community involves persons who have a kind of collective responsibility and the capacity to make decisions through representative bodies. CBT is tourism in which the inhabitants of an area (mostly rural, poor and economically marginalized) invite tourists to visit their communities, supplying accommodation. The inhabitants earn incomes as land managers, entrepreneurs, service providers and product suppliers. At least part of the tourism receipts is put aside for projects that bring benefits to the community as a whole. CBT allows tourists to discover local habitats and wild fauna, and to observe and celebrate traditional cultures and rituals. The community becomes aware of the commercial and social value of its natural heritage through tourism, and this will favor the preservation of these resources. The community may choose to co-operate with a partner in the private sector to supply capital, customers, marketing, accommodation for tourists or other expertise. Under an agreement on supporting the development and preservation of the community and on the planning of tourism development in partnership with the community, this partner may or may not hold a share of the tourism organization.
Plasil and Visnovsky use another extension of finite state machines to represent the behavior specification [PV02]; a similar approach is used in other works (e.g., [BHJ09]), too. The representation is called behavioral protocols. This extension allows the use of two different parallel operators. The first operator “|” defines the and-parallelism, which results in the shuffle language of both participating processes. The second operator “||” describes the or-parallelism, where either both processes are performed via the and-parallelism or exactly one of the participants performs its interactions. Moreover, synchronization during interaction is possible. Adamek describes in [Ada06] an extension of this approach that allows an unbounded number of components. Each component is represented by a finite state machine, but an arbitrary number of components can communicate. The author suggests creating a behavior template at component design time. After the actual architecture is known (the complete component-based software can be considered), the number of components is chosen based on the level of parallelism in the concrete architecture. Thus, the number of parallel processes is bounded at verification time. [PP09] is an extension of our work [BZ08b] towards behavior protocols. However, this approach is not capable of dealing with (unbounded) recursion.
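To make the two operators concrete, the following illustrative Python sketch (not taken from [PV02]) enumerates the shuffle language of two finite traces, which underlies the and-parallelism “|”, and the corresponding or-parallel composition “||” as described above:

```python
from itertools import combinations

def shuffle(u: str, v: str) -> set:
    """All interleavings of u and v that preserve the internal order of
    each trace: the shuffle language produced by the '|' operator."""
    n = len(u) + len(v)
    result = set()
    for positions in combinations(range(n), len(u)):
        word = [None] * n
        it_u = iter(u)
        for p in positions:          # place u's symbols at the chosen slots
            word[p] = next(it_u)
        it_v = iter(v)
        for i in range(n):           # fill remaining slots with v, in order
            if word[i] is None:
                word[i] = next(it_v)
        result.add("".join(word))
    return result

def or_parallel(u: str, v: str) -> set:
    """'||' operator: either the shuffle of both traces, or exactly one
    of the two traces alone."""
    return shuffle(u, v) | {u, v}

print(sorted(shuffle("ab", "c")))    # ['abc', 'acb', 'cab']
```

Here each character stands for one observable interaction of a process; for traces "ab" and "c" the shuffle contains every way of inserting "c" while keeping "a" before "b".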
Next, additional modifications of the 66th formulation were performed. The 66th coating formulation was a good starting point for the development of further lipid-based coatings. Pellets coated with this coating showed an adequate modification of the drug release profile. The aim of the additional alterations of the 66th formulation was to obtain a lipid coating with a longer lag phase with close-to-zero API release in the hydrochloric acid medium (pH 1.2), and simultaneously a fast release at higher pH values (phosphate buffer, pH 6.8). As a result, three additional coatings (the 67th, 68th and 69th formulations) were obtained which showed the desired delayed, pH- and time-dependent release of the active substance. The longest lag phase was observed for pellets coated with the 69th coating dispersion, whereas the fastest release was demonstrated by the 67th formulation. Thereafter, pellets coated with the developed formulations were stored at room temperature for 10.5 weeks and subjected to dissolution tests. The release rates of stored pellets (Ft10.5w) were higher than the release rates obtained from pellets analyzed right after their coating (Ft0). However, during the 2 hours' incubation in HCl medium, the release profiles of all stored pellets did not differ much from the release profiles of Ft0 pellets. Subsequently, the differences increased. The dissolution profiles in phosphate buffer differed only slightly between Ft0 and Ft10.5w during the whole test period. After storage, the longest delay in the drug release was still demonstrated by the 69th formulation and the shortest by the 67th formulation. As the 69th coating dispersion showed very good properties and pellets coated with this formulation exhibited the highest stability, it was chosen for further examinations.
Principal component analysis is a suitable approach to reduce the dimension of input data by discarding trivial information where the data are to some extent correlated. In other words, PCA is mainly performed for dimensionality reduction, projecting high-dimensional data into a smaller space while keeping as much information as possible. Assuming a 2-dimensional scatter plot of centered data, PCA finds the best-fitting line by maximizing the sum of the squared (SS) distances from the projected points to the origin. Among the many candidate fitting lines, the line with the largest SS is recognized as PC1, which is a linear combination of the variables. The number of PCs in a dataset is equal to the number of variables, unless the sample number is smaller than the number of variables, in which case it is limited by the sample number. Mathematically, PCA builds on the concepts of eigenvalues and eigenvectors: for each PC, the sum of squared distances is the eigenvalue of that PC, and the singular value for the PC is computed as the square root of the eigenvalue. The eigenvalues and eigenvectors give the magnitude and direction, respectively, of the transformation carried out on the data matrix. Moreover, dividing the eigenvalue of a PC by the sample size minus 1 (i.e., n − 1) yields the variance along that PC; the proportion of variation explained by the PC is this variance relative to the total variance over all PCs. In this way, the contribution of each component can be determined, and the main components associated with higher variances can be selected for further use. On the other hand, the components with lower variances contribute little and do not capture much information, so they can be neglected. Therefore, for dimensionality reduction purposes, only the PCs with higher contributions or variances are employed in the modeling procedures. However, the applicability of PCA and the correlation among the data matrix should be examined before performing the analysis [30, 31].
Considering time series forecasting with inputs already decomposed by the wavelet transform, a high dimension of input data is expected, for which PCA can serve as a suitable proxy for dimensionality reduction of the input variables.
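The PCA steps described above can be sketched in a few lines of Python using an eigen-decomposition of the sample covariance matrix (a minimal illustration on made-up correlated 2-D data, not tied to any dataset in this work):

```python
import numpy as np

# Minimal PCA sketch: center the data, eigen-decompose the sample
# covariance, rank PCs by eigenvalue, and project onto PC1.
rng = np.random.default_rng(0)
n = 200
x = rng.normal(size=n)
# Two strongly correlated variables (toy data for illustration only).
data = np.column_stack([x, 2 * x + rng.normal(scale=0.3, size=n)])

centered = data - data.mean(axis=0)
cov = centered.T @ centered / (n - 1)        # sample covariance (SS / (n-1))
eigvals, eigvecs = np.linalg.eigh(cov)       # eigh returns ascending order
order = np.argsort(eigvals)[::-1]            # sort PCs by decreasing variance
eigvals, eigvecs = eigvals[order], eigvecs[:, order]

explained = eigvals / eigvals.sum()          # proportion of total variance
pc1_scores = centered @ eigvecs[:, 0]        # 1-D projection onto PC1

print(f"PC1 explains {explained[0]:.1%} of the variance")
```

Because the two toy variables are almost collinear, PC1 captures nearly all the variance, so keeping only `pc1_scores` halves the dimensionality with little information loss, which is exactly the use case for wavelet-decomposed forecasting inputs.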
rigid and flat floor. The plots on the left show the resulting measured and (adjusted) reference DCM trajectories. The adjusted DCM reference is smooth. After the perturbations, close-to-perfect DCM tracking is regained after a few steps. The momentum-based disturbance observer presented in Sec. III-C was also tested in OpenHRP simulations. Considering the perturbation force estimate, the controller survives external forces of up to 15% of the robot's weight (ramping up from zero within e.g. 6 seconds).
Previous approaches to PR recommendation focus either on predicting the likelihood of response or the likelihood of acceptance [14, 34, 40, 43]. However, approaches focusing exclusively on response likelihood fail to recognize PRs that are directly accepted and merged into the master branch without the need for discussion. For example, this may occur when contributors apply minor fixes or when the changes are performed to fix known issues. We empirically assessed that of a total of 278,418 PRs analyzed in our study, 33,231 PRs (i.e., 11.6%) were accepted without any discussion. Similarly, relying exclusively on acceptance may lead to misidentification of PRs that can be accepted after “shepherding”. For these reasons, we argue that PR recommendation strategies should be based on a more fine-grained classification of PRs: accept, respond, and reject. We define the three classes as follows: accept are PRs accepted without any discussion; respond are PRs accepted after discussion with the contributors; and finally, reject are PRs which have not been accepted. To automate the classification of PRs we propose an approach, called CARTESIAN (aCceptance And Response classificaTion-based requESt IdentificAtioN). From a practical point of view, CARTESIAN can be used by integrators to have PRs clustered/classified according to the aforementioned categories. In this way they can prioritize the PRs to address, depending on the actions they want to take. We note that CARTESIAN does not address the order in which PRs are presented.
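The three-class labelling defined above can be stated as a simple decision rule (a hedged Python sketch of the class definitions only, not the CARTESIAN classifier itself; the field names are hypothetical):

```python
# Ground-truth labelling rule for the three PR classes defined in the text.
# 'accepted' and 'num_discussion_comments' are hypothetical field names.

def label_pr(accepted: bool, num_discussion_comments: int) -> str:
    if accepted and num_discussion_comments == 0:
        return "accept"    # merged without any discussion
    if accepted:
        return "respond"   # merged after discussion ("shepherding")
    return "reject"        # not accepted

print(label_pr(True, 0))   # accept
print(label_pr(True, 5))   # respond
print(label_pr(False, 2))  # reject
```

This rule only assigns the ground-truth labels; the contribution of CARTESIAN is predicting these classes for new PRs before the outcome is known.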