The following describes how the prototypical transformation implemented for this thesis maps components to POJOs or EJBs. To illustrate how the model-2-text transformation generates code without showing generator templates, the following uses UML diagrams to give the structure of the generated code. As transformation language, QVT relations are used (Object Management Group (OMG), 2007a), which show the relationship between the PCM instance and the generated code as represented by UML diagrams. Notice that this only serves illustration purposes; the implementation does not use the QVT model-2-model transformations shown but generates the code directly. Furthermore, the depicted relations use the concrete syntax of both the PCM and the UML to ease understanding. The relationships have to be interpreted such that the source pattern on the left-hand side is matched as many times as possible. For every match, the target pattern is emitted by the transformation. The names in the figures represent placeholders. The placeholders are set according to the matched values in the source pattern and can be used in the target pattern. For more details see the QVT standard (Object Management Group (OMG), 2007a).
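The match-and-emit reading of such relations can be sketched operationally. The following is an illustrative sketch only, not the thesis' actual generator: it matches a source pattern against every element of a toy PCM instance and emits one instance of the target pattern per match, with the placeholder bound to the matched value. All names (`apply_relation`, the element dictionaries) are invented for illustration.

```python
# Sketch of the QVT-relation reading used in the figures: match the source
# pattern as often as possible, emit the target pattern once per match.
# All names here are illustrative, not from the thesis' implementation.

def apply_relation(source_model, source_pattern, target_template):
    """source_pattern: predicate selecting matching elements;
    target_template: function turning a match's bindings into target text."""
    emitted = []
    for element in source_model:                       # try every candidate
        if source_pattern(element):                    # source pattern matches
            emitted.append(target_template(element))   # emit target pattern
    return emitted

# Toy example: every PCM BasicComponent maps to a POJO class skeleton.
pcm_instance = [
    {"kind": "BasicComponent", "name": "Scheduler"},
    {"kind": "Interface", "name": "IScheduler"},
    {"kind": "BasicComponent", "name": "Cache"},
]

code = apply_relation(
    pcm_instance,
    source_pattern=lambda e: e["kind"] == "BasicComponent",
    target_template=lambda e: f"public class {e['name']} {{ }}",
)
assert code == ["public class Scheduler { }", "public class Cache { }"]
```

The placeholder (`name`) is bound by the source match and reused in the target pattern, exactly as in the depicted relations.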
the source consistency control condition for forward rules on the one hand and NAC-consistency together with termination of model transformations on the other hand? Furthermore, practical applications sometimes use plain graphs, which do not correspond to triple graphs according to , because morphisms between triple components cannot be defined accordingly. For instance, edges and attributes occur in the correspondence component but have no connection to source and target model elements; thus, the morphisms of the triple graph cannot be total. These constructions are used to store information about the history and dependencies of executed integration steps, and thus they do not directly belong to the integrated model and can be stored separately. In other examples, one node of the correspondence component may be connected to multiple nodes of the source resp. target component, which again does not correspond to triple graphs (e.g. [11, 13]). It remains unclear whether the theoretical results, including those of  and  for triple graph transformations, can be transferred to plain graph grammars that do not directly correspond to TGGs.
Triple graph grammars specify model transformations, but they do not directly solve the problem of how, given a source model (respectively, a target model), to build its forward transformation (respectively, its backward transformation). However, as we will see in Sections 2.1 and 2.2, we can derive from a triple graph grammar the associated operational rules that are used for this task. In particular, in Section 2.1, we present model transformation in terms of forward (and backward) transformation rules, describing the main results (Schürr and Klar 2008; Ehrig et al. 2009a). Then, in Section 2.2, we present a more elaborate kind of rule, called forward (and backward) translation rules, based on the notion of translation attributes. These rules are the basis for the analysis of functional behaviour and information preservation in Section 3.
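The derivation of operational rules from a triple rule can be sketched as follows. This is a minimal illustration under simplifying assumptions (monotone rules, elements given as plain name lists); the names `TripleRule`, `source_rule`, and `forward_rule` are invented for this sketch and do not come from a concrete TGG tool. The key point is standard: the forward rule assumes the source elements a triple rule would create as already present, so only the correspondence and target parts remain to be built.

```python
# Sketch: deriving source and forward operational rules from a triple rule.
# Names and representation are illustrative only.

from dataclasses import dataclass

@dataclass
class TripleRule:
    """A monotone triple rule: what it requires (LHS) and what it creates,
    split over the source (S), correspondence (C) and target (T) components."""
    requires: dict
    creates: dict

def source_rule(tr: TripleRule) -> TripleRule:
    """Project the triple rule to its source component: create only the
    source elements, ignore correspondence and target."""
    return TripleRule(
        requires={"S": tr.requires["S"], "C": [], "T": []},
        creates={"S": tr.creates["S"], "C": [], "T": []},
    )

def forward_rule(tr: TripleRule) -> TripleRule:
    """Derive the forward rule: the source elements the triple rule would
    create are assumed to exist already (moved from 'creates' to 'requires'),
    so the rule only builds the correspondence and target parts."""
    return TripleRule(
        requires={"S": tr.requires["S"] + tr.creates["S"],
                  "C": tr.requires["C"], "T": tr.requires["T"]},
        creates={"S": [], "C": tr.creates["C"], "T": tr.creates["T"]},
    )

# Example in the spirit of the classic CD2RDBM case study: Class2Table.
class2table = TripleRule(
    requires={"S": [], "C": [], "T": []},
    creates={"S": ["Class"], "C": ["C2T"], "T": ["Table"]},
)

fwd = forward_rule(class2table)
assert fwd.requires["S"] == ["Class"]   # the source is matched, not created
assert fwd.creates == {"S": [], "C": ["C2T"], "T": ["Table"]}
```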
Model transformations based on triple graph grammars (TGGs) have been applied in several practical case studies, and they are convincing due to their intuitive and descriptive way of specifying bidirectional model transformations. Moreover, fundamental properties have been extensively studied, including syntactical correctness, completeness, termination and functional behaviour. But up to now, it has been an open problem how domain specific properties that are valid for a source model can be preserved along model transformations such that the transformed properties are valid for the derived target model. In this paper, we analyse in the framework of TGGs how to propagate constraints from a source model to an integrated and target model such that, whenever the source model satisfies the source constraint, the integrated and target models also satisfy the corresponding integrated and target constraints. In our main new results we show under which conditions this is possible. The case study shows how this result is successfully applied for the propagation of security constraints in enterprise modelling between business and IT models.
Figure 5.21: Example structure based mapping: flattening hierarchical activities

The HR grammar based language descriptions used in our approach are, in general, more restrictive and complex than pure meta-model based ones (with structuring classes). However, when compared to meta-models with OCL constraints of comparable expressiveness, which enforce (where possible) the same structural well-formedness constraints on, e.g., activity diagrams (see the end of Section 3.1 for examples), HR grammars typed over meta-models present a more concise, intuitive (possibly with concrete syntax), and powerful way to describe such structural constraints. Like meta-models, they only need to be created once per language and can then be re-used by any transformation developer who employs our approach. The quality and complexity of an HR grammar can affect the characteristics of the grammar based model transformations using it, but this is also the case with meta-models and the model transformations defined based on them.
The design of the control flow constructs is intentionally different from those in UML activity charts, even if the basic ideas are comparable. In contrast to UML activity charts, the RD-SEFF's control flow constructs use a representation similar to the abstract syntax trees of structured programming languages like Java. For example, a loop is not modelled by a control flow reference pointing at an action already executed earlier. It is modelled by a loop action which explicitly contains a sequence of actions representing the loop body. After repeating the inner behaviour n times, the course of actions continues at the successor of the loop action. The same is true for branch actions, forks, etc. The rationale behind this kind of modelling is the avoidance of ambiguities which can arise when analysing models whose control flow is based on arbitrary graphs, like UML activity diagrams. Additionally, making nested behaviours explicit eases the handling of model instances in both types of model transformations: transformations into analysis models as well as transformations into implementations. The reason for this is that there is no need for the transformations to figure out the start and end of inner behaviours. Additionally, performance annotations like iteration counts can directly annotate the corresponding control flow actions. As a consequence of the explicit modelling of nested behaviours, each behaviour is a chain of actions going directly from the (only) start action to the (only) stop action.
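The nested, AST-like structure can be sketched in a few lines. This is an illustrative data-structure sketch, not the PCM meta-model itself; the class names (`InternalAction`, `LoopAction`) and the `simulate` walker are invented for the example. The point it demonstrates is the one made above: because a loop action contains its body, a traversal never has to discover where an inner behaviour starts or ends.

```python
# Sketch of an AST-like, explicitly nested control flow (names illustrative).

from dataclasses import dataclass
from typing import List, Union

@dataclass
class InternalAction:
    name: str

@dataclass
class LoopAction:
    iterations: int   # performance annotation sits directly on the action
    body: List        # the inner behaviour, explicitly contained

def simulate(behaviour):
    """Walk the chain from start to stop; the explicit nesting makes this a
    plain recursion, with no graph analysis of back-edges needed."""
    trace = []
    for action in behaviour:
        if isinstance(action, LoopAction):
            for _ in range(action.iterations):
                trace.extend(simulate(action.body))  # inner behaviour is explicit
        else:
            trace.append(action.name)
    return trace

seff = [InternalAction("init"),
        LoopAction(iterations=2, body=[InternalAction("work")]),
        InternalAction("cleanup")]
assert simulate(seff) == ["init", "work", "work", "cleanup"]
```

A graph-based encoding of the same loop would instead need a back-edge from "work" to a decision node, forcing every transformation to recover the loop's extent by graph analysis.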
To tackle this scenario, this thesis presents a new application of the model driven architecture (MDA), which transforms platform independent model transformation specifications (PIM-MT) to platform specific model transformation specifications (PSM-MT) by higher order transformations (HOT). For industrial usage, both the platform independent transformation specification and the platform specific execution reuse proven existing technology, which is tailored and extended where needed. This allows for the stepwise introduction of model transformation technology in existing engineering and technology environments, based on a classification scheme which was developed as part of this thesis. For the PIM-MT specification, the strict handling of references between engineering model elements in current model transformation specifications, which does not fit well the requirements of engineering models with temporarily violated references within the engineering workflow, was replaced by a weaker reference handling based on domain specific reference designators. An existing model transformation language, ATL, has been tailored for PIM-MT specifications. For the PSM-MT desktop execution, the ATL desktop model transformation engine was reused. XSL transformations were adapted for enterprise model transformations executed on PLM servers. A PSM-MT engine for real-time IEC 61131 programmable logic controllers was developed as part of this thesis.
The paper is structured as follows: After introducing our case study for refactoring and model transformation in Section 2, we consider the notion of consistency of a model transformation step and a refactoring step in Section 3, where the steps are defined as single rule applications of the respective graph rules to a model state. In Section 4, we extend this basis to sequences of rule applications and state our main result for the consistent evolution of model transformations. We give an overview of extensions of our main results in Section 5, and look into some further refactorings in Section 6. Section 7 compares our approach to related work, and in Section 8 we conclude the paper with an outlook on future work.
A first approach to analyze functional behaviour for model transformations based on TGGs was already given in  for triple rules with distinguished kernel typing. This strong restriction requires, e.g., that there is no pair of triple rules handling the same source node type, which is, however, not the case for the first two rules in our case study CD2RDBM. The close relationship between model transformations based on TGGs and those based on plain graph transformations is discussed in , but without considering the special control condition of source consistency. The treatment of source consistency based on translation attributes is one contribution of this paper towards analyzing functional behaviour. As explained in Sec. 3, additional NACs are not sufficient to obtain this result. Functional behaviour for a case study on model transformations based on plain graphs has already been studied in , also using critical pair analysis in order to show local confluence. But the additional main advantage of our TGG approach in this paper is that we can transfer the strong results concerning termination, correctness and completeness from previous TGG papers [4, 5] based on source consistency to our approach in Thm. 2, by integrating the control structure source consistency into the analysis of functional behaviour. Finally, there is a strong relationship with the model transformation algorithm in , which provides a control mechanism for model transformations based on TGGs by keeping track of the elements that have been translated so far. In  we formalized the notion of elements that are translated at a current step by so-called effective elements. In this paper we have shown that the new translation attributes can be used to automatically keep track of the elements that have been translated so far.
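The bookkeeping idea behind translation attributes can be sketched as follows. This is a hedged illustration, not the paper's formal construction: every source element carries a Boolean flag recording whether it has been translated, and a forward translation rule may only match elements whose flag is still false, flipping it to true on application. The function and rule names are invented; the rules here are plain string rewrites standing in for graph rules.

```python
# Sketch of translation attributes: a Boolean flag per source element tracks
# translation progress; rules only match untranslated elements. Names are
# illustrative, the "rules" are toy string rewrites.

def translate(source_elements, rules):
    """Apply forward translation rules until no untranslated element matches.
    Source consistency here amounts to every source element being translated
    exactly once; 'complete' reports whether that was achieved."""
    translated = {e: False for e in source_elements}  # the translation attributes
    target = []
    progress = True
    while progress:
        progress = False
        for elem, done in translated.items():
            if done:
                continue                       # translated elements are blocked
            for matches, build in rules:
                if matches(elem):
                    translated[elem] = True    # flip the translation attribute
                    target.append(build(elem))
                    progress = True
                    break
    complete = all(translated.values())        # everything translated?
    return target, complete

# Toy rule in the spirit of CD2RDBM: a class becomes a table.
rules = [(lambda e: e.startswith("Class:"),
          lambda e: e.replace("Class:", "Table:"))]
target, complete = translate(["Class:Person", "Class:Account"], rules)
assert target == ["Table:Person", "Table:Account"] and complete
```

Untranslated leftovers (i.e. `complete` being false) correspond to a run that is not source consistent, which is exactly what the attribute-based control condition rules out.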
Fig. 4. Banking Example Transformation (UML Object Diagrams)
2.3 Tool Evaluation
Henshin provides two editors to define transformation models and a runtime component to process them. Further, it offers analysis tools such as state space analysis for verification and critical pair analysis (explained in Section 3). Note that the evaluations regarding Henshin expressed in this paper are based on experiences we have gained during the experiments with the example in Section 2.2 and especially with the implementation of the DSL transformation in Section 4.3. The creation of transformation rules and units is simple with the provided tree-based and graphical editors, as soon as one understands the Henshin transformation meta-model (Fig. 1) and the used Ecore input meta-model is not too complex. A screenshot of the graphical editor of Henshin has already been shown in Fig. 3. However, as soon as the used meta-model becomes complex and aggregates multiple Ecore models, the Henshin Eclipse tools become challenging to handle. For example, the DSL-based example which will be explained in Section 4.3 is based on two Ecore models. The user interface (UI) of the tree-based editor cannot handle this, and certain changes had to be made in the XML manually. Nevertheless, those are basically just usability issues. The transformation engine works very well once the transformation rules are defined. Using the engine (runtime component) via the Java API provides better feedback than the Eclipse UI, since the user gets exceptions and thus helpful information if something goes wrong.
A prerequisite of our repair algorithm is the existence of a model transformation system which configures the algorithm w.r.t. a given modeling language. The algorithm is configured with different sets of repair rules in such a way that at any point of the algorithm where choices have to be made, they can be random or interactive. This applies to choices of rules (repair actions), matches (model locations or targets), and attribute values. In other words, identifying the applicable rules and their matches aims at finding the proper repair actions and matches w.r.t. the algorithm steps and the given model state. For example, Step 1 uses a kind of rule which creates a node together with its containment edge in an existing node if the lower bound of its containment type is not yet reached. Step 1 executes the rules as long as possible and stops once there is no applicable rule. At the end, all the required nodes with all their required child nodes have been created. Please note that the different kinds of repair rules are defined to be generic in the sense that they can be used to manage (fulfill) any lower or upper bound. Additionally, they consider the EMF model constraints. In other words, each successful application of a rule enhances (or at least preserves) the model consistency. In the following, we present the different kinds of repair rules. Since these rules have to be derived from each given meta-model, we present their meta-patterns and how to derive them. Thereafter, we present the algorithm steps as transformation units configured with the derived repair rules.
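The apply-as-long-as-possible behaviour of the Step-1 rules can be sketched as a fixpoint loop. This is an illustrative sketch under simplifying assumptions (a model as nested dictionaries, only lower bounds of containments, fresh children named by a counter); all names are hypothetical and do not come from the paper's rule set.

```python
# Sketch of Step 1: for every containment whose lower bound is not yet
# reached, create a node with its containment edge in the existing node,
# and stop once no rule is applicable. All names are illustrative.

def repair_lower_bounds(model, containments):
    """model: {node: {containment_type: [children]}};
    containments: {containment_type: (lower_bound, child_type)}."""
    applied = True
    while applied:                     # execute the rules as long as possible
        applied = False
        for node, slots in model.items():
            for ctype, (lower, child_type) in containments.items():
                children = slots.setdefault(ctype, [])
                if len(children) < lower:                 # lower bound violated
                    children.append(f"{child_type}#{len(children)}")
                    applied = True                        # a rule was applicable
    return model

# Toy meta-model: a state machine must contain at least one initial state
# and at least two states.
model = {"sm": {}}
containments = {"initial": (1, "Initial"), "states": (2, "State")}
repaired = repair_lower_bounds(model, containments)
assert len(repaired["sm"]["initial"]) == 1
assert len(repaired["sm"]["states"]) == 2
```

Each iteration of the loop either applies a rule (creating one required child) or terminates, mirroring the property that every successful rule application preserves or improves consistency.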
MDE process and model transformation as the key technique to adapt them for various purposes. Developers with domain knowledge can focus on the problem domain rather than on implementation details. Due to a high abstraction level, models may be more easily built, understood, maintained and analyzed [59] than implementations. Technical experts can specify model transformations for repetitive tasks, generating different artifacts such as code for different runtime environments. This also affects the quality of the code. Generated code is homogeneous due to the model-to-text transformation and often of good quality, as transformations are usually specified by experienced developers. Furthermore, models can help to detect problems in a software system early through validation and testing. In part, models are also used to ensure invariants of software systems by translating them into formal problem descriptions that can be verified by model checking tools. In some application domains, such as in the field of embedded systems, these techniques have enhanced productivity, reusability and quality [67].
Corresponding to the bottom layer of the PROGRES specification, the host graph is also sectioned into two main parts. One part is used to store the domain ontology, a second part to store design rules. Both parts are developed at tool runtime and are specific to one class of buildings, e.g. car garages. Looking at the bottom of Figure 3, this separation is represented by two boxes. The ontology elements in the host graph, depicted in the left box, are instances of PROGRES node types defined in the domain ontology model package. This part follows the common, static PROGRES instantiation mechanism; its consistency is ensured by the PROGRES runtime environment. The design rule elements, depicted in the lower right box, are also instances of PROGRES node types, which are defined in the design rule model package. Up to here, these elements are still unspecific placeholders for storing design rule elements in the host graph. The assignment of design rules to the previously defined, domain specific concepts is established by way of associating ontology elements to design rule elements.
Within the scope of this work, solution-mediated phase transformations in systems with and without additives have been carried out, and the ultrasonic measuring technique has proven to be suitable for monitoring phase transformations of organic as well as inorganic substances. However, it has been demonstrated that a reliable prediction of induction times for phase transformations might not be possible, even with so-called “pure solutions” and limited to one substance, due to inevitably present by-products. A prediction of induction times according to the rule of thumb that nucleation of the stable phase only starts spontaneously if the solubility curve of the metastable phase is situated outside the metastable zone of the stable phase also did not lead to satisfactory results for all substances. Furthermore, although the extension, and therewith the control, of induction times by the use of additives has been successfully demonstrated, it has also been shown, possibly due to the interaction with inevitably present by-products, that the effect of additives often might be too complex to be predicted.
Based on the development of the nondiagonal phase field model in Section 2.2.4, two-dimensional simulations of solidification of a pure substance with elimination of all the abnormal interface effects have been performed. We point out that the quantitative modeling of dendritic solidification is considered to be a “gold standard” for the qualification of modeling descriptions, due to the advanced understanding of this microstructure and the availability of high precision Green's function descriptions. Therefore, such Green's function calculations have been performed to benchmark the non-diagonal phase field simulations in the symmetric and one-sided limits. As detailed in [98], the non-diagonal phase field simulations reproduce the Green's function results in a satisfactory manner, with less than 5% deviation. For two-sided cases, the phase field simulations get closer to the analytical theory as the undercooling decreases. Besides, the robustness of the phase field results is supported by two other sets of simulations in which (i) the kinetic cross-coupling vanishes, and (ii) the surface diffusion remains. The deviation approaches 20% when the kinetic cross-coupling vanishes, while it reaches 10% when the surface diffusion remains. We refer to [98] for further details of the comparison. The comparison emphasizes the necessity of using the non-diagonal phase field model with elimination of surface diffusion. Therefore, the non-diagonal phase field model provides a quantitative basis to reproduce the microstructure evolution process during bainitic transformation with full consideration of the diffusion difference between the austenite and ferrite phases. We note that, in contrast to solidification situations, where often the diffusional transport in one phase can be neglected or the transport can be assumed to be the same in both phases (symmetric model for thermal dendrites), solid state transformations are often in between these limiting cases.
Previously, quantitative phase field descriptions with controlled thin interface asymptotics had not been available, and the present extension towards a non-diagonal model makes it possible to capture the unequal diffusional transport during bainitic press hardening processes quantitatively.
phase. Comparing Figure 6(f) with Figure 5(c), which both show the evolution of the martensitic phase for time step t = 120, reveals that the martensitic plate grows slightly faster in purely elastic material. The plastic zone seems to constrain the martensitic transformation. This result is reasonable, since the slip additionally dissipates elastic energy, which is then not available for the phase transition. Energetic considerations for this model are carried out in detail in Schmitt et al. (2013a,b).
especially fruitful in connection with institutional change approaches. But they also demonstrate important differences between actual transformations in Eastern Europe and systemic changes that took place in other regions and in other times. The changes in Eastern Europe and the FSU are more far-reaching than, let’s say, in southern Europe in the 70s. Every transformation is comparable with others, but each of them also shows its specific features. In the case of the FSU, simultaneous changes in the political, economic, social and cultural regulation are going on. This observation is the reason for stressing the importance of path dependencies – and their description – in the case of the FSU, which can and must be described in the context of institutional change approaches.