Metamodels and Transformations for Software and Data Integration

The taxonomy presented in the previous section obviously cannot aim to be complete. It represents an excerpt of properties we found to be relevant in the context of software integration. In order to address the dynamic generation of additional properties, we propose a property metamodel (PMM) for expressing user-defined NFP and assigning them to other model elements. The abstract syntax of the PMM is used to build non-functional property taxonomies, like the one in the previous section, enabling their usage in model-driven development environments. In the following we use the typewriter font to refer to metamodel classes and attributes, and describe examples on instance level in italic font. Figure 48 shows the basic structural features of the metamodel. Properties, units and unit multiples are modeled separately and are grouped into categories. The categories and their elements together form the PropertyModel. We also define the NamedElement metaclass (not shown in the figure) with the attributes +shortName:String[1] and +longName:String[1], which are passed on to the following metaclasses: PropertyModel, Category, Property, SimpleUnit, UnitMultiple, EnumerationUnit and EnumerationLiteral. The short name expresses an abbreviation, the long name is the full name of an element instance (e.g., for a Property: MTTF, Mean Time To Failure).
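A minimal sketch of this structure as plain Java classes, assuming the class and attribute names from the text; the containment relations and constructors are illustrative assumptions, not the PMM's exact definition:

```java
import java.util.ArrayList;
import java.util.List;

// NamedElement passes shortName/longName on to the other metaclasses.
abstract class NamedElement {
    final String shortName; // abbreviation, e.g. "MTTF"
    final String longName;  // full name, e.g. "Mean Time To Failure"
    NamedElement(String shortName, String longName) {
        this.shortName = shortName;
        this.longName = longName;
    }
}

class Property extends NamedElement {
    Property(String s, String l) { super(s, l); }
}

class SimpleUnit extends NamedElement {
    SimpleUnit(String s, String l) { super(s, l); }
}

// Properties and units are modeled separately and grouped into categories.
class Category extends NamedElement {
    final List<Property> properties = new ArrayList<>();
    final List<SimpleUnit> units = new ArrayList<>();
    Category(String s, String l) { super(s, l); }
}

// The categories and their elements together form the PropertyModel.
class PropertyModel extends NamedElement {
    final List<Category> categories = new ArrayList<>();
    PropertyModel(String s, String l) { super(s, l); }
}
// Instance-level example: new Property("MTTF", "Mean Time To Failure")
```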

Model-based Semantic Conflict Analysis for Software- and Data-Integration Scenarios

To further support automated detection of interface conflicts, we propose a semantic description of the integration scenario models at all levels of abstraction. In section 1.1.5 the ontology metamodel was introduced. The ontology contains knowledge of the integration domain that is shared among different integration models or integration projects. It can be refined and extended during the projects. The association of heterogeneous artifacts such as documents, service interfaces, business processes, web resources and models with semantic concepts is generally called semantic annotation [34][20]. In the context of BIZYCLE integration, semantic annotation is the process of creating relationships between the integration models at CIM, PIM and PSM level and the ontology concepts. At meta-level, an annotation metamodel (AMM) was developed, linking all other metamodels within the BIZYCLE project to the semantic metamodel and also supporting the integration of existing external platform-specific metamodels by adding associations between appropriate metaclasses. The advantage of the intermediate metamodel is that all instantiated models remain independent of the semantic metamodel. Changes to models or even metamodels are managed at a central point. This approach is similar to model weaving [23].
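The intermediate-model idea can be sketched as follows; all names here (Annotation, ModelElementRef, ConceptRef, AnnotationModel) are hypothetical illustrations, not the AMM's actual metaclasses:

```java
import java.util.ArrayList;
import java.util.List;

// References into a model and into the ontology, by URI and element id.
record ModelElementRef(String modelUri, String elementId) {}
record ConceptRef(String ontologyUri, String conceptId) {}

// One annotation links a model element to an ontology concept.
record Annotation(ModelElementRef element, ConceptRef concept) {}

// The intermediate model holds all links, so neither the integration
// models nor the ontology reference each other directly; changes are
// managed at this central point.
class AnnotationModel {
    final List<Annotation> annotations = new ArrayList<>();
    void annotate(ModelElementRef e, ConceptRef c) {
        annotations.add(new Annotation(e, c));
    }
}
```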

BRIDGE: a bioinformatics software platform for the integration of heterogeneous data from genomic explorations

With the open-source O2DBI-II toolkit, data can be stored efficiently, and at the same time the system provides an API that allows easy access to all information as well as individual extensions of the auto-generated classes. Consequently, using an object-oriented approach for the design of all data structures (e.g. by creating class hierarchies and employing inheritance) allows rapid development and enhances the modularity and usability of all components. Nevertheless, additional mechanisms are needed to allow interaction and communication between different components. For example, the user might request experimental transcriptomics data for a selected gene while she/he is assigning functional classifications in a genome annotation system. In this case, immediate interaction, response, and possibly some kind of visualization is needed to provide the user with the requested information. Since user interfaces that allow the integration of different data sources are highly dynamic, customizable views and dynamic visualizations are needed that represent exactly the desired information. It is also important that the different integrated graphical user frontends provide a common look and feel in which the user can easily find what she/he is looking for. For standard applications, the various existing GUI toolkits (Tk, Gtk, Swing, Qt) support different mechanisms (either callbacks or signals and slots) that allow the interaction of graphical elements (widgets).
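As a brief illustration of the callback mechanism mentioned above (using Swing, one of the listed toolkits): a widget registers a callback that the toolkit invokes on user interaction. The data-fetching step is only indicated by a placeholder comment:

```java
import javax.swing.JButton;
import javax.swing.JFrame;

public class CallbackDemo {
    public static void main(String[] args) {
        JFrame frame = new JFrame("Gene view");
        JButton button = new JButton("Show expression data");
        // Callback: run by the toolkit when the user clicks the button;
        // here it would, e.g., request transcriptomics data for the
        // currently selected gene from another component.
        button.addActionListener(e ->
            System.out.println("fetch expression data for selected gene"));
        frame.add(button);
        frame.pack();
        frame.setVisible(true);
    }
}
```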

Incremental Integration and Static Analysis of Model-Based Automotive Software Artifacts

The algorithm has been implemented within the artshop framework and integrated into the slicing algorithm. The model representation of the MATLAB/Simulink tool adapter (see Section 3.3.1) is used to access MATLAB/Stateflow elements. To extract information about assignments and references of states and transitions, we first implemented a parser capable of extracting relevant code sections from the labels attached to these elements. We use a built-in MATLAB function to create an abstract syntax tree (AST) for the extracted code sections, e.g. guards and actions for transition labels and state operations for state labels. The AST is then used to extract variable assignments and references from the respective code sections. We further provide a wrapper that maps a given inport/outport to the respective input/output variable, performs the dependency analysis shown in Algorithm 2 and maps the variables of the result set back to their respective inport/outport. We evaluated the impact of the algorithm by calculating the inport-outport relationships for 10 Stateflow statecharts taken from industrial and academic models. In total, the analyzed Stateflow models contain 39 outports and 61 inports. For each outport we calculated all inports that the port depends on, and vice versa for each inport. The data dependencies obtained during slicing can be reduced whenever the algorithm determines that an outport depends only on a subset of the inports, and vice versa. This was the case for 5 of the 10 analyzed statecharts. In forward direction, the algorithm determined that 21 of the 61 inports do not influence all outports of their respective Stateflow statecharts, leading to an average weighted improvement of the data dependency relation between the in- and outports of the analyzed statecharts of 17.47%.
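An illustrative sketch of the wrapper's core step (not the thesis' Algorithm 2): given per-variable assignment dependencies extracted from the AST, compute which input variables an output variable transitively depends on; ports would then be mapped to and from these variables:

```java
import java.util.ArrayDeque;
import java.util.Deque;
import java.util.HashMap;
import java.util.HashSet;
import java.util.Map;
import java.util.Set;

class PortDependencies {
    // dependsOn.get(v) = variables read whenever v is assigned,
    // as extracted from guards, actions and state operations.
    final Map<String, Set<String>> dependsOn = new HashMap<>();

    // Transitive closure from one output variable back to the inputs.
    Set<String> transitiveInputs(String outputVar, Set<String> inputVars) {
        Set<String> seen = new HashSet<>();
        Set<String> result = new HashSet<>();
        Deque<String> work = new ArrayDeque<>();
        work.push(outputVar);
        while (!work.isEmpty()) {
            String v = work.pop();
            if (!seen.add(v)) continue;          // already visited
            if (inputVars.contains(v)) result.add(v);
            for (String dep : dependsOn.getOrDefault(v, Set.of()))
                work.push(dep);
        }
        return result; // subset of inputs => data dependencies can shrink
    }
}
```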

Incremental Integration and Static Analysis of Model-Based Automotive Software Artifacts

In this thesis, we present a method for the incremental integration and static analysis of model-based software artifacts comprising the extraction, storage, analysis and evolution of model data. The proposed incremental integration approach allows the conversion of supported artifacts into a well-defined representation and subsequent storage in a model repository, enabling seamless access to stored artifacts as well as synchronization with changes made to their source models. We further propose multiple static analysis techniques for MATLAB/Simulink models, a prevalent model-based software artifact in automotive software development. These analyses support various activities during different stages of a model-based development process. We present a signal reconstruction and slicing algorithm that supports debugging, testing and exploration activities for MATLAB/Simulink models. A clone detection procedure allows the automatic identification of cloned model fragments and their subsequent controlled reuse by refactoring into generic library blocks. Further quality and design defects are detected by a model smell analysis, identifying anti-patterns that negatively influence quality properties of MATLAB/Simulink models. Furthermore, we propose an inter-artifact consistency analysis targeting traceability links between artifacts of a product line and its accompanying variability documentation. All proposed techniques are realized in the form of an integrated software framework called artshop.

Data-driven transformations in small area estimation

Small area models typically depend on the validity of model assumptions. For example, a commonly used version of the Empirical Best Predictor relies on the Gaussian assumptions of the error terms of the linear mixed model, a feature rarely observed in applications with real data. The present paper proposes to tackle the potential lack of validity of the model assumptions by using data-driven scaled transformations as opposed to ad-hoc chosen transformations. Different types of transformations are explored, the estimation of the transformation parameters is studied in detail under a linear mixed model, and transformations are used in small area prediction of linear and non-linear parameters. The use of scaled transformations is crucial as it allows fitting the linear mixed model with standard software and hence simplifies the work of the data analyst. Mean squared error estimation that accounts for the uncertainty due to the estimation of the transformation parameters is explored using parametric and semi-parametric (wild) bootstrap. The proposed methods are illustrated using real survey and census data for estimating income deprivation parameters for municipalities in the Mexican state of Guerrero. Extensive simulation studies and the results from the application show that using carefully selected, data-driven transformations can improve small area estimation.
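The abstract does not name a specific transformation family; a standard data-driven example in this literature is the Box-Cox transform, whose parameter is estimated from the data rather than chosen ad hoc:

```latex
% Box-Cox transformation of a response y > 0 with parameter \lambda,
% typically estimated by maximizing the (restricted) profile likelihood
% of the linear mixed model under the transformed response:
\[
  y^{(\lambda)} =
  \begin{cases}
    \dfrac{y^{\lambda} - 1}{\lambda}, & \lambda \neq 0,\\[4pt]
    \log(y), & \lambda = 0.
  \end{cases}
\]
```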

Data Integration for Future Medicine (DIFUTURE): An Architectural and Methodological Overview

All incoming data will be pooled in the DIC's data lake, which is a staging area and working environment for data that is an (almost) exact copy of the data extracted from the source systems [16]. From here, data is further processed for downstream utilization by use of a clearly defined set of services. This architecture is aligned with strategic goals of the DIFUTURE concept. First, we emphasize that we generally aim to load only data into the data lake which is relevant to our internal, intra-consortium and trans-consortia use cases. This means that the data lakes will evolve in a gradual manner, which ensures that data harmonization remains manageable. Second, data provenance and data quality can be analyzed and documented from the beginning, i.e. before data has undergone significant transformations or has been aggregated for further purposes. Third, the DIFUTURE concept provides a blueprint for the technical properties of the environment. We hope to avoid duplicate efforts by exchanging and re-using code and data processing workflows, which will be packaged into containers [10, 11]. Sharing will be implemented through common repositories and registries. Selected components for semantic harmonization …

A change metamodel for the evolution of MOF-based metamodels

In a metamodeling infrastructure which allows the comparison of different versions of the same metamodel, the classification presented in this paper could be used to automatically determine the impact of a change to a certain metamodel on existing model data. This would require an automatic calculation of a change metamodel instance, either from two versions of a metamodel or from the current editing process of a persisted metamodel. The model editing tool could then determine the severity of the changes, enabling the metamodel editor to decide about changes to the metamodel with regard to the projected impact on existing data as well as interface compatibility of the generated software. The implementation of such guidance for model editing tools remains subject to future work. For users of the modeling tools, the formal description of metamodel evolution changes would ease the migration of their existing model data to new versions of that software. The incorporation of techniques alleviating metamodel evolution into modeling infrastructures would allow for faster adaptation of modeling tools to user needs.
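A hypothetical sketch of the envisioned guidance: mapping change kinds from a change-metamodel instance to a severity regarding existing model data. The change kinds and severity levels are illustrative assumptions, not the paper's classification:

```java
// Severity of a metamodel change with respect to existing model data.
enum Severity { NON_BREAKING, MIGRATABLE, BREAKING }

// Illustrative change kinds a change-metamodel instance might record.
enum ChangeKind { ADD_OPTIONAL_ATTRIBUTE, RENAME_CLASS, DELETE_CLASS }

class ChangeClassifier {
    static Severity classify(ChangeKind kind) {
        return switch (kind) {
            case ADD_OPTIONAL_ATTRIBUTE -> Severity.NON_BREAKING; // old data still valid
            case RENAME_CLASS           -> Severity.MIGRATABLE;   // data migratable mechanically
            case DELETE_CLASS           -> Severity.BREAKING;     // instances lose their type
        };
    }
}
```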

Towards enterprise application integration principles for facility management software in hospitals

Currently, hospitals often have up to 100 software applications installed within the non-medical support services (FM). The applications are mostly uncoordinated and only accessible to employees of certain sub-areas. Therefore, data is often stored redundantly, has to be transferred manually, and some stakeholders are unaware of data locations and/or cannot access the data. In order to become more efficient, to save resources and to increase (data) quality, hospitals are now forced to find solutions in their processes and IT architecture, in the area of FM as well. Enterprise Application Integration (EAI) is an approach which has been applied to overcome this problem in other industries and partially also in the medical context in hospitals, but so far not including FM. During the latest research in non-medical support services in hospitals, connections and information needs between the different FM disciplines in hospitals became clearer. Extending these findings with the EAI principles in an explorative approach, this paper presents the basis for a future systematic integration of FM applications in hospitals.

Visual analysis of multi-dimensional metamodels for manufacturing processes

In this chapter, we presented the first important aspect for guaranteeing a high level of interactivity for memoSlice: its interaction concept. We started by illustrating the foundations of its realization. These comprise ViSTA Widgets, the framework we used to implement the interaction concept, and two menu systems, of which EPMs were eventually chosen as the menu system for memoSlice. Finally, we detailed the interaction concept of memoSlice to clarify its capabilities for interactive data exploration. By first developing novel interaction approaches and then combining them with the CMV design of memoSlice in a comprehensible manner, we could realize a visualization solution which makes the complex multidimensional nature of metamodels controllable and perceivable for users. Furthermore, the employed interaction concept allows this not only in classic desktop settings, but also in IVEs. While we strongly focused on aspects of software architecture and HCI in this chapter, we have not yet answered questions regarding performance and interactivity. Nonetheless, these aspects are vital for users to profit from the approaches presented so far. To close this gap, we discuss them next, before finally concluding on the benefits users gain from memoSlice as a whole by means of case studies.

Software system integration - Middleware - an overview

Keywords: Middleware, SOA, SOAP, Web Services

1. INTRODUCTION

Software system integration is essential where communication between different applications running on different platforms is needed. Suppose a payroll system runs alongside a human resource system; in that case, employee data needs to be entered into both systems. System integration is of great benefit in such cases, where data and services need to be shared. Web services [1] are becoming very popular for sharing data between systems over the network, and over the internet as well. In the software industry, software integration follows the same steps as software development and hence demands the same kind of development procedures and testing [2]. This ensures meaningful and clear communication between the systems. Systems integration becomes inevitable in enterprise systems, where the whole organization needs to share data and services and the user should get the feel of one system [3]. The core purpose of integration is to make the systems communicate and also to make the whole system flexible and expandable.

The Use of Data-driven Transformations and Their Applicability in Small Area Estimation

… classical and linear mixed regression models instead of developing new theories, applying complex methods or extending software functions. Nevertheless, transformations are often applied automatically and routinely, without considering different aspects of their utility. For instance, a standard practice in applied work is to transform the target variable by computing its logarithm. However, this type of transformation does not adjust to the underlying data. Therefore, some research effort has shifted towards alternative data-driven transformations, which include a transformation parameter that adjusts to the data. The main contributions of this thesis focus on providing modeling guidelines for practitioners on transformations and on the methodological and practical development of the use of transformations in the context of small area estimation. The proposed approaches are complemented by the development of open source software packages, which aims to close possible gaps between theory and practice. This paper is structured into three parts. In part I (section 2), some modeling guidelines for data analysts in the context of data-driven transformations are presented; this summarizes the papers by Medina and others (2019) and Medina (2017). In part II (section 3), transformations in the context of small area estimation are applied and further developed, based on the papers by Rojas-Perilla and others (2020), Kreutzmann and others (2019) and Tzavidis and others (2018). Finally, part III (section 4) presents a discussion of the applicability of transformations in the context of generalized linear models. The publications listed below are the result of this overview.
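The contrast between a fixed and a data-driven transformation can be made concrete; the log-shift form below is one example from this literature, shown here only as an illustration of the idea of an estimated transformation parameter:

```latex
% Fixed log transform vs. a data-driven variant with shift parameter s,
% where s is estimated from the data rather than chosen ad hoc:
\[
  T_{\log}(y) = \log(y)
  \qquad\text{vs.}\qquad
  T_{s}(y) = \log(y + \hat{s}),
  \quad \hat{s} \text{ estimated from the data.}
\]
```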

An MpCCI-based Software Integration Environment for Hypersonic Fluid-Structure Problems

EnSight provides an easy, CAD-like user interface for data visualization and post-processing. EnSight uses OpenGL graphics when run interactively and Mesa software rendering when run as a batch process. Modifying the visualization scene requires no programming or even pseudo-programming skills to operate and produce images, animations, plots, or explorations of the data. Everything done interactively in EnSight is recorded in a journal file, called a »command file«, for easy replay, modification, or building of macros. EnSight can export to standard image and animation or movie formats such as TIFF, JPG, AVI, MPEG and MPEG2. In addition, CEI provides some unique formats to support additional features and cross-platform compatibility. The CEI EnVideo movie format (file extension .evo) is cross-platform, provides high-quality graphics using less memory than uncompressed AVI, and supports stereo playback. EnVideo is also useful for displaying a movie on multi-tile displays such as a PowerWall or CAVE Virtual Reality display. CEI's EnLiten »scenario« geometry format (file extension .els) allows saving the EnSight 3D scene including all types of animation for playback and presentation. Both EnLiten and EnVideo are free to download and are used to share visualization work among colleagues, suppliers and customers, and with management.

MeltDB: a software platform for the analysis and integration of metabolomics experiment data

The design of the MeltDB object model was influenced by ArMet and the recommendations of the Metabolomics Standards Initiative (MSI) workgroup. Various classes of ArMet have been adopted, and beyond that the MeltDB data model also supports user access control, a more flexible, ontology-based metabolomic experiment annotation, and the possibility to integrate and parameterize preprocessing algorithms and methods that can be submitted to a compute cluster. The system is realized using a three-tier architecture consisting of a database layer, a business logic layer and a presentation layer (Figure 4.1a). The O2DBI software (unpublished) was used to design the data model and generate an XML document that formally describes all classes and hierarchies. Based on this formal definition, a documented application programming interface (API) was created in both Java and Perl to provide the core functionality of the MeltDB software framework. The API supplies create, retrieve, update and delete (CRUD) functionality for all modeled classes of the data model. The core functionality can easily be extended and new methods can be added to the objects on demand. Furthermore, the auto-generated API is the basis of the business logic layer and provides an object-relational mapping for all modeled classes of the MeltDB data model.
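A hypothetical sketch of what a generated CRUD interface for one modeled class could look like on the Java side; the actual names of the O2DBI-generated MeltDB API are not given in the text:

```java
import java.util.List;
import java.util.Optional;

// Generic CRUD contract a generator could emit per modeled class.
interface CrudDao<T, ID> {
    T create(T object);           // INSERT, returns persisted instance
    Optional<T> retrieve(ID id);  // SELECT by primary key
    T update(T object);           // UPDATE existing row
    void delete(ID id);           // DELETE by primary key
    List<T> listAll();            // convenience query over all rows
}

// A class-specific DAO would then specialize the generic contract,
// e.g. (hypothetical): interface ExperimentDao extends CrudDao<Experiment, Long> {}
```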

Consistency-by-Construction Techniques for Software Models and Model Transformations

Our technique builds upon the following basis (see Chapter 2 for more details): The de facto standards for defining modeling languages in practice are the Eclipse Modeling Framework (EMF) [34] for specifying meta-models and the Object Constraint Language (OCL) [99] for expressing additional constraints. Recent empirical findings suggest that OCL is especially fit to express complex constraints (compared to Java) [149]. Graph transformation [39] has been shown to be a versatile foundation for rule-based model transformation [40], focusing on the models' underlying graph structure. To reason about graph properties, Habel and Pennemann [55] have developed (nested) graph constraints, which are equivalent to first-order formulas on graphs. A construction of application conditions for transformation rules out of constraints was first developed for graphs by Heckel and Wagner [58] and then generalized in [55]. The first component of our technique translates a reasonable subset of OCL constraints to nested graph constraints, using the formally defined OCL translation in [114] as a conceptual basis. The second component of our technique integrates a graph constraint as an application condition into a transformation rule. The resulting application condition guarantees that the EMF model resulting from the successful application of the enhanced rule satisfies the original constraint. We call it a constraint-guaranteeing application condition. This integration does not change the rule's actions. Note that our technique does not yet check in advance whether the given transformation rule with its application condition already guarantees the constraint or not.
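Schematically (the notation here is ours, not the thesis'), the guarantee provided by a constraint-guaranteeing application condition can be stated as:

```latex
% For a rule \rho and a nested graph constraint c, an application
% condition ac_{\rho,c} is constructed such that every successful
% application of \rho respecting ac_{\rho,c} yields a model
% satisfying c:
\[
  G \;\Rightarrow_{\rho,\; ac_{\rho,c}}\; H
  \quad\implies\quad
  H \models c .
\]
```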

Process based data logistics: data integration for healthcare applications

Data exchange between APPs does not always occur synchronously and at once, i.e. when data are entered into the system (e.g. at a particular modality). For example, before such data are passed on to some consuming systems (APPs), they must first be validated. As an example, we regard a special research process from our glaucoma research project that collects data from the diverse ophthalmic modalities and stores them in a central database, called the glaucoma register [JLMM04]. Besides, patient identifications have to be distributed among all modalities; they originate from the glaucoma register. In some special cases, when patients are already registered in the central HIS, patient identifications even have to be incorporated from there into the glaucoma register. There are two principal solutions to the task of data integration in such a challenging environment: a first solution would be to use a sophisticated data management concept for the data integration problem. Centralized, distributed and federated data management systems are the three facets of this database-oriented solution. We argue in [Jabl06] that none of the three approaches is feasible. Another solution approach is to deploy process technology; this is the approach we are pursuing. However, the usage of conventional workflow management technology, which seems to be an obvious solution, is also not practicable. This issue will be considered in Section 4. In order to prepare this discussion and to present our principal solution idea, the most relevant constraints stemming from the clinical application area have to be identified:

Crystal Plasticity and Martensitic Transformations

The model introduced here is based on the phase field model for martensitic transformations proposed in Schmitt et al. (2013a) and Schmitt et al. (2013c). It considers the transformation-induced eigenstrain as a function of the order parameter. For this work, dislocation movement during the phase transformation is taken into account in the framework of crystal plasticity, using distinct slip systems for the austenitic and the martensitic phase. Differing from the former ansatz, the present approach uses a three-well function to model the metastable and stable states of the system, so that two martensitic orientation variants can be considered using a single order parameter. The idealized setting of two variants is reasonable since the model is limited to 2D. For the numerical realization, a finite element scheme is employed. It is noted that plasticity and phase transformation can interact in two ways: through the kinetics and through the driving force, i.e. through the stresses. The former is well studied in the literature (e.g. Olson and Cohen (1975)); the latter is also important but often overlooked. Therefore, in this work, the kinetic interaction is explicitly turned off to isolate the interactions through the stresses. This can easily be generalized. The continuum phase field model for crystal plasticity and martensitic transformations is introduced in section 2. In section 3, the numerical realization is explained, considering the two different time scales of the model. The numerical results are examined in section 4, where basic examples are chosen to gain a better understanding of the correlations between the different processes in the microstructure. In the last section, the main results and methods are summarized, together with some comments on future work.
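An illustrative three-well function of the kind described, with a single order parameter whose minima represent austenite and the two martensitic orientation variants; the concrete polynomial is an assumption, not the paper's function:

```latex
% Three wells at \varphi = -1 (variant 1), \varphi = 0 (austenite)
% and \varphi = +1 (variant 2):
\[
  f(\varphi) \;=\; \alpha\,\varphi^{2}\,(\varphi - 1)^{2}\,(\varphi + 1)^{2},
  \qquad \alpha > 0 .
\]
```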

Investigations on linear transformations for speaker adaptation and normalization

The semi-tied covariances or MLLT approach has been extended in [Olsen & Gopinath 02] by increasing the degrees of freedom of the transformation, called extended maximum-likelihood linear transformation (EMLLT). The inverse covariance matrices are taken from a subspace spanned by a chosen number of rank-one matrices. The complexity of the (inverse) covariance modeling can be adjusted consistently from diagonal covariances (which is equal to MLLT) up to full covariance modeling by choosing the number of rank-one matrices which build up the subspace to which the inverse covariances are restricted. The authors report recognition test results on an in-house car navigation task; the recognition performance could be improved by 9.5% rel. using MLLT and by 35% rel. using EMLLT. A further extension to EMLLT has been presented in [Axelrod & Olsen 02]. Instead of using rank-one matrices, the subspace is spanned by a chosen number of arbitrary symmetric matrices. Although the overall recognition accuracy could not be improved compared to full covariance modeling, the authors achieved consistently better results compared to EMLLT and diagonal covariance modeling given an equal number of parameters. Thus this approach allows for a significant reduction in model complexity (and thus memory requirements and computation time) with only little loss in recognition accuracy.
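In formulas, the EMLLT restriction on the inverse covariances reads as follows (notation ours):

```latex
% Each inverse covariance is restricted to the span of D rank-one
% matrices a_k a_k^T shared across densities j:
\[
  \Sigma_j^{-1} \;=\; \sum_{k=1}^{D} \lambda_{jk}\, a_k a_k^{\mathsf{T}},
  \qquad d \le D \le \tfrac{d(d+1)}{2},
\]
% where d is the feature dimension: D = d recovers MLLT-style diagonal
% modeling in the transformed space, and D = d(d+1)/2 reaches full
% covariance modeling.
```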

Goodness of fit tests for type-II right censored data: structure preserving transformations and power studies

… χ²-distribution with m − 1 degrees of freedom. Neyman's smooth tests (see Neyman (1937)) also possess the property of an approximate χ²-distribution, but they grew from a completely different idea. In addition to the practical applicability of the test statistic, a goodness-of-fit test is primarily judged by its power, i.e., if the null hypothesis does not hold true, the test should reject it with the highest possible probability. Since goodness-of-fit tests are regarded as omnibus tests in this thesis, which means that the set of alternatives (here given by {F absolutely continuous cdf : F ≠ U(0, 1)}) is non-parametric and not restricted to any specific family of distributions, it is not possible to determine the power of such a test against every distribution from the alternative. Also, one cannot expect to find a test which yields the best overall power among all goodness-of-fit tests.
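As a concrete instance of the approximate χ²-distribution mentioned for Neyman's smooth tests, the classical order-m smooth test statistic can be written as follows (standard formulation, not quoted from the thesis):

```latex
% With orthonormal Legendre polynomials \pi_1,\dots,\pi_m on [0,1] and
% U_1,\dots,U_n the probability-integral-transformed observations:
\[
  \Psi_m^2 \;=\; \sum_{k=1}^{m}
  \Bigl( \tfrac{1}{\sqrt{n}} \sum_{i=1}^{n} \pi_k(U_i) \Bigr)^{2}
  \;\xrightarrow{\;d\;}\; \chi^2_m
  \quad \text{under } H_0 .
\]
```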

"Digital Taylorism"? Data Practices and Governance in the Enterprise Software Salesforce

"Digital Taylorism"? Data Practices and Governance in the Enterprise Software Salesforce

Whereas techniques of scientific management consisted of distributed practices, digital enterprise software connects the practices of labor division, assessing ideal ways of working, governing labor processes and monitoring in one platform. The management of data is the central task of enterprise software in organizations: data about revenue streams, resources and labor performance is permanently collected in order to prestructure and automate organizational workflows and decisions. Thus, the most important organizing power of enterprise software like Salesforce probably lies in its potential centralization of all kinds of digital flows of information: Salesforce digitally connects the data flows of process models, instructions, predictions and performance data in one system, or more precisely on one platform. This allows for dynamic and continuous “real-time” feedback to the workers/users. Everything is in one place: calendar, e-mails, notes, supposedly even the interaction between employees. With Chatter, Salesforce provides a communication system for employees that offers features very similar to Facebook. To put it pointedly: there is no outside of Salesforce anymore when all the operations of an organization can be displayed, controlled and monitored through this one system.
