This paper presents the Automotive Service-Oriented Software Architecture (ASOA). ASOA enables software to be built using a runtime-integrated, service-oriented software architecture. Software components in ASOA are split into platform-agnostic services that can be integrated at runtime by a central controller. Services also possess generic interfaces that allow any service supplying appropriate data to be connected at runtime to any other service requiring this data. The architecture also offers significant advantages for the reuse and replacement of services, as well as for the interaction of services with the organizational structure of development. ASOA is already utilized in the UNICARagil project and also provides organizational tools to aid development.
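The runtime-integration idea can be illustrated with a minimal sketch: services declare which data types they provide and require, and a central controller matches them at runtime. All names (`Service`, `Orchestrator`, the example service names) are hypothetical illustrations, not the actual ASOA API.

```python
from dataclasses import dataclass, field

@dataclass
class Service:
    """A platform-agnostic service with generic, typed data ports."""
    name: str
    provides: set = field(default_factory=set)   # data types this service supplies
    requires: set = field(default_factory=set)   # data types this service consumes

class Orchestrator:
    """Central controller that wires compatible services at runtime."""
    def __init__(self):
        self.services = []

    def register(self, service):
        self.services.append(service)

    def connect(self):
        # Any provider of a data type may serve any consumer of that type.
        links = []
        for consumer in self.services:
            for needed in consumer.requires:
                for provider in self.services:
                    if needed in provider.provides and provider is not consumer:
                        links.append((provider.name, needed, consumer.name))
        return links

orch = Orchestrator()
orch.register(Service("lidar", provides={"PointCloud"}))
orch.register(Service("planner", requires={"PointCloud"}))
print(orch.connect())  # [('lidar', 'PointCloud', 'planner')]
```

Because matching happens over generic data types rather than concrete service identities, a service can be replaced by any other provider of the same data without touching its consumers.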
For the quality assurance of biometric data, no open standards or interfaces are available. Thus, a flexible provider-based architecture is required for quality assurance. As shown in Figure 2-2, a common QA module interface shall be used, which chooses a QA module provider for quality assurance. The actual quality check of biometric data is implemented within the QA module provider. This allows for the use of multiple QA module providers simultaneously.
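The provider-based design can be sketched as follows; the class and method names (`QAModuleProvider`, `check`, `assess`) and the placeholder scores are assumptions for illustration, since no open interface standard exists.

```python
from abc import ABC, abstractmethod

class QAModuleProvider(ABC):
    """One concrete quality check for biometric data."""
    @abstractmethod
    def check(self, sample: bytes) -> float: ...

class SharpnessProvider(QAModuleProvider):
    def check(self, sample):
        return 0.9  # placeholder quality score

class ContrastProvider(QAModuleProvider):
    def check(self, sample):
        return 0.7  # placeholder quality score

class QAModuleInterface:
    """Common interface that dispatches to one or more providers."""
    def __init__(self, providers):
        self.providers = providers

    def assess(self, sample):
        # Multiple QA module providers can run simultaneously; collect all scores.
        return {type(p).__name__: p.check(sample) for p in self.providers}

qa = QAModuleInterface([SharpnessProvider(), ContrastProvider()])
scores = qa.assess(b"fingerprint-sample")
print(scores)  # {'SharpnessProvider': 0.9, 'ContrastProvider': 0.7}
```

The application only depends on the common interface; swapping or adding providers requires no change to the caller.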
Throughout the years, numerous ASRP approaches have been presented, including [BMP09, CG07a, DS95, FGGM10, GL05, GPHG+03, Gra05, KM97, PDAC05, RSP03, RRU05, ST07a, ST06, WPC06, YCA04, ZL10], and others. Beyond absorbing DTMCs, other DTMC and CTMC types, as well as semi-Markov processes, have been used to represent the software architecture and its failure potentials; a comprehensive overview is given by the survey of Goseva-Popstojanova et al. [GPT01]. Further categories of approaches use less related formalisms, but are still counted towards ASRP by the survey. These categories include the path-based and the additive approaches. Path-based approaches explicitly enumerate the possible control flow paths through the architecture, together with their occurrence probabilities. While these approaches are not affected by the Markov assumption, the potentially high number of possible paths often makes a comprehensive enumeration impossible and instead requires considering the most frequent paths only. Additive approaches calculate a system's failure rate in a straightforward fashion as the sum of the individual component failure rates, thereby imposing the strong assumption that service execution essentially always visits each of the system's components.
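The two calculation styles can be made concrete with a small sketch (the numeric values are invented for illustration): the path-based estimate weights the reliability of each enumerated path by its occurrence probability, while the additive estimate simply sums component failure rates.

```python
def path_based_reliability(paths):
    """Path-based: sum over paths of P(path) * product of component reliabilities.

    paths: list of (occurrence_probability, [component_reliabilities along path])
    """
    total = 0.0
    for prob, comps in paths:
        path_rel = 1.0
        for c in comps:
            path_rel *= c
        total += prob * path_rel
    return total

def additive_failure_rate(component_rates):
    """Additive: system failure rate as the plain sum of component failure rates."""
    return sum(component_rates)

# Two control flow paths with invented probabilities and component reliabilities.
paths = [(0.7, [0.99, 0.98]), (0.3, [0.99, 0.95])]
print(round(path_based_reliability(paths), 5))  # 0.96129

print(additive_failure_rate([1e-4, 2e-4, 5e-5]))  # about 3.5e-4 failures per hour
```

The additive sum is only a valid system failure rate under the strong assumption stated above, that every execution visits every component.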
The design of the software architecture is highly focused on modularity to keep the software as flexible as possible. Fig. 7 shows the top-level components of the software architecture. Each blue box represents a single module in the software, while the orange boxes stand for hardware components that must be accessed and controlled from the software. The software modules are designed as stand-alone components that run in their own tasks. Every module has defined input and output slots to communicate with other modules. This way, side effects are kept to a minimum and only appear at the hardware communication.
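The slot mechanism can be sketched as modules whose only coupling is a shared channel between one output slot and one input slot; the names (`Module`, `connect`, the slot names) are hypothetical, not taken from the described software.

```python
from queue import Queue

class Module:
    """A stand-alone software module with named input and output slots."""
    def __init__(self, name):
        self.name = name
        self.inputs = {}    # slot name -> channel this module reads from
        self.outputs = {}   # slot name -> channel this module writes to

def connect(src, out_slot, dst, in_slot):
    """Wire an output slot of one module to an input slot of another."""
    channel = Queue()
    src.outputs[out_slot] = channel
    dst.inputs[in_slot] = channel

sensor = Module("sensor")
control = Module("control")
connect(sensor, "measurement", control, "measurement")

sensor.outputs["measurement"].put(42)          # data flows only through the slot
received = control.inputs["measurement"].get()
print(received)  # 42
```

Since modules never reference each other directly, a module can be exchanged by rewiring its slots, which is exactly what keeps side effects confined.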
Abstract. This paper presents a flexible software architecture concept that allows the automatic generation of fully accessible PDF documents originating from various authoring tools such as Adobe InDesign or Microsoft Word. The architecture can be extended to include any authoring tool capable of creating PDF documents. For each authoring tool, a software accessibility plug-in must be implemented which analyzes the logical structure of the document and creates an XML representation of it. This XML file is used in combination with an untagged, non-accessible PDF to create an accessible PDF version of the document. The implemented accessibility plug-in prototype allows authors of documents to check for accessibility issues while creating their documents and to add the additional semantic information needed to generate a fully accessible PDF document.
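The core of the described pipeline, mapping the exported logical structure to PDF structure-element types, can be sketched as below. The XML schema and the mapping are invented for illustration (the real plug-in's export format is not specified here), and heading levels are deliberately simplified.

```python
import xml.etree.ElementTree as ET

# Hypothetical XML structure export from an authoring-tool plug-in.
structure_xml = """
<document>
  <heading level="1">Introduction</heading>
  <paragraph>First paragraph.</paragraph>
  <figure alt="System overview"/>
</document>
"""

def structure_to_pdf_tags(xml_text):
    """Map each logical element to a PDF/UA structure-element type.

    Simplified: all headings map to H1; a real tagger would use the
    level attribute and attach alt text to figures.
    """
    mapping = {"heading": "H1", "paragraph": "P", "figure": "Figure"}
    root = ET.fromstring(xml_text)
    return [mapping[child.tag] for child in root]

tags = structure_to_pdf_tags(structure_xml)
print(tags)  # ['H1', 'P', 'Figure']
```

The resulting tag sequence is what would then be merged with the untagged PDF's content stream to produce the tagged, accessible version.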
The ontology is depicted in Figure 7.3. It defines three general concepts, namely ArtifactThing, SourceCodeArtifact, and SourceCodeArtifactFile. The root concept ArtifactThing represents a so-called meta concept. The concepts SourceCodeArtifact and SourceCodeArtifactFile are sub-concepts of this concept and therefore inherit the characteristics and attributes of this meta concept. An ArtifactThing has a name (hasName) and is uniquely identified by an ID (hasID). Therefore, the sub-concepts SourceCodeArtifact and SourceCodeArtifactFile are associated with a name and an ID. Additionally, individuals of the concept ArtifactThing are connected with a time stamp indicating when an ArtifactThing (i.e., a SourceCodeArtifact or SourceCodeArtifactFile) has been created. The concept SourceCodeArtifact represents a product that is produced during the design and/or development of a software system. Different artifacts can depend on each other. A dependency between artifacts is modeled by the object property dependOn. This object property models a very generic relationship that can be established between arbitrary things in the context of the source code artifact domain. This relation generalizes different types of relations, e.g., dependencies between source code entities and relationships between documents (software architecture, requirements, source code comments, etc.). Such relations are refined in more specific ontologies in which relations inherit from this relation. For example, in the Java
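The concept hierarchy and its properties can be mirrored as a small class sketch, keeping the ontology's identifiers (hasName, hasID, dependOn); representing OWL concepts as Python classes is of course a simplification for illustration only.

```python
from dataclasses import dataclass, field
from datetime import datetime

@dataclass
class ArtifactThing:
    """Meta concept: every artifact has a name, a unique ID, and a creation time."""
    hasName: str
    hasID: str
    createdAt: datetime = field(default_factory=datetime.now)
    dependOn: list = field(default_factory=list)  # generic dependency relation

class SourceCodeArtifact(ArtifactThing):
    """A product of the design and/or development of a software system."""

class SourceCodeArtifactFile(ArtifactThing):
    """The file in which a source code artifact is stored."""

# Sub-concepts inherit name, ID, time stamp, and the dependOn property.
parser = SourceCodeArtifact("Parser", "a1")
lexer = SourceCodeArtifact("Lexer", "a2")
parser.dependOn.append(lexer)  # dependency between two artifacts
print(parser.hasName, "->", parser.dependOn[0].hasName)  # Parser -> Lexer
```

A refined ontology would subclass the generic dependOn into more specific relations, just as the text describes for, e.g., source code entity dependencies.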
The software architecture for the TET and AsteroidFinder satellites is a further step in the development line first used for the BIRD satellite. The new improvements add dependability, flexibility, and simplicity to a core avionics system already implemented, simulated, and tested in similar ESA and industrial projects, proving the basic concept of the architecture. The new core avionics concept targets the problems of complexity, software-hardware interfaces, and the difficulties of merging many different interfaces into a single system by providing a very simple solution of integrated software and hardware, thus eliminating the barrier between the two. Through this concept, both the bus control and payload control can be handled through one system. Implementing a complex parallel system safely requires the composition of a network of simple sequential cooperating applications which communicate by using well-defined interfaces. The basic communication principles common to all target systems are described in this paper.
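The idea of composing a parallel system from simple sequential applications with well-defined interfaces can be sketched with two cooperating tasks connected by message channels; the application names and message contents are invented for illustration.

```python
import threading
from queue import Queue

def application(name, inbox, outbox):
    """A simple sequential application: receive one message, process, forward."""
    msg = inbox.get()
    outbox.put(f"{name} handled {msg}")

# Channels form the well-defined interfaces between the applications.
bus_in, link, result = Queue(), Queue(), Queue()
bus_ctrl = threading.Thread(target=application, args=("bus-control", bus_in, link))
payload_ctrl = threading.Thread(target=application, args=("payload-control", link, result))
bus_ctrl.start()
payload_ctrl.start()

bus_in.put("telecommand")
bus_ctrl.join()
payload_ctrl.join()
final = result.get()
print(final)  # payload-control handled bus-control handled telecommand
```

Each application stays strictly sequential; the parallelism and all interaction live entirely in the channels, which is what keeps the composition analyzable.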
The literature about recommender systems always assumes that items should be recommended to a particular user. In the case of decision making in software architecture, however, a person might be involved in multiple different software projects. Furthermore, many stakeholders could collaborate on the same software system. Therefore, the main factor influencing the recommendation is not the user and their behavior but the architecture of the system under development. This architecture is described using an architecture profile as described in Section 2.2. The items that are recommended for a particular software architecture are decision models for particular architectural aspects of a system and the design options to be selected within a particular decision model. As mentioned earlier, the system should make recommendations for architectural design decisions even if little is known about the architecture of the software system. Therefore, the recommender system needs to deal with the cold start problem. Among the discussed techniques, only a knowledge-based recommender system addresses the cold start problem, so such a component has to be included. On the other hand, the recommender system requires a learning-based technique, such as a collaborative-filtering or content-based method, to become more precise as more architecture knowledge becomes available in the architecture knowledge base. These requirements show that multiple recommender methods need to be applied. Therefore, hybridization techniques as presented in Section 3.6 need to be applied to build a hybrid recommender system, which is necessary to satisfy all requirements.
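A weighted hybridization of the two parts can be sketched as below. The scoring functions, the profile format, and the weighting rule are all assumptions for illustration, not the system's actual design; the point is only that the knowledge-based part carries the cold start and the learning-based part takes over as history grows.

```python
def knowledge_based(profile, items):
    """Constraint matching against the architecture profile; needs no history."""
    return {name: 1.0 if profile["style"] in tags else 0.0
            for name, tags in items.items()}

def collaborative(history, items):
    """Relative frequency of an item in similar past projects (empty at cold start)."""
    total = sum(history.values()) or 1
    return {name: history.get(name, 0) / total for name in items}

def hybrid(profile, items, history):
    # Weight shifts toward the learning-based part as knowledge accumulates.
    w = min(len(history) / 10, 0.8)
    kb, cf = knowledge_based(profile, items), collaborative(history, items)
    return {name: (1 - w) * kb[name] + w * cf[name] for name in items}

items = {"LayeredPersistence": {"layered"}, "EventBus": {"event-driven"}}
profile = {"style": "layered"}
scores = hybrid(profile, items, history={})  # cold start: no past selections
print(scores)  # {'LayeredPersistence': 1.0, 'EventBus': 0.0}
```

At cold start the weight is zero, so recommendations come purely from the knowledge-based constraints; with a populated knowledge base the collaborative scores increasingly dominate.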
The ARPA project on Domain-Specific Software Architecture (DSSA) [1] in the 1990s marked the start of increased interest in the software engineering community in developing reference architectures for specific application domains. This interest has led to designs like IBM's ADAGE reference model for avionics [8] or Philips' Koala model for consumer electronics [203]. More recently, a group of leading automotive manufacturers and suppliers has initiated AUTOSAR [86], a joint initiative for developing an open reference architecture for automotive electronics systems that is agnostic of the implementation language and the execution environment. As a broad domain-specific architectural proposal, DASA shares many high-level goals with initiatives like AUTOSAR. Both attempt to address the existing complexity and inefficiencies in the software development process through the identification of core abstractions and the standardization of interfaces and interaction patterns. Although both initiatives target resource-constrained embedded devices, the different application contexts have led to significantly different architectural decisions. These differences highlight the adaptation of DASA to the specific needs of the WSN domain.
2.1 General Architecture
The software architecture pursues the uniform strategy to integrate biometric processes in different enrolment, verification, and identification scenarios within the scope of German public sector applications. The software architecture is based on open standards, in particular BioAPI 2.0 [ISO_19784-1]. Applications use the BioAPI 2.0 Framework to access the particular functionality for the specific Application Profile, which is implemented in a BioAPI 2.0 Biometric Service Provider (BSP). Figure 2-1 gives an overview of the different enclosed layers, where the types of the applications and BSPs should be seen as examples.
Head of the Research Group Software Construction, RWTH Aachen University, Germany
Software Architecture – Reconstruction, Evaluation, and Evolution
In civil engineering, the architecture of large constructions such as industrial buildings or entire cities plays an important role in achieving their desired properties. Similarly, a software system's architecture defines how the system is built from individual elements and how these are interconnected. Consequently, the software architecture directly influences important quality attributes, e.g., understandability, maintainability, modifiability, scalability, or performance. In addition, central architectural design decisions are considered hard to change.
Because the architecture model abstracts from the actual system and its implementation, the models do not necessarily contain enough information to decide automatically for any change whether it changes functionality or not. For example, the automated improvement method Performance Booster (Xu, 2010) contains a rule that suggests reducing the resource demand of an LQN software task to improve performance. Such a change leads to a conforming model, because only the resource demand parameter—a double value in LQN—is changed in the suggested new model. However, we cannot decide automatically whether the changed software task can still provide the same functionality with this reduced effort, because information on the used algorithms and the resulting functionality is not contained in the LQN model. In this example, only humans can interpret the suggested new model with reduced resource demand and decide whether it is functionally equivalent to the initial model. Furthermore, even if the architectural model contained a specification of the functional semantics, the resulting problem would be undecidable in general.
Second, the solution approach to the problem is discussed. These characteristics determine how well a method can solve the posed problem, and also show the assumptions and simplifications a method makes in order to solve the problem efficiently. A relevant property is (S.1) the used quality model, which determines the expressiveness and validity of the predictions. The used optimization or improvement technique (S.2) describes which actual optimization or improvement algorithms are used to find better solutions. Finally, we check whether and how domain-specific knowledge (S.3) is integrated into the method. S.1 Quality model: We survey what quality prediction model is used. In particular, the composition of quality properties from properties of single architecture elements is of interest. Here, simplified models assume aggregation functions, e.g. that a quality property of the system is the sum of the quality properties of architecture elements, such as components. While such simplified models can be useful for some quality attributes (such as costs), other quality attributes are emerging properties of the system (e.g. performance or reliability), and their simplified handling requires strong assumptions. Thus, more expressive quality models are desirable in general. However, in particular domains such as service-oriented systems, where the performance of one service is independent of the performance of another, such assumptions are more realistic. Thus, depending on the domain of software architectures considered, less expressive quality models can enable more efficient optimization approaches at the cost of being limited to that domain. In the table, we name the concrete quality models used in the presented case studies in parentheses.
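The contrast between additive and emergent quality properties can be made concrete with invented example values: cost genuinely composes as a sum, whereas summing per-component service times only yields a contention-free best case.

```python
# Invented component properties for illustration (cost in EUR, times in ms).
components = {"frontend": {"cost": 120, "service_time_ms": 20},
              "backend":  {"cost": 300, "service_time_ms": 50}}

# Cost composes additively: the system costs what its components cost.
total_cost = sum(c["cost"] for c in components.values())
print(total_cost)  # 420

# Summing service times gives only a contention-free lower bound; under load,
# queueing effects make response time an emergent property of the system.
naive_response_ms = sum(c["service_time_ms"] for c in components.values())
print(naive_response_ms)  # 70
```

This is exactly why an additive quality model is acceptable for costs but requires strong assumptions (e.g. independence, as in the service-oriented case mentioned above) for performance or reliability.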
tion cost and involves the danger of error sources. In particular, the observer pattern is inadequate for messaging in distributed systems with an RPC middleware like R-OSGi (Vandenhouten et al. 2009). An alternative solution for using events for the communication between local components is an event-oriented architecture, also known as Event-Driven Architecture (EDA). Connected with EDA is the software technology for the realization of EDAs, called Complex Event Processing (CEP). Since CEP not only enables communication by means of events but also offers additional features supporting further aspects of the intelligent system, CEP is the preferred choice for the communication layer of the intelligent system.
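The step from plain event-driven communication to CEP can be sketched as follows: an event bus decouples publishers from subscribers, and a CEP-style rule derives a complex event from a pattern of simple events. The rule (two consecutive over-threshold temperatures) and all names are invented for illustration.

```python
from collections import defaultdict

class EventBus:
    """Minimal EDA: publishers and subscribers are fully decoupled."""
    def __init__(self):
        self.handlers = defaultdict(list)

    def subscribe(self, event_type, handler):
        self.handlers[event_type].append(handler)

    def publish(self, event_type, payload):
        for handler in self.handlers[event_type]:
            handler(payload)

class OverheatDetector:
    """CEP-style rule: derive a complex event from a pattern of simple events."""
    def __init__(self, bus, threshold=80):
        self.bus, self.threshold, self.count = bus, threshold, 0
        bus.subscribe("temperature", self.on_temperature)

    def on_temperature(self, value):
        self.count = self.count + 1 if value > self.threshold else 0
        if self.count >= 2:  # two consecutive high readings -> complex event
            self.bus.publish("overheat", value)

bus = EventBus()
alerts = []
bus.subscribe("overheat", alerts.append)
OverheatDetector(bus)
for t in [75, 85, 90]:
    bus.publish("temperature", t)
print(alerts)  # [90]
```

Unlike the observer pattern, neither the temperature source nor the alert consumer knows the other; the pattern logic lives in a dedicated rule component, which is the additional feature CEP contributes beyond plain event delivery.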
User space software — We use the component framework HIROSCO (HIgh-level RObotic Spacecraft COntroller) to implement the application software running in user space. This framework has a service-oriented architecture, which means it provides services specified in the Packet Utilization Standard (PUS) of the European Cooperation for Space Standardization (ECSS) for the interaction of components implemented with it. In order not to burden the component developer with details of the PUS, HIROSCO promotes a data-centric approach. Data of a component just needs to be registered in a so-called dictionary. Using an XML file, the framework can be configured to provide this data to other components or to ground via the PUS interface, e.g. for housekeeping. Besides the framework, HIROSCO provides a component named supervisor that manages the components attached to it and their interaction. For example, it is responsible for starting and stopping them, for monitoring their real-time behavior, and for reacting to events. Mission-specific events are handled by a library dynamically loaded at startup (denoted as EventLib in Figure 6). These events were required mainly to implement a temperature control system so that RJo does not overheat and to shut down the force-feedback in case of internal anomalies. Finally, the HIROSCO framework comes with a component
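The data-centric registration idea can be sketched as below; this is not the HIROSCO API but a hypothetical analogue (all class, method, and key names invented) showing why registering data in a dictionary spares the component developer any PUS details.

```python
class Dictionary:
    """Data-centric registry: components register their data once; the
    framework can then expose it, e.g. for housekeeping reports."""
    def __init__(self):
        self.entries = {}

    def register(self, key, getter):
        self.entries[key] = getter  # getter is called whenever the value is needed

    def housekeeping(self, keys):
        # The framework, not the component, decides what to report and when.
        return {k: self.entries[k]() for k in keys}

class TemperatureSensor:
    """A component that only registers its data and knows nothing about PUS."""
    def __init__(self, dictionary):
        self.value = 21.5
        dictionary.register("rjo.temperature", lambda: self.value)

d = Dictionary()
TemperatureSensor(d)
report = d.housekeeping(["rjo.temperature"])
print(report)  # {'rjo.temperature': 21.5}
```

In the real framework, the set of reported keys would come from the XML configuration rather than from code, keeping components free of telemetry concerns.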
The table also allows us to answer question 1.2 from the GQM plan as follows: EJBmoX is able to reverse-engineer all EJB component classes to BasicComponents and all architecturally relevant Java interfaces to OperationInterfaces. It can also reverse-engineer ProvidedRoles, which are represented by an implements realisation between an EJB component class and an architecturally relevant Java interface. EJBmoX is also able to reverse-engineer RequiredRoles, which are represented by a private field, with the type of an architecturally relevant Java interface, in an EJB component class. It is also able to create implicit interfaces for classes not providing an EJB interface but considered as EJB component classes. Even though we only applied Extract to larger projects, we can indicate that EJBmoX can be applied to larger projects as well, because both approaches share the same infrastructure. The architecture models created with EJBmoX do not violate any OCL constraint from the PCM metamodel. The reason is the same as for Extract: we tailored EJBmoX specifically in order not to violate any OCL constraints in the PCM metamodel. To calculate the ratio between compilation units and created components and interfaces, we can use the information about the reverse-engineered components and interfaces from Table 6.4. We furthermore need the information that mRUBiS consists of 69 compilation units, while the MediaStore consists of 59 compilation units. Hence, we have a ratio of 0.41 for mRUBiS and a ratio of 0.47 for the MediaStore. Even though the ratio is higher than for Extract, we can state that EJBmoX gives a high-level overview of the analysed software systems and abstracts from the source code.
The scientific context of the PCM-REL prediction approach is given through the reliability-tailored fraction of the analysis methods introduced in Section 2.1.1 under the aggregated term fault forecasting. While various methods have been researched extensively and are widely accepted, they do not necessarily focus on software or on the software part of IT systems. Most approaches that have gained a certain level of industrial acceptance by now are more generally tailored towards industrial products with mainly electronic components or parts. The approaches focus on the physical wear-out effects of individual parts and on the various states of degraded service – referred to as failure modes – of a whole product or system resulting from the failure of its individual parts. Hence, their reasoning is based on a primarily hardware-oriented perspective. Target metrics of interest may be qualitative (such as identifying the different failure modes of a system) or quantitative (such as estimating system failure rates or frequencies of occurrence of critical failure modes). Available analysis methods include the Failure Modes and Effects Analysis (FMEA) and its extension for consideration of criticality (FMECA), fault trees, reliability block diagrams, Markov-based analyses, reliability growth analyses, and others. Each method comes in a number of variations, and often a combination of multiple analyses is applied to a certain system under study. A number of standards exist describing how the methods can be applied [Aut08, Int04, Int06a, Int06b, Int06c, Int06d, Uni06]. Both commercial tool suites and consulting services are offered for conducting the analyses, and they are used mainly by the automotive, aeronautics, telecommunications, medical, and electronics industries. The term reliability engineering has been coined to denote the systematic consideration of reliability aspects throughout design and production processes (see [Bir10] for a comprehensive overview).
Framework-assisted dependency injection is a pattern more typically used in enterprise rather than research software. We are tracking other developments in the Java world which may be put to use for system-level simulation development. There is a pattern called reactive programming (see e.g. ), which combines syntax similar to the Java 8 Stream interface with the Observer pattern. This could be put to use in the Event architecture. It provides concise constructs for subscribing, unsubscribing, filtering, merging, grouping, and flattening asynchronous streams. This pattern may be of interest for the MATSim event processing mechanism.
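The combination of stream-style operators with the Observer pattern can be illustrated with a toy observable (sketched here in Python rather than Java; the class, its operators, and the event shapes are all invented for illustration):

```python
class Observable:
    """Toy reactive stream: chainable filter/map plus Observer-style subscribe."""
    def __init__(self):
        self.subscribers = []

    def subscribe(self, fn):
        self.subscribers.append(fn)

    def emit(self, value):
        for fn in self.subscribers:
            fn(value)

    def filter(self, pred):
        out = Observable()
        self.subscribe(lambda v: out.emit(v) if pred(v) else None)
        return out

    def map(self, fn):
        out = Observable()
        self.subscribe(lambda v: out.emit(fn(v)))
        return out

events = Observable()
seen = []
# Declarative pipeline over an asynchronous event stream.
events.filter(lambda e: e["type"] == "linkEnter") \
      .map(lambda e: e["link"]) \
      .subscribe(seen.append)

events.emit({"type": "linkEnter", "link": "L1"})
events.emit({"type": "linkLeave", "link": "L1"})
print(seen)  # ['L1']
```

The appeal for an event processing mechanism is that consumers express what they want (filter, map, merge) as a declarative chain instead of hand-writing observer callbacks with embedded conditionals.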