
3.6 Demonstration for heterogeneous ALM system

3.6.3 Result and feedback for heterogeneous case

The generated workflows are the same as in the previous experiment. Without an exact target development there is no reason to change them, and so far they model the possible needs well. The result of a generated issue can be seen in Fig. 28. It is clear from the picture that there is no direct link: the source of the problem cannot be accessed by clicking on the identifier. However, the identifier is unique, so the artifact can be located easily. To make the search even more effective, the DOORS URL is added, which opens the formal module where the problematic artifact can be found. A timestamp is also included to check whether the problem might still exist or could already have been solved.

28. Figure Generated workflow in the heterogeneous system with a reference to where the source artifact can be found

This solution has two major weaknesses. First of all, it requires maintenance not only on the JIRA side: the DXL scripts also have to be kept up to date. Furthermore, in the case of huge databases the constant generation of CSV files (note that every (!) test item has to be added) has an enormous resource demand. This can be tolerated for this demonstration, but in practice it should be replaced with more effective methods. (A possible solution could be a framework which programmatically calls the DXL and JIRA scripts and which would be responsible for sharing the necessary information between the systems via function calls with proper parameters; a minimal sketch is given below.)
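The following is only a minimal sketch of such a bridging framework, not the implementation used in the demonstration: it assumes a local JIRA instance reachable over its standard REST API and a hypothetical DXL script (export_changed.dxl) that can be run in DOORS batch mode; all names, paths and credentials are placeholders.

```python
"""Minimal sketch of the proposed bridging framework (all module, file and
credential names are assumptions). Instead of regenerating a full CSV dump of
every test item, only the items changed since the last run are requested from
DOORS (through a batch DXL script) and forwarded to JIRA via its REST API."""

import json
import subprocess
from datetime import datetime, timezone

import requests  # third-party HTTP client used for the JIRA REST calls

JIRA_URL = "https://jira.example.org"                   # assumption: local JIRA instance
JIRA_AUTH = ("alm-bridge", "api-token")                 # assumption: service account
DOORS_CMD = ["doors", "-batch", "export_changed.dxl"]   # assumption: DXL export script


def export_changed_items(since: datetime) -> list[dict]:
    """Run the DXL export in DOORS batch mode; the script is assumed to read
    the timestamp from last_run.txt and to write the changed artifacts
    (identifier, DOORS URL, text, timestamp) into changed_items.json."""
    with open("last_run.txt", "w", encoding="utf-8") as fh:
        fh.write(since.isoformat())
    subprocess.run(DOORS_CMD, check=True)
    with open("changed_items.json", encoding="utf-8") as fh:
        return json.load(fh)


def create_issue(item: dict) -> None:
    """Create a JIRA issue carrying the unique identifier, the DOORS URL of the
    formal module and the timestamp, mirroring the fields shown in Fig. 28."""
    payload = {
        "fields": {
            "project": {"key": "ALM"},                  # assumption: project key
            "issuetype": {"name": "Bug"},
            "summary": f"Inconsistent artifact {item['identifier']}",
            "description": f"Source: {item['doors_url']}\n"
                           f"Detected at: {item['timestamp']}",
        }
    }
    response = requests.post(f"{JIRA_URL}/rest/api/2/issue",
                             json=payload, auth=JIRA_AUTH, timeout=30)
    response.raise_for_status()


if __name__ == "__main__":
    last_run = datetime(2024, 1, 1, tzinfo=timezone.utc)  # would be persisted in practice
    for changed_item in export_changed_items(last_run):
        create_issue(changed_item)
```

Compared with the CSV-based demonstration, the essential difference is that the shared information travels through function calls with proper parameters instead of through full database dumps, so the resource demand scales with the number of changes rather than with the size of the database.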

In terms of the technical action research method [84], the next step for this experiment should be the verification of the concept with industrial partners. During the course of my PhD studies I was not able to evaluate the idea effectively with a third party. However, it is highly likely that the statements made for the previous, homogeneous setup are valid here as well, namely that it would be beneficial to modify an existing workflow and force users to follow every rule.

Furthermore, it would likely be welcomed here as well to start the evaluation from a certain baselined condition of the system. This way, only the modifications would need to be checked, which greatly reduces the resource demand, since only problems newly introduced into the system are searched for. A possible way to select these items is sketched below.
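As an illustration only, a baseline-based selection could be as simple as the following sketch; the storage format and file name are assumptions, not part of the demonstrated setup.

```python
"""Sketch of baseline-based delta selection (hypothetical storage format):
the content hash of every artifact is recorded when the baseline is accepted,
so a later run only re-checks artifacts that are new or whose hash changed."""

import hashlib
import json


def content_hash(text: str) -> str:
    return hashlib.sha256(text.encode("utf-8")).hexdigest()


def load_baseline(path: str = "baseline.json") -> dict[str, str]:
    """identifier -> content hash recorded when the baseline was accepted."""
    with open(path, encoding="utf-8") as fh:
        return json.load(fh)


def items_to_check(current: dict[str, str], baseline: dict[str, str]) -> list[str]:
    """Return the identifiers that are new or whose content changed since the baseline."""
    return [ident for ident, text in current.items()
            if baseline.get(ident) != content_hash(text)]
```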


However, the question is always open: how can the above-mentioned solutions be further improved? As Biro et al. [KJ10] have stated in their study, finding traceability gaps is not self-evident. From this point of view, the plain analysis of the quantity and quality of relationships is not enough, because the decomposition/integration of requirements has to be considered for a correct statement. This means that the content of the requirements plays a huge role here, which cannot easily be processed by computers. This raises the need for semantic analysis of requirements, which still does not guarantee that every traceability gap or inconsistency would be found. As an alternative solution, machine learning could be utilized to discover relationships between artifacts, but the existing literature on this topic is limited [97]. The small example below illustrates why a purely quantitative link check is insufficient.
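A tiny, hypothetical example of the problem: the link-based gap finder below reports no gap, even though one requirement is only partially decomposed at the content level.

```python
"""Illustration (hypothetical data) of why counting links is not enough: the
quantitative check reports no gap, although requirement R2 is only partially
covered at the content level."""

# artifact id -> set of downstream artifact ids it traces to
links = {
    "R1": {"T1"},   # requirement R1 is covered by test T1
    "R2": {"T2"},   # R2 has a link, but T2 exercises only part of its content
}


def link_gap(artifacts: dict[str, set[str]]) -> list[str]:
    """Quantitative check: flag artifacts with no downstream trace at all."""
    return [art for art, targets in artifacts.items() if not targets]


print(link_gap(links))  # -> [] : no gap is reported
# A content-level (semantic) analysis would still be needed to notice that the
# decomposition of R2 into T2 is incomplete.
```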

On the other hand, the application of formal methods is an already established technique. Indeed, the application of formal methods is recommended by the Capability Maturity Model Integration (CMMI) above safety integrity level (SIL) 2, and it is highly recommended for systems at the highest safety level (SIL 4). This means that the lack of application of formal methods must be discussed in detail, and this justification is checked by a third party during assessment (typically by certification bodies). This fact, together with the challenges posed by the increasing complexity of software [98], shows the importance of formal methods; still, they are not generally welcomed by everyone in the industry.

Although total test coverage and spotless decomposition can be guaranteed only by formal methods in software engineering [99], they are still not unconditionally applied [100].

One of the main reasons is that artifacts have to be formulated in a more machine-like way and less like natural language, and the approach often faces computational problems as well. When the number of formal methods related studies is compared with the number of those discussing verification or testing [100], it can be seen that most companies probably still stick to the classical approach.

Still, formal methods can be used in practice, but their application is not seamless, as Mashkoor et al. [101] have already stated. According to their study, it is possible to use formal methods to decompose requirements while guaranteeing consistent and error-free specifications, but it is hard to handle their abstractness compared to the product under validation. Also, the decomposition itself is not straightforward. However, completeness and a complete mathematical description are still tempting properties, which makes them worth using in further studies.

These were, among others, the motivations to start the development of a tool which utilizes the existing results and further improves them with the help of formal methods. One of the important features of this proposal builds on what we have learned from the previous evaluations.

This process is called 'graceful integration', as the up-front effort needed to process artifacts is reduced. The processing is less demanding because only newly created or modified items are considered, while the existing part of the system is assumed to be complete and consistent. The system is called Requirements Traceability and Consistency checking Tool (RTCT) [KJ13].

The main aim is to provide a tool which can be integrated as a middleware into practically any system and which can prove formally that the stored artifacts are mutually consistent and satisfy the applied requirement traceability model. A rough illustration of checking only the changed items is given below.
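The sketch below illustrates the intent of graceful integration only; the Artifact data model and the single rule are illustrative assumptions, not the actual RTCT model or its formal machinery. The delta could be obtained, for example, as in the earlier baseline sketch.

```python
"""Rough illustration of the 'graceful integration' idea: only the delta since
the accepted baseline is checked against a traceability rule, while the rest
of the system is assumed to be complete and consistent (simplified data model,
not the actual RTCT implementation)."""

from dataclasses import dataclass


@dataclass(frozen=True)
class Artifact:
    identifier: str
    kind: str                   # e.g. "requirement" or "test"
    traces_to: frozenset[str]   # identifiers of upstream artifacts


def check_delta(delta: list[Artifact], all_artifacts: dict[str, Artifact]) -> list[str]:
    """Example rule from a traceability model: every new or modified test must
    trace to at least one existing requirement."""
    problems = []
    for artifact in delta:
        if artifact.kind == "test":
            if not any(upstream in all_artifacts
                       and all_artifacts[upstream].kind == "requirement"
                       for upstream in artifact.traces_to):
                problems.append(f"{artifact.identifier}: no valid requirement trace")
    return problems
```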
