Another consideration was how serialized objects belonging to a user should be handled when a user session ends abruptly. According to the serialization process described above, the serialized data is only deleted when a Managed Bean request is received by an application. The main complication lies in what should happen when a serialized Managed Bean was never requested before a user’s session ended abruptly. It is not efficient to allow the server to continue storing session objects that are no longer valid within an application. To address this problem, a procedure was implemented in which a session listener object listens for session-destroyed events and, as soon as one occurs, extracts the session ID of the user who initiated the session-destroyed event. The application thereafter deletes all preserved session objects belonging to the user with the extracted session ID. If a user’s network connection breaks down abruptly but the browser remains open, the session can later be resumed and no session information will have been lost during the interruption. The user’s browser must not be closed, because it stores the session information required to reconnect to the framework and resume the old session. Since the state of an application in JSF is stored on the server, session information cannot be lost when connectivity problems occur on the client side.
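The cleanup procedure can be sketched as follows. This is a minimal illustration in Python; in the actual JSF implementation the cleanup logic lives in a session listener reacting to session-destroyed events, and all class and method names here are hypothetical.

```python
class SerializedBeanStore:
    """Server-side store for serialized Managed Bean state, keyed by session ID."""

    def __init__(self):
        self._beans = {}  # session_id -> {bean_name: serialized_bytes}

    def preserve(self, session_id, bean_name, data):
        self._beans.setdefault(session_id, {})[bean_name] = data

    def restore(self, session_id, bean_name):
        # Serialized data is deleted only once the bean is requested again,
        # mirroring the deletion rule described above.
        return self._beans.get(session_id, {}).pop(bean_name, None)

    def on_session_destroyed(self, session_id):
        # Called by the session listener: drop every preserved object
        # belonging to the destroyed session so no stale state accumulates.
        self._beans.pop(session_id, None)

    def has_state(self, session_id):
        return bool(self._beans.get(session_id))
```

A session listener would simply call `on_session_destroyed` with the extracted session ID, so orphaned state never outlives its session.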
The second proposed measure (see Section 4.3.2) is applicable if a direct alteration of the application’s source code is not feasible but the utilized application server and framework are under full control. Such situations mainly arise if the operator of the application is a different entity than the application’s developer. This applies, for instance, to third-party components, closed-source applications, or legacy applications whose original author left the company a long time ago. The described approach provides reliable protection, as every authentication process causes the framework to renew the SID value. Furthermore, the approach is very lightweight. This stems from two characteristics: For one, the mechanism is completely stateless; it does not require temporary storage of any data, as it only reacts to incoming password parameters. Furthermore, it is closely integrated with the existing framework infrastructure. Consequently, there is no need to execute any complex operations on its own – all the hard work, such as parsing the HTTP headers and parameters, is done by the application framework. These characteristics result in a runtime behavior that, at least if implemented in the form of a J2EE filter, does not cause noticeable performance overhead. Finally, as the mechanism operates completely transparently to the application, patching an existing application is easy and straightforward. The main drawback is that the implementation of the countermeasure is specific to an application framework. The protection might be lost and has to be reintroduced if changes are made to the runtime infrastructure, such as exchanging the underlying application server – a characteristic that does not apply to handling session fixation directly at the source code level.
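A minimal sketch of such a stateless filter, written in Python for illustration (the actual countermeasure is a J2EE servlet filter); the dictionary-based request representation and the parameter name are assumptions:

```python
import secrets

def session_fixation_filter(request, handler):
    """Renew the session ID whenever the request carries a password parameter."""
    if "password" in request.get("params", {}):
        # Authentication attempt detected: discard the old SID so a session ID
        # fixated by an attacker before login becomes worthless.
        request["session_id"] = secrets.token_hex(16)
    return handler(request)
```

Because the filter only inspects the incoming password parameter and keeps no state of its own, it adds virtually no per-request overhead, which reflects the lightweight character described above.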
Targeted RF-induced heating is based on constructive and destructive interference of electromagnetic (EM) waves transmitted with a multi-channel RF applicator. To achieve precise formation of the energy focal point, accurate thermal dose control and safety management, the frequency, amplitude and phase of the transmitted RF signals need to be regulated in real-time. Thus, the RF signal source is the key component for facilitating appropriate frequency, amplitude and phase settings of the RF signals. The radiation pattern of the single RF transmit element, the RF channel count and the RF frequency of the RF applicator are of high relevance for ensuring a patient- and problem-oriented adaptation of the size, uniformity and location of the RF energy deposition in the target region [19,21,22,24,30]. The (re)design of multi-channel RF applicator configurations showed a more than twofold enhancement of the RF power focusing capability when increasing the number of RF antennae from 12 to 20 [31,32]. Increasing the number of RF antennae resulted in higher RF power absorption and enhanced tumor coverage ratios in deep-seated brain tumors in children [33]. The optimal operating RF frequency depends on the RF applicator characteristics and the target tissue parameters [34]. Lower RF frequencies focus EM energy onto larger regions and have lower energy losses inside and outside tissue. Higher RF frequencies facilitate focusing EM energy onto small targets. Numerical simulations and evaluation studies investigated the optimal RF frequency [24]. For regional hyperthermia, an improvement of the RF power absorption in the target region versus regions outside the target was demonstrated when increasing the RF frequency from 100 MHz to 150 MHz and 200 MHz [35,36]. The optimal heating frequency was examined for seven tumor locations using RF frequencies ranging from 400–900 MHz [37].
For superficial tumors, the highest average specific absorption rate (aSAR) was obtained with higher frequencies, where aSAR was further improved by increasing the number of RF antennae. For deep-seated tumors, the highest aSAR was reported for lower frequencies. Studies on ultimate SAR amplification factors and RF applicator concepts suggested the use of high frequencies up to 1 GHz for a highly focused EM energy deposition [21,30]. Time-multiplexed beamforming, a mixed-frequency approach and multi-frequency SAR focusing provide further directions for optimizing RF heating performance [38–40]. Recently, an iterative multiplexed vector field shaping (MVFS) approach was introduced to solve the time- and frequency-multiplexed problem of constrained RF-induced hyperthermia [24]. This work underlined the need for wideband signal generators by demonstrating the contribution of distinct frequencies to the RF heating and by showing that these frequencies and contributions depend on the target geometry.
Feed readers aggregate all of this content into a simple, easy-to-view application, and do not intrude on your productivity tools, such as e-mail. Most feed readers have the same look and feel as e-mail applications or newsgroup readers, with folders on the left and content on the right. The folders on the left might represent different Web sites or different information channels. If you are an active blog reader, the folders represent each blog. It is inefficient to revisit a blog site multiple times a week to find out whether an author has posted new content; it's best to have that content delivered to you.
Finding adequate base software and modifying it to incorporate the CycurHSM application data in program memory has been a core challenge of the thesis. This step required solid know-how of the microcontroller’s hardware design, the application’s linking process and the fundamental software concepts involved in setting up a new process on a bare-metal controller where no operating system is available. This thesis touches on a wide range of subjects: the particularities in the development of embedded software, aspects of security, network communication and protocol design, Continuous Integration and automated tests, containerization, the development of a PC client in a high-level language and finally software standards such as AUTOSAR. This is reflected in the variety of programming and formal languages used: software is written in C, Python, Groovy, Ruby, Bash and the GCC linker command language; in addition, description languages such as ASN.1 and the Dockerfile syntax are employed.
With the prototype, stroke patients are now able to set their own goals with the help of a web application, either in collaboration with a health professional or by themselves. How patients can use the prototype was one of the research questions for this thesis. However, even during the design process and the subsequent implementation, it became clear that the result might lead to a complex user interface. The use of the OLD@HOME structure - consisting of the components problem, goals, activities and outcome as well as the appropriate connections between those components - requires the user to create and link those components in the user interface. Despite this assumption, the results of the usability tests indicated that the prototype is feasible for stroke patients. Most of the patients stated that the prototype seemed complicated but admitted that the software would become easier after they had used it a few times. The use of the same color for connected components made it simpler for patients to understand the relation between the different OLD@HOME components. The input of the components via the user interface did not pose a major difficulty. Most of the operating elements were located properly, and the input of the components with the help of the different wizards was achieved after a certain amount of time. Small fonts or similar button captions ("arkivet" and "aktivitet") resulted in problems. The use of different icons and font sizes for control elements could avoid such confusion [Wiedenbeck, 1999]. With the visualization of the goal progress via the GAS through a line chart, the patient is now able to see his achievements directly, and not just through the reports of his therapists, which could result in further motivation for the patient. However, the evaluation showed that all patients encountered problems in distinguishing between a simple and a SMART goal at the first attempt.
To avoid this, a pure presentation that shows just the chart, with hints at the progress, might be a better solution.
A similar approach might be useful to evaluate dynamic web pages. Before testing the web page, a test plan consisting of the possible states of a web page and the possible transitions from each state is created. Ideally, this plan should be created automatically. In its current version, the web-a11y-auditor only tests single pages, but not complete sites. The state model approach described for testing dynamic pages might also be useful for testing sites. In this case, the states are the individual pages (identified by a unique URL) of the site. The links for changing to another page are the state transitions. Web pages that are part of a website often contain repeating components. One possible approach for minimizing the work necessary for evaluating large websites is to evaluate not full pages but the components from which these pages are built. An open question is whether the accessibility of components may depend on the context in which the components are used or whether the accessibility of a component is independent of its context. It would be useful to integrate accessibility checks into the workflows of web developers to support them with the creation of accessible web pages and applications. One possible approach is the integration of accessibility checks into a continuous integration pipeline. One option for this could be a tight integration with Selenium. Selenium and other similar frameworks are already used by many projects for automatically testing the user interface. A possible integration of accessibility checks would provide a single method or function that can be called inside a test case. This function checks the accessibility of the current state of the evaluated page. If possible, this function should only check the changed regions of the evaluated page. The results are recorded.
For tests that cannot be done automatically, the test tool would add a note to the report indicating that a manual test is necessary.
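The proposed single check function could look roughly as follows. This is a hypothetical Python sketch with one automatic rule and one manual-test note; a real implementation would run a full rule set (e.g. via an engine such as axe-core) against the live DOM rather than matching HTML text:

```python
import re

def check_accessibility(html, report):
    """Run automatic checks on a page state and append findings to `report`."""
    # Automatic rule: every <img> element needs an alt attribute.
    for tag in re.findall(r"<img\b[^>]*>", html, re.IGNORECASE):
        if "alt=" not in tag.lower():
            report.append(("error", "img element without alt attribute"))
    # Rule that cannot be verified automatically: record a manual-test note.
    report.append(("manual", "verify reading order matches visual order"))
    return report
```

A Selenium test case would call such a function after each interaction that changes the page state, accumulating one report for the whole run.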
Model calibration, although indispensable (Boumans et al., 2001; Oliva, 2003; Straatman et al., 2004), is not integrated in most of the available modeling frameworks. In the SITE framework, calibration functionality is implemented in an integrated system component. In the current version, only genetic algorithms are available, but the component can be extended to house additional methodologies. Calibration algorithms used by SITE aim to find an optimal or adequate solution for an arbitrarily defined parameter set (defined in the application script) based on an objective function. The objective function, in turn, can be freely selected from another system component (ModelTest), which provides a selection of map comparison algorithms. This design enables model operators to freely combine optimization algorithms and algorithms for objective functions. Apart from the process of parameter selection, which requires expert knowledge of the underlying rule set, SITE is capable of automated rule set calibration. The component that implements the different map comparison algorithms can also be used independently from the calibration as an integrated tool for model tests based on map comparison methodologies. The SITE calibration methodology seamlessly interacts with the generic modeling functionality and integrated models, so it can be used for all applications that are operated within SITE. Moreover, the calibration methodology is not restricted to the land-use model, but also allows different integrated models to be calibrated simultaneously. The explicit representation of scenarios in SITE is a further innovation in the field of land-use modeling frameworks. Performing a simulation in SITE always implies using the underlying model rule set in combination with a quantified scenario. Model rule set and quantified scenario are separate instances.
This concept allows simulation runs under different scenarios without having to edit model code, thus improving system handling and facilitating maintenance. The possibility of interactively handling and altering scenarios based on an analysis of interim simulation results made it possible to overcome a major limitation of scenario analysis (Alcamo et al., 2006).
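The free combination of optimization algorithm and objective function can be sketched as follows. This is an illustrative Python sketch, not SITE's actual API; a simple random search stands in for the genetic algorithm, and in practice the objective would be one of the map comparison methods provided by ModelTest:

```python
import random

def calibrate(parameter_bounds, objective, optimizer, **opts):
    """Run `optimizer` over the parameter set, scoring candidates with `objective`."""
    return optimizer(parameter_bounds, objective, **opts)

def random_search(bounds, objective, iterations=200, seed=0):
    # Stand-in for the genetic algorithm currently shipped with the framework:
    # sample parameter sets uniformly and keep the best-scoring one.
    rng = random.Random(seed)
    best, best_score = None, float("inf")
    for _ in range(iterations):
        candidate = {p: rng.uniform(lo, hi) for p, (lo, hi) in bounds.items()}
        score = objective(candidate)  # lower is better (map disagreement)
        if score < best_score:
            best, best_score = candidate, score
    return best, best_score
```

Swapping in another optimizer or another map comparison metric only means passing a different function, which is the point of the design.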
For the albedo, i.e., the variation of dark and bright surface areas, we made use of the TES albedo map provided by the USGS and applied a low-pass filter before performing a threshold classification using boundaries based on visual inspection. As the surface of Mars shows only some variation of albedo, as discussed above, it can easily be broken down into no more than five contiguous areas that could be considered representative. Based on the average areal lightness we experimented with pattern fills of various line pattern densities. The challenge we encountered here is not related to technical implementability in principle but to the problem that dynamic zooms require a clever adaptation of fill pattern densities, which is at this stage not done automatically by CARTO. In order to accomplish a dynamic adaptation, a set of various fill pattern representations would have to be created. This work has, however, not been completed yet. At small scales and given print resolution, patterns are not well discernible (cf. Figure 1a, c).
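The processing chain of low-pass filtering followed by threshold classification can be sketched as below. The 3x3 mean kernel and the class boundaries are illustrative placeholders; the actual boundaries were chosen by visual inspection of the TES map:

```python
def low_pass(grid):
    """3x3 mean filter over a 2D albedo grid; edge cells keep their value."""
    h, w = len(grid), len(grid[0])
    out = [row[:] for row in grid]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            out[y][x] = sum(grid[y + dy][x + dx]
                            for dy in (-1, 0, 1) for dx in (-1, 0, 1)) / 9.0
    return out

def classify(grid, boundaries=(0.15, 0.25)):
    """Map smoothed albedo values to class indices (0 = dark ... bright)."""
    return [[sum(v > b for b in boundaries) for v in row] for row in grid]
```

Smoothing first suppresses small-scale albedo noise, so the thresholding yields the few contiguous areas described above instead of speckled classes.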
in R², thus making it possible to use it as a view of a dynamically generated affine subspace (like the intersection or join of other affine spaces). But let us first consider that it displays a regular (non-degenerate) plane. In this case, we need three vector displays: one displaying the origin vector and two for the direction vectors. This is simply done by adding three MMNumberMatrixPanels, the standard display component for a vector of any dimension, but of course also for any number matrix of arbitrary form. These panels in turn contain all entries as MMNumberPanels, the symbolic representation for numbers of any type. A simple update mechanism using the Java property support ensures that changes performed by the user are recorded by the master MMObject. On the other hand, if the mathematical state of the MMObject changes (e.g. by updating or user interactions on the graphical level), the changes are immediately displayed, allowing the user to continuously watch the symbolic perspective of his actions.
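The update mechanism can be sketched as a plain observer pattern. The Python sketch below is only an analogy for the Java property-change support actually used; the class name is hypothetical:

```python
class MMObjectSketch:
    """Toy master object holding vector entries and notifying its displays."""

    def __init__(self, entries):
        self._entries = list(entries)   # mathematical state (vector entries)
        self._listeners = []            # display panels kept in sync

    def add_listener(self, callback):
        self._listeners.append(callback)

    def set_entry(self, index, value):
        # Fired both for user edits on a panel and for programmatic updates;
        # every registered display immediately re-renders the new state.
        self._entries[index] = value
        for listener in self._listeners:
            listener(index, value)

    @property
    def entries(self):
        return tuple(self._entries)
```

In the real implementation, the panels register as property listeners on the master MMObject, so user edits on any panel and updates on the graphical level flow through the same event path.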
Our two basic criteria for calibration parameter selection turned out to be not entirely sufficient, so additional aspects were considered. Further constraints for calibration parameter selection are given by the rule set semantics. It might appear promising to select both parameters P4 (max. dist. between settlement and cocoa cells) and P15 (crop suitability reduction based on population density) as the ones with the strongest impact on model outcome. However, the sensitivity progression analysis for P4 (see figure 5.5) showed a clear maximum for most classes at 6000 m, a result which was supported by empirical evidence from spatial analyses of the reference maps of 1981 and 2002. Parameter P1 (max. dist. between settlement and paddy cells) was not calibrated for the same reasons as P4. Parameter P12 (crop neighbor weight) also seemed to be a good candidate, but test simulations had shown that high values of P12 favored the formation of artificial cell clusters of identical land-use, clearly not corresponding with the land-use patterns of the 1981 and 2002 reference maps. Parameter P9 (protection factor) caused smaller variations with respect to the objective function for the entire area. However, P9 is the trigger for determining the protection status of the national park, which is of specific interest for our analysis and could not be determined empirically. P10 (weight applied to biophysical suitability) is also of specific interest, since it weights the biophysical suitability factors against the socio-economic factors and thus provides important information about the contribution of each factor group to the simulated land-use decisions in the suitability calculations.
In this work, a novel soft computing-based controller concept is developed. The main aim of the concept is to enhance the performance of multi-input and multi-output (MIMO) system control. The structure of the introduced soft computing-based control of complex mechanical systems is shown in figure 4.1. The structure consists of two modules: a modeling module and a control module. The modeling module has two functions: the first is to capture the unknown dynamics of the targeted system ẋ = f(x, u). The resulting model is then used in the control module. The second is to build an observer that estimates the internal states of the unknown system from its outputs. This observer is applied directly in the control loop. The control module is responsible for generating a suitable control law, using the model acquired by the modeling module, so that the system follows the desired trajectory. Once the control law is acquired, it is applied, thus closing the control loop. It is assumed that the system is stable, fully controllable, and fully observable. Additionally, it is assumed that the internal states are measured for the time period T, allowing the modeling module to build both the observer and the dynamic model.
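The interplay of the two modules can be sketched with scalar stand-ins, here in Python. The plant, observer and control law below are deliberately trivial placeholders for the soft computing components: in the actual concept, the modeling module would capture the unknown dynamics and supply the observer, and the control module would generate the control law from the learned model:

```python
def run_closed_loop(reference, x0=0.0, steps=200, dt=0.05, gain=4.0):
    """Drive the toy plant dx/dt = -x + u toward `reference`
    with the proportional law u = gain * (reference - x_est)."""
    x = x0
    for _ in range(steps):
        x_est = x                        # observer: full-state measurement here
        u = gain * (reference - x_est)   # control module: generated control law
        x += (-x + u) * dt               # plant: the 'unknown' dynamics f(x, u)
    return x
```

With this purely proportional placeholder law, the loop settles at a steady state with a residual offset (0.8 for a reference of 1.0 and gain 4), which a properly generated control law would eliminate.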
advantages, which are presented in this section. The devices allow the teacher to include multimedia content, such as audio files, videos or animations, into their lectures. This can benefit the learning process, as it becomes easier to explain complex processes. The usage of tablets and smartphones often has a positive effect on the learning process of the participants. This effect can occur in a one-to-one scenario, where each participant receives one device, and in a many-to-one scenario, with multiple participants sharing a device. When all participants of a workshop have their own device, electronic assessments are possible, which could give the lecturers immediate feedback on the learning process of the participants. At the same time, an electronic assessment reduces the time the lecturers need for the correction of the tests. Through assessments or logging of the user interaction with the devices, a detailed insight into the learning behaviors of the participants is possible. This allows the lecturers to review the learning materials and improve them. Existing learning materials can be reused and adapted in different contexts.
This thesis also studies three particular aspects of (modal) specification theories. The first aspect concerns the verification of refinements. For finite MIOs with data constraints involving infinite variable domains, modal refinement is in general undecidable. We propose predicate abstraction to derive over- and under-approximations for concrete and abstract specifications, respectively, such that refinement between approximations (which is decidable) implies refinement between the original specifications. Second, we introduce modal refinement distances for K-weighted MIOs. Modal refinement distances are a generalization of strong modal refinement and measure how close a K-weighted MIO is to refining another one by taking into account distances on weights. Third, we propose a contract approach for interface specifications that explicitly distinguishes between assumptions on the environment and guarantees of a component, strictly following the principle of separation of concerns. We study the relation between specification theories and contract theories in an abstract setting, and we show how a contract theory can be built in a generic way on top of any specification theory. We identify behaviour and environment semantics of contracts, which are the basis for further definitions of contract refinement and contract composition. The latter raises the problem of finding most permissive assumptions such that the mutual assumptions of composed contracts are satisfied. For complete specification theories supporting quotient, conjunction and a maximal environment operator, we show that a constructive definition of contract composition can be given. The generic contract framework is instantiated for strong specification theories based on deterministic MIOs and MIODs. In particular, we show that deterministic MIOs with strong modal refinement and strong environment correctness form a complete specification theory.
Each line is indented according to the depth of the message sends. Methods can be collapsed if they are of no interest (and of course expanded as well). Methods can be highlighted so they can be remembered easily. We can step through the highlighted lines. For the receiver and the return value of the selected method, the object history can be viewed in the object history view (4) via a context menu. This view always selects the current method in the trace. It can also change due to interaction with another view. Object views (2). There are two views displaying objects according to the currently selected method in the method trace: one displays the receiver of the method and the temporary variables (on the left side; the first line represents the receiver, beneath it the temporary variables with their variable names), the other one the passed arguments (on the right side, with the arguments' names). If an object is instrumented, it can be expanded to display the instance variables. Thus each line represents an object: these lines can be inspected or used for the object history via the context menu. If an object is an instance variable of an instrumented object, the setter methods of this instance variable and the object it belongs to can be highlighted in the method trace using the context menu. This makes it possible to quickly see where this instance variable changed. We can step through these highlighted methods in the method trace, or use the stepping functions provided by the UI: step to the next/previous/first/last value of this instance variable to navigate through the variable's assignments.
Microsoft .NET. Microsoft .NET is a successor to COM built on a common runtime for so-called managed code. The runtime has control over all aspects of execution, including, in particular, loading and linking of components (called assemblies). Assemblies can be addressed by their file name relative to the application's installation location, which means that a componentized application can be installed by just copying a directory tree (termed XCOPY deployment, alluding to a DOS recursive file copy command). Assemblies can be addressed by any number of the following constituents: name, locale, processor architecture, and version number. Also, assemblies can be signed by a cryptographic key (called a strong name); in this case, the linker will fail unless it finds the exact same assembly at run time. Assemblies can be installed into the global assembly cache, or they can be located using a search path. Every application domain (similar to component managers, but with strong isolation guarantees) can have its own search path. Application programmers can define custom loaders, but do not have full control: the default assembly loader is always tried first; only if it fails will the user have a chance of delegating to his own loading behavior.
different type of routing; Rails embraces the DRY principle; Ruby on Rails includes all the components necessary to build complete web applications from the database forward (even including a pure-Ruby web server for those who wish to develop immediately without setting up a web server such as Apache), providing object-database linkage, a Model-View-Controller framework, unit and functional testing tools, out-of-the-box support for AJAX and CSS, support for multiple template systems, multi-environment deployments, support for automated local and remote deployments, inbound and outbound email support, web services support, etc. Ruby on Rails also has disadvantages. One potential constraint is that Ruby is relatively slow compared to other programming languages. Rails has many techniques to compensate for the slowness of Ruby in most scenarios, but those techniques require more resources. A second constraint is security. There are well-established tools, libraries, and techniques in corporate environments that have leveraged C++/Java for a long time. Ruby on Rails is just too new and a bit immature to be able to challenge them yet. Another disadvantage is the lack of complete documentation online. Some of the generated rubydocs will just contain the name of the method, compared to php.net/sample-function-name-here with lots of comments and tutorials, or the javadocs from Sun. For more information please look at the documentation. PHP is a popular scripting language originally designed for producing dynamic web pages; it scales well and offers many third-party components. There are a number of MVC frameworks (including CakePHP and Zend), but none seem to be a standard. Another reported advantage of PHP over other languages is the availability of a large pool of developers, which leads to very large community support. Like Ruby, PHP has a very fast development cycle. PHP has no formal specification.
The main purpose of the Rosetta 3dtool is supporting mission planning, which requires numerous interactions between RSGS and the scientific community and between RSGS and RMOC. Due to the web-based approach, 3dtool users are not required to install additional software packages on their computers. Furthermore, maintenance and updates are centralized on a server infrastructure. Thus, new features and case studies are made available to all users at the same time. In contrast to locally installed software with comparable functionality, much shorter turn-around times are achieved. Ultimately, these advantages lead to a very low threshold for the user.