graphical user interface

A model-driven approach for graphical user interface modernization reusing legacy services

For designing the static layout of graphical user interfaces, a variety of sketching tools exist, such as WindowBuilder [6] or SwingGUIBuilder [16]. In addition, research approaches specify the static and dynamic parts of graphical user interfaces in a technology-agnostic way, in accordance with the Object Management Group's (OMG) Model-Driven Architecture (MDA) [9]. While the former tools are technology-specific and layout-oriented, solutions from the second category provide no support for integrating existing services. Further, the GUI modeling approach introduced by Vanderdonckt [22] requires a variety of models to be compliant with the MDA approach. Modelers therefore have to oversee many models at different levels of abstraction and ensure their consistency to achieve reasonable results. Moreover, existing approaches often stop at the user interface level and allow only limited connections to existing services and data types; data binding and service calls must consequently be implemented manually. In contrast, the approach introduced in this paper provides a single graphical user interface model with means to describe the static and dynamic parts of the software solution. Additionally, production-proven services and data types establish the foundation for retrieving, processing, and storing business application data.
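
A minimal sketch can make the single-model idea concrete: one declarative description carries the static layout, the dynamic behavior, and the bindings to existing services. The model format and the CustomerService operation below are illustrative assumptions, not the paper's notation.

# Sketch of a single, declarative GUI model combining static layout,
# dynamic behavior, and bindings to existing (legacy) services.
# All names (Widget, CustomerService, interpret) are illustrative.
from dataclasses import dataclass, field

@dataclass
class Widget:
    kind: str                      # e.g. "textfield", "button", "table"
    binding: str | None = None     # path into a service data type
    on_click: str | None = None    # service operation to invoke
    children: list["Widget"] = field(default_factory=list)

# Static part: the layout tree. Dynamic part: on_click service calls.
ui_model = Widget(kind="form", children=[
    Widget(kind="textfield", binding="Customer.name"),
    Widget(kind="table", binding="Customer.orders"),
    Widget(kind="button", on_click="CustomerService.save"),
])

def interpret(widget: Widget, indent: int = 0) -> None:
    """Walk the model; a real generator would emit platform code."""
    label = widget.binding or widget.on_click or ""
    print(" " * indent + f"<{widget.kind}> {label}")
    for child in widget.children:
        interpret(child, indent + 2)

interpret(ui_model)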

Making Traffic Visualization Movies by Scripting a Graphical User Interface

SUMO (Simulation of Urban Mobility, [1], [2]) is a microscopic road traffic simulation developed at the Institute of Transportation Systems at the German Aerospace Center. SUMO's development started in 2001, with a first version released for open use in 2002. Since then, SUMO has evolved to a mature state and become a popular simulation tool, mainly used in academic contexts for research on vehicular communications (V2X); see also [3]. SUMO has a built-in graphical user interface (GUI), based on a 2D OpenGL visualization [4] embedded in a FOX Toolkit [5] window.
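
SUMO's GUI can itself be scripted through the TraCI Python API, which is one way to produce the per-step frames of a visualization movie. The sketch below drives sumo-gui, captures a screenshot per simulation step, and leaves video assembly to an external tool; the configuration file name and step count are placeholders.

# Sketch: scripting sumo-gui via TraCI to dump one screenshot per
# simulation step; frames can be assembled into a movie afterwards,
# e.g. with: ffmpeg -i frames/frame_%05d.png traffic.mp4
import os
import traci

os.makedirs("frames", exist_ok=True)
traci.start(["sumo-gui", "-c", "scenario.sumocfg", "--start"])
view = traci.gui.DEFAULT_VIEW          # usually "View #0"

for step in range(600):                # placeholder: 600 steps
    traci.simulationStep()
    # Queue a screenshot of the current view for this step.
    traci.gui.screenshot(view, f"frames/frame_{step:05d}.png")

traci.close()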

Development of a graphical user interface for X-ray simulation of computed tomography images

One of the main aspects to consider in developing the application presented in this work is how the newly developed graphical user interface classes and code integrate with the already implemented command-line application facilities, namely the algorithms involving the Radon transform, the filtered back projection, and the analysis of the regions of interest. These algorithms constitute a large portion of the application's domain logic. Consequently, the software design must focus mainly on the presentation logic and on the complementary adaptations the existing code must undergo.
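
The division of labor described here, reusable domain logic behind a thin presentation layer, can be sketched in a few lines. The code below is not the thesis's implementation; it only illustrates the kind of command-line domain logic (Radon transform, filtered back projection) a GUI would wrap, using scikit-image.

# Sketch of the domain-logic layer a CT-simulation GUI would wrap:
# forward projection (Radon transform) and filtered back projection.
# The GUI layer would only collect parameters and display the arrays.
import numpy as np
from skimage.data import shepp_logan_phantom
from skimage.transform import radon, iradon

def simulate_sinogram(image, n_angles=180):
    theta = np.linspace(0.0, 180.0, n_angles, endpoint=False)
    return radon(image, theta=theta), theta

def reconstruct(sinogram, theta):
    return iradon(sinogram, theta=theta, filter_name="ramp")

phantom = shepp_logan_phantom()
sino, theta = simulate_sinogram(phantom)
recon = reconstruct(sino, theta)
print("mean reconstruction error:", float(np.abs(recon - phantom).mean()))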

A graphical user interface for flight control development

Advanced graphical user interface technology, in combination with interactive result visualization, supports the user during all design phases: modeling, A/C analysis, tuning, and compromise.

An Interactive Graphical User Interface for Maritime Security Services

… of maritime accidents, marine pollution from vessels, and the loss of human life at sea. Satellite data from six systems – namely WorldView-1, WorldView-2, QuickBird, GeoEye-1, IKONOS, and EROS-B – with ground sampling distances below 1 m form the basis for NRT image delivery to EMSA in less than one hour. In addition, value-adding services are provided to aid EMSA in delivering timely information on vessel locations and activities to EU member states. The value-added services use a semi-automatic operational procedure to deliver results within one and a half hours of image acquisition. To fulfil the demanding performance requirements, a highly parallel processing chain, including the optimized Graphical User Interface (GUI) presented in this article, has been established. The processing chain includes image preparation (e.g. systematic correction, orthorectification), data transcription, Automatic Vessel Detection (AVD), GUI-based Interactive Vessel Detection (IVD) and refinement as well as activity identification, and finally delivery of standardized products to EMSA.
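
The processing chain reads naturally as a staged pipeline. The stage functions below are placeholders (the article does not publish code); the sketch only illustrates how such a chain, automatic detection followed by GUI-based interactive refinement, can be composed and parallelized across incoming scenes.

# Sketch of a staged NRT processing chain: each scene flows through
# preparation, Automatic Vessel Detection (AVD), GUI-based Interactive
# Vessel Detection (IVD), and delivery. Stage bodies are placeholders.
from concurrent.futures import ProcessPoolExecutor

def prepare(scene):         # systematic correction, orthorectification
    return {"scene": scene, "prepared": True}

def detect_vessels(s):      # AVD: automatic candidate detection
    s["candidates"] = ["vessel_1", "vessel_2"]
    return s

def interactive_review(s):  # IVD: operator confirms/refines in the GUI
    s["confirmed"] = s["candidates"]
    return s

def deliver(s):             # standardized product to EMSA
    return f"{s['scene']}: {len(s['confirmed'])} vessels reported"

def process(scene):
    return deliver(interactive_review(detect_vessels(prepare(scene))))

if __name__ == "__main__":
    scenes = ["scene_A", "scene_B"]    # placeholder scene identifiers
    with ProcessPoolExecutor() as pool:
        for report in pool.map(process, scenes):
            print(report)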

Comparison of a 2D- and 3D-Based Graphical User Interface for Localization Listening Tests

Localization errors in the participants' responses were slightly smaller when the 3D-based GUI was used. However, the differences in the localization error and the normalized localization error were not significant. Nevertheless, the similar results are interesting, considering that participants reported getting along much better with the 2D-based GUI. The linear regression model revealed that, for predicting the localization error, other effects are much more relevant than the GUI. The effect size of the GUI type was small in the model and likewise not significant. Loudspeaker position and signal type influenced localization the most; this was expected, since these effects are known from established research. There are non-significant indications that training is important for reporting localization via graphical user interfaces: according to the model, the localization error was reduced for whichever GUI was used last.
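
A regression of this shape is easy to reproduce. The column names and toy values below are assumptions (the paper's data are not published); the sketch only shows how a linear model with GUI type, loudspeaker position, and signal type as categorical predictors can be fit and inspected with statsmodels.

# Sketch: linear model of localization error with categorical
# predictors, mirroring the analysis described above. The data
# frame is a toy example; column names are illustrative.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.DataFrame({
    "gui": ["2D", "3D"] * 4,
    "speaker_pos": ["front", "front", "side", "side"] * 2,
    "signal": ["noise"] * 4 + ["speech"] * 4,
    "error_deg": [4.1, 3.8, 9.5, 9.0, 5.2, 5.0, 11.3, 10.9],
})

model = smf.ols("error_deg ~ C(gui) + C(speaker_pos) + C(signal)",
                data=df).fit()
print(model.summary())    # effect sizes and p-values per predictor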

GUI - Graphical User Interface

The window system handles all screen management, in particular of the windows and graphical objects. It routes input from the keyboard, mouse, and all other input devices to the respective applications and positions their data, text, and graphics within the assigned display areas. Beyond these elementary functions, modern window systems usually also provide applications, via the so-called API (Application Programming Interface), with prefabricated interaction objects such as various buttons, window frames, menu forms, or scrollbars, so that different applications under the same window system exhibit the same "look" while still being able to differ in "feel". The X Window System, for example, is designed as a pure window system: it builds on UNIX and provides only the most elementary functions. Usually, however, window systems ship together with a desktop program and are closely interwoven with it, as with MS-Windows 3.X.
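
The point about prefabricated interaction objects is easy to demonstrate with any modern windowing API. The Python/Tkinter sketch below (an illustration chosen here, not part of the original text) composes ready-made widgets supplied by the toolkit, which also performs the event routing:

# Minimal sketch: the toolkit supplies ready-made interaction
# objects (frame, scrollbar, listbox, button); the application
# only composes them and reacts to events.
import tkinter as tk

root = tk.Tk()
root.title("Prefabricated widgets")

frame = tk.Frame(root)
frame.pack(fill="both", expand=True)

scrollbar = tk.Scrollbar(frame)          # ready-made scrollbar
scrollbar.pack(side="right", fill="y")

listbox = tk.Listbox(frame, yscrollcommand=scrollbar.set)
for i in range(50):
    listbox.insert("end", f"item {i}")
listbox.pack(side="left", fill="both", expand=True)
scrollbar.config(command=listbox.yview)

tk.Button(root, text="Quit", command=root.destroy).pack()
root.mainloop()                          # event routing by the toolkit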

Robot Flow Control - Development of a Modular, Graphical User Interface

Furthermore, a new feature to rearrange individual elements was implemented. It makes it easier for users to organize the control elements according to their needs. To gain more flexibility, this is used in the left and right sidebars, which provide additional information about the selected state as well as general information (e.g. state machine libraries, history). The logging view has been extended with a feature to choose which information (debug, info, warning, and error) should be logged, which makes it easier to find important logging output. Additionally, a new toolbar was added next to the graphical editor, containing the execution control buttons (start, pause, stop).
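
Level-based filtering of a logging view, as described here, is a standard pattern. A minimal sketch with Python's logging module (illustrative, not the tool's actual code):

# Sketch: a log view that lets the user toggle which severities
# (debug/info/warning/error) are shown, via a logging.Filter.
import logging

class LevelToggleFilter(logging.Filter):
    def __init__(self):
        super().__init__()
        self.enabled = {logging.DEBUG, logging.INFO,
                        logging.WARNING, logging.ERROR}

    def filter(self, record):
        return record.levelno in self.enabled   # show chosen levels only

view_filter = LevelToggleFilter()
handler = logging.StreamHandler()
handler.addFilter(view_filter)
log = logging.getLogger("robot_flow")
log.setLevel(logging.DEBUG)
log.addHandler(handler)

log.debug("state entered")          # shown
view_filter.enabled.discard(logging.DEBUG)
log.debug("hidden from the view")   # filtered out
log.error("execution failed")       # still shown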

Entwicklung und Evaluation eines Graspable User Interface für ein arbeitswissenschaftliches digitales Menschmodell

[Figure 5.4: Modifying a parametrically generated motion via an intermediate pose]

Complex motions, such as those performed during assembly operations in a vehicle interior, or the handling of specially developed equipment, cannot yet be described parametrically (see 2.2.4). Such operations can be described with the keyframing method. In principle, the individual poses can be created by directly manipulating the human model using forward or inverse kinematics. Instead of the mouse-based graphical user interface, this can in turn be performed with the Human Input Device. Poses can also be generated parametrically: the type of action, such as reaching, is specified, and the target object to be reached is defined; from the entirety of these inputs, a target pose is generated, which enters the motion as a keyframe. Any intermediate poses are not generated and must be created by directly manipulating the human model. Furthermore, poses can be created from posture libraries (see 2.2.3); these, however, are not described parametrically. It is the exception rather than the rule that a pose from the posture library suits the individual, varying boundary conditions of complex motions.
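
Keyframing as described here amounts to interpolating between stored poses. The sketch below is a deliberately simple illustration (the thesis's human model is far richer): joint angles are interpolated linearly between two keyframe poses to generate in-between frames.

# Sketch: generating in-between frames from two keyframe poses by
# linear interpolation of joint angles (degrees). A digital human
# model would impose kinematic constraints on top of this.
import numpy as np

def interpolate(pose_a, pose_b, steps):
    frames = []
    for t in np.linspace(0.0, 1.0, steps):
        frames.append({joint: (1 - t) * pose_a[joint] + t * pose_b[joint]
                       for joint in pose_a})
    return frames

reach_start = {"shoulder": 10.0, "elbow": 20.0, "wrist": 0.0}
reach_end = {"shoulder": 80.0, "elbow": 45.0, "wrist": 15.0}  # keyframe

for frame in interpolate(reach_start, reach_end, steps=5):
    print({joint: round(angle, 1) for joint, angle in frame.items()})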

Two-arm robot teleoperation using a multi-touch tangible user interface

For controlling the robot arms during teleoperation, a combination of physical objects and virtual user interface elements is used. Motion control of the manipulator robot is realized with the 3D mouse SpaceNavigator by 3Dconnexion. This 6-DOF mouse can be used to intuitively control Cartesian movement and rotation of the robot end effector. The SpaceNavigator also serves as a tangible user interface element: when it is placed somewhere on the teleoperation UI (4), a Surface tag stuck to its bottom is recognized by the application. The user interface control shown in Fig. 5 is displayed and, at the same time, movement of the manipulator robot arm is enabled. From then on, the SpaceNavigator controls the robot movement. The displayed control provides additional selectable options, in particular switching from direct mode to observer mode and vice versa.
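
Mapping a 6-DOF device to Cartesian end-effector motion typically means scaling the device axes into a velocity (twist) command, gated by whether movement is currently enabled. The sketch below is a generic illustration with assumed axis ordering, gains, and deadzone, not the paper's implementation.

# Sketch: turning 6-DOF mouse axes into a Cartesian twist command
# (linear m/s, angular rad/s). Axis values are assumed normalized
# to [-1, 1]; gains and deadzone are illustrative.
from dataclasses import dataclass

@dataclass
class Twist:
    vx: float; vy: float; vz: float    # linear velocity
    wx: float; wy: float; wz: float    # angular velocity

DEADZONE = 0.05
LIN_GAIN = 0.10    # max 0.10 m/s
ANG_GAIN = 0.50    # max 0.50 rad/s

def axes_to_twist(axes, enabled):
    """axes = [x, y, z, rx, ry, rz]; zero command while disabled."""
    if not enabled:
        return Twist(0, 0, 0, 0, 0, 0)
    f = [a if abs(a) > DEADZONE else 0.0 for a in axes]
    return Twist(*(v * LIN_GAIN for v in f[:3]),
                 *(v * ANG_GAIN for v in f[3:]))

print(axes_to_twist([0.3, 0.0, -0.8, 0.02, 0.5, 0.0], enabled=True))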

Continued Advances in Supervised Autonomy User Interface Design for METERON SUPVIS Justin

… DLR, NASA, and Roskosmos [17], with the aim to investigate effective telerobotic technologies for future space missions. As a part of METERON, the Haptics experiment suite focused on the investigation of human perception of haptic feedback in microgravity [18][19]. In contrast to the Kontur-2 experiments, a force-feedback joystick with a single DOF was deployed for the METERON Haptics experiments. The joystick was up-massed to the ISS in 2014, together with a tablet computer and a vest that allowed body-mounted usage of the system in addition to a conventional wall-mounted setup. During the experiment sessions, various studies were conducted, such as commanding a surface robot from the ISS via a communication link with a latency of about 800 ms. The wall-mounted joystick setup was found to be well suited for robotic operation [19]. The following METERON Interact experiment used this assembly to conduct a more complex experiment scenario [20]: the Interact Centaur rover, located at the European Space Research and Technology Centre (ESTEC), was commanded to execute a force-feedback-teleoperated sub-millimeter peg-in-hole task. While direct mapping of Cartesian motions of the operator to Cartesian motions of the robot provides an intuitive approach to robot command, current state-of-the-art control methods limit this approach to setups with a communication link offering sufficient bandwidth, low jitter, and minimal latency. In addition, the operator is always required to be in charge of all robot movements, resulting in high cognitive load. To relieve the astronaut, the robot was commanded to navigate towards the target unit by placing visual assistance markers in the tablet-computer Human-Robot Interface (HRI). These markers augmented the live video feed of the robot and thus enabled intuitive commanding of the desired target robot position. In contrast to these telepresent force-feedback approaches, the METERON …

Aktuelle Trends im Bereich interkultureller UX – Roadmap for Intercultural User Interface Design

For daily work, practical recommendations regarding design, methodology, process, and required competencies are to be derived. Workshop participants receive an overview of the current state of the art of intercultural UX in the German-speaking region. The workshop fosters exchange with colleagues working on the same topic and provides an overview of individual focus areas. From this, a "map" of focus topics and experts in the field of intercultural user experience for the German-speaking region is produced. The workshop is thus intended to serve as a platform for dialogue and networking, promoting discussion, the exchange of current approaches and project experience, and the consolidation of personal contacts and cooperative relationships, which should strengthen the lobby and the joint projects of the German-speaking HCI community. This effect can be amplified further by presenting the workshop results internationally in the future (e.g. at Interact/CHI/UPA/HCII), in order to project the synergies achieved in the German-speaking region onto the international stage and to integrate the results at the international level as well.

Continuous affect state annotation using a joystick-based user interface

… annotation interface. On the day of the test, before commencing the session, they were again given a short re-introduction to the valence-arousal model and the annotation procedure. They were instructed not to annotate the affective content of the video, but rather to provide ratings based on their perception of their own affect state. After this, they were given a short demo of the UI and were allowed to get a feel for the system. This was followed by a practice session containing five video clips of one-minute duration. During this session, the users annotated their affect states while watching the videos and were allowed to ask questions (pertaining to the annotation process) at the end of each video clip. After the training session, the main experiment was started.
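
Continuous annotation of this kind typically samples the joystick's two axes as valence and arousal at a fixed rate while the clip plays. The pygame sketch below is a generic illustration; the axis assignment and the 10 Hz rate are assumptions, not the paper's setup.

# Sketch: sampling a joystick's x/y axes as continuous
# valence/arousal ratings at ~10 Hz, with timestamps.
import csv
import time
import pygame

pygame.init()
pygame.joystick.init()
stick = pygame.joystick.Joystick(0)
stick.init()

with open("annotation.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["t", "valence", "arousal"])
    t0 = time.time()
    while time.time() - t0 < 60.0:    # one one-minute clip
        pygame.event.pump()           # refresh joystick state
        valence = stick.get_axis(0)   # x axis in [-1, 1]
        arousal = -stick.get_axis(1)  # y axis, pushed up = positive
        writer.writerow([round(time.time() - t0, 3), valence, arousal])
        time.sleep(0.1)               # ~10 Hz sampling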

Model-driven user interface generation and adaptation in process-aware information systems

Abstract. The increasing adoption of process-aware information systems (PAISs) has resulted in a large number of implemented business processes. To react to changing needs, companies should be able to quickly adapt these process implementations if required. Current PAISs, however, only provide mechanisms to evolve the schema of a process model, but do not allow for the automated creation and adaptation of their user interfaces (UIs). The latter may have a complex logic and comprise, for example, conditional elements or database queries. Creating and evolving the UI components of a PAIS manually is a tedious and error-prone task. This technical report introduces a set of patterns for transforming fragments of a business process model, whose activities are performed by the same user role, to UI components of the PAIS. In particular, UI logic can be expressed using the same notation as for process modeling. Furthermore, a transformation method is introduced, which applies these patterns to automatically derive UI components from a process model by establishing a bidirectional mapping between process model and UI. This mapping allows propagating UI changes to the process model and vice versa. Overall, our approach enables process designers to rapidly develop and update complex UI components in PAISs.
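
The core transformation idea, one fragment of same-role activities becoming one UI component, can be sketched as follows. The pattern shown (a sequence of user tasks mapped to a wizard of form pages) is a simplified illustration, not the report's actual pattern catalog.

# Sketch: deriving a wizard-style UI component from a process-model
# fragment whose activities share one user role. The report's
# patterns also cover conditional elements and database queries.
from dataclasses import dataclass

@dataclass
class Activity:
    name: str
    role: str
    fields: list            # data elements read/written

def derive_wizard(fragment):
    roles = {a.role for a in fragment}
    assert len(roles) == 1, "pattern applies to single-role fragments"
    return {
        "component": "wizard",
        "role": roles.pop(),
        "pages": [{"title": a.name, "inputs": a.fields}
                  for a in fragment],
    }

fragment = [
    Activity("Enter claim data", "clerk", ["claimant", "amount"]),
    Activity("Attach documents", "clerk", ["attachments"]),
    Activity("Confirm submission", "clerk", ["confirmed"]),
]
print(derive_wizard(fragment))    # one UI component with three pages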

Design of a multi-modal user-interface for older adults

Within the FoSIBLE project, a Social TV community platform for older adults has been developed (Kötteritzsch et al. 2011). To ensure high usability and acceptance among the heterogeneous user group of older adults, a novel combination of input methods is used. The central element of the application is a Smart TV system, which is used to display messages, images, and videos. A dedicated application runs on the Smart TV and provides chat functionality as well as games and message boards. To encourage users in dealing with this application, different input methods are supported. Multi-modal input approaches for controlling Smart TV systems let the user choose, individually for a given task, the input method appropriate to their current abilities and needs. The user group of older adults in particular has special needs (e.g. regarding the readability of text), which can be addressed more easily using a variety of interaction concepts.

Konzeption, Gestaltung und Realisierung eines interaktiven Natural User Interface Social Network Prototypen

3.2 Interface interactions. Interactions in a body-tracking system must not be viewed as a touchscreen at a distance. Touchless interactions have an input and output paradigm of their own, which must be designed for specifically. The difference between touchscreens and body-tracking systems rests on the fact that touchless interactions do not have multiple states, but only a single one. With a touchscreen this is simple: either the finger is on the screen or it is not (two states). A touchscreen device does not register whether the user scratches their head or holds a hand in front of their mouth to sneeze; a body-tracking system does. Body-tracking systems thus track the user continuously and have only one state. They do not distinguish between ordinary movements and gestures intended as commands, and therefore treat every movement as a potential command; even unintended movements are inevitably interpreted. Here, "false positive" errors must be distinguished from "false negative" errors. False positives are movements performed unintentionally by the user that the system nevertheless interprets as a gesture. False negatives are gestures performed deliberately by the user that the system does not interpret as a gesture (cf. Wigdor, Wixon, 2011, pp. 98-103).
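
A common way to reduce false positives in such a one-state system is to require a pose to be held for a dwell time before it counts as a command; the trade-off is that very brief deliberate gestures then become false negatives. The sketch below illustrates this with made-up thresholds; it is not from the cited source.

# Sketch: dwell-based gesture confirmation for a body-tracking
# system. A pose only becomes a command after being held for
# DWELL_S seconds; the threshold is illustrative.
DWELL_S = 0.8    # longer dwell: fewer false positives, but brief
                 # deliberate gestures risk becoming false negatives

class DwellRecognizer:
    def __init__(self, dwell_s=DWELL_S):
        self.dwell_s = dwell_s
        self.start = None             # when the pose was first seen

    def update(self, pose_active, t):
        """Feed one tracking frame; True when the command fires."""
        if not pose_active:
            self.start = None         # pose lost: reset the timer
            return False
        if self.start is None:
            self.start = t
        return (t - self.start) >= self.dwell_s

rec = DwellRecognizer()
frames = [(0.0, True), (0.3, True), (0.5, False),   # head scratch: ignored
          (1.0, True), (1.5, True), (1.9, True)]    # held pose: fires
for t, active in frames:
    if rec.update(active, t):
        print(f"command recognized at t={t} s")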

Natural Virtual Reality User Interface to Define Assembly Sequences for Digital Human Models

In this study, we captured objective as well as subjective criteria. To measure the subjective workload of defining an assembly sequence for a digital human model with the WIMP interface and the VATS, we used the NASA-TLX questionnaire [30]. The questionnaire addresses the following dimensions: mental demand, physical demand, temporal demand, performance, effort, and frustration. With this questionnaire, we want to investigate the subjective workload users experience when using the different user interfaces to define the assembly sequence. Furthermore, to investigate learnability, we use the corresponding part of the standardized questionnaire for dialogue design [31,32]. The questionnaire is based on ISONORM 9241/110 and has seven dimensions; the dimension we focused on was "suitability for learning". This dimension contains five items: the time required to conduct the task, the encouragement users feel to test the system, the memorization of details, memorability, and learning without help. The average of these is the value for suitability for learning; the scale ranges from --- to +++. Additionally, we created a questionnaire with ten questions. The aim was to get feedback on which system (VR vs. WIMP) the participants liked more, and whether they would use the VATS and why. Two questions also focused on technical issues, such as whether the participants could imagine using gloves instead of a Leap Motion and using an HMD at their daily workplace. The scale ranged from 1 (do not agree at all) to 6 (fully agree).
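
Scoring such scales is plain aggregation. In the sketch below, the --- to +++ scale is coded numerically as -3 to +3 (an assumption about the coding), and the five learnability items, with illustrative names, are averaged per participant.

# Sketch: averaging the five 'suitability for learning' items.
# Item keys and responses are illustrative; the --- ... +++ scale
# is coded as -3 ... +3.
from statistics import mean

ITEMS = ["time_to_task", "encouragement", "detail_memorization",
         "memorability", "learning_without_help"]

def suitability_for_learning(responses):
    assert all(-3 <= responses[item] <= 3 for item in ITEMS)
    return mean(responses[item] for item in ITEMS)

participant = {"time_to_task": 2, "encouragement": 3,
               "detail_memorization": 1, "memorability": 2,
               "learning_without_help": 2}
print(suitability_for_learning(participant))    # 2.0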

Graphical Magnetogranulometry of EMG909

Abstract: The magnetization curve of the commercially available ferrofluid EMG909 from Ferrotec Co. is measured. It can adequately be described by a superposition of four Langevin terms. The effective dipole strength of the magnetic particles in this fluid is subsequently obtained by a graphical rectification of the magnetization curve based on the inverse Langevin function. The method yields the arithmetic and the harmonic mean of the magnetic moment distribution function, and an estimate for the geometric mean and the relative standard deviation. It has the advantage that it does not require a prejudiced guess of the distribution function of the polydisperse suspension of magnetic particles.
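
In formulas, the description corresponds to standard Langevin magnetics; the notation below is a generic sketch of the relations involved, not the paper's exact procedure.

\[
  M(H) = \sum_{i=1}^{4} M_i \, L\!\left(\frac{m_i H}{k_B T}\right),
  \qquad
  L(x) = \coth x - \frac{1}{x},
\]
\[
  m_{\mathrm{eff}}(H) = \frac{k_B T}{H}\, L^{-1}\!\left(\frac{M(H)}{M_s}\right),
\]
where \(M_s\) is the saturation magnetization, \(m_i\) and \(M_i\) are the dipole moments and weights of the four fractions, and the inverse Langevin function \(L^{-1}\) provides the rectification that maps the measured curve to an effective dipole strength at each field value.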

GRAZIL (Graphical ZIB Language).

Description of the ZUGRIFF file interface for graphical input. Based on the requirements and reasons mentioned above, ZUGRIFF (Zib Universal GRaphical Input File Format) was developed and implemented. Further applications of a more specialized kind (e.g. CAD) or of a more general kind (e.g. image storage and the like) were deliberately left aside so as not to overload the ZUGRIFF concept.

Reconfiguration of User Interface Models for Monitoring and Control of Human-Computer Systems

… identifying the correct interaction element for appropriate and accurate replacement. The UIEditor's reconfiguration module is based on the implementation of the PhysicalUIEditor, with the exception of the palette for adding interaction elements. Still, by double-clicking interaction elements, the user can change their physical parameters, including position and size. Reconfiguration is applied via the buttons of a toolbar added at the top of the visual interface of the reconfiguration module. Figure 5.14 shows a screenshot of the reconfiguration module. From left to right, the following reconfiguration operations are implemented and can be used: (a) parallelization, (b) sequentialization, (c) discretization, (d) replacement and merging of interaction elements for output, (e) duplication, and (f) deletion. The button with the arrow icon restarts the simulation using the newly reconfigured user interface. Before applying a reconfiguration operation, however, the user has to select the interaction elements to be involved, for instance the buttons labeled 'SV1' and 'SV2', by clicking on them and then pressing the parallelization button in the toolbar. A new button is then added to the user interface that triggers the parallelized interaction processes. The user may also have to enter further information, as with sequentialization: there, the interaction elements must be sorted into a list defining the sequence in which the associated interaction processes should take place, and again a new button is added to the user interface that automatically triggers the sequence of interaction processes.
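
The sequentialization operation described last, a new button firing a user-defined ordering of interaction processes, can be sketched generically; the names below are illustrative, not the UIEditor's API.

# Sketch: a 'sequentialization' reconfiguration. The selected
# elements' actions are chained behind one new button in the
# user-defined order. Element and handler names are illustrative.
class Element:
    def __init__(self, label, action):
        self.label = label
        self.action = action

def sequentialize(selected):
    """Return a new element that triggers the actions in order."""
    def run_sequence():
        for element in selected:      # user-defined list order
            print(f"triggering {element.label}")
            element.action()
    label = "SEQ[" + "+".join(e.label for e in selected) + "]"
    return Element(label, run_sequence)

sv1 = Element("SV1", lambda: print("  valve 1 opened"))
sv2 = Element("SV2", lambda: print("  valve 2 opened"))
seq_button = sequentialize([sv1, sv2])    # added to the user interface
seq_button.action()                       # simulate a click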