The detailed performance scores of the eight participants for the first pilot study are found in Table A.1 of Appendix A. Here, the results are visualized in the box plot of Figure 4.4. The first remark is that the median scores of both trials using the mouse and monitor were much higher than those using the interface, but their spread was also considerably larger. Moreover, the increase in score between trials 1 and 3, and between trials 2 and 4, displays evidence of a strong learning effect. The results of the questionnaire are gathered in Table A.2 of Appendix A. They were interpreted numerically as explained in the Questionnaire design section: each participant's rating was mapped to a scale from -2 to 2 and then averaged per trial. To obtain a single, unified metric of the interface relative to the mouse and monitor, the resulting averages of Trial 3 were negated, and the results were then averaged once again per metric. The visualization is presented in Figure 4.3. The most conclusive findings are the device's lower precision and ease of use and its higher physical load as perceived by the participants, with rating norms higher than 0.5. Furthermore, with norm values between 0.2 and 0.5, it seems that on average the participants felt less safe and reported higher mental and visual loads; however, they felt more involved in the task at hand compared to the mouse-and-monitor setup. Finally, with norm values of less than 0.2, there is some evidence of lower perceived responsiveness and intuitiveness, as well as of higher engagement and a feeling that the HF was useful. These results, together with the post-experiment interviews, support the idea that in the Tetris application, commanding the game with the haptic interface was more challenging, posed a higher load, and offered reduced input capabilities, but was slightly more immersive and engaging.
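The scoring pipeline described above can be sketched as follows; the data and metric names are hypothetical placeholders, not the values from Table A.2, and the trial layout (Trial 3 as the mouse-and-monitor reference, Trial 4 as the interface trial) is an assumption for illustration.

```python
# Sketch of the questionnaire scoring: each participant rates each metric
# on a scale from -2 to 2; ratings are averaged per trial, the Trial 3
# (reference setup) average is negated so that positive values favor the
# haptic interface, and the per-trial means are then averaged per metric.
ratings = {
    # metric -> {trial number: [per-participant ratings in -2..2]}
    "precision":     {3: [2, 1, 2, 1],    4: [0, -1, 0, 1]},
    "physical_load": {3: [-1, 0, -1, -1], 4: [1, 2, 1, 1]},
}

def unified_metric(per_trial, reference_trial=3):
    trial_means = {t: sum(r) / len(r) for t, r in per_trial.items()}
    trial_means[reference_trial] = -trial_means[reference_trial]  # negate reference
    return sum(trial_means.values()) / len(trial_means)

scores = {metric: unified_metric(trials) for metric, trials in ratings.items()}
```

With these toy numbers, a negative unified score (as for "precision") reads as the interface being rated worse than the reference setup on that metric.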
The control of the interface is based on the decoupling of wide-area motion control and force control, which is possible due to the planar redundant DOFs of semi-mobile haptic interfaces. A dedicated force control that makes use of a directional admittance model in order to shape the apparent dynamics of the haptic interface and render the guidance information was presented. In order to resolve the redundancy of the haptic interface, and thus achieve wide-area motion, prepositioning algorithms for both setup configurations have been presented. For the frontal configuration, a prepositioning algorithm that considers the curvature of the user path significantly improves the utilization of the workspace in the user environment. The mirror configuration considerably simplifies the prepositioning algorithm by positioning the PPU above the human operator and following his/her motions. Moreover, this prepositioning avoids fast motions of the PPU (due to fast motions of the user's hand) and permits the optimal utilization of the space in the user environment.
In virtual force-feedback simulations, stiff walls are often modeled as spring-damper systems. To generate a realistic impression of hard contacts, the stiffness of virtual surfaces must be at least K = 2000 N/m (Massie and Salisbury, 1994) if a PHANToM is used as the haptic interface. However, it is not possible to increase stiffness arbitrarily. A higher stiffness causes a higher energy gain (Basdogan and Srinivasan, 2001) that tends to destabilize the system. Furthermore, the motors and the mechanical structure of the haptic device limit the maximum displayable stiffness. To preserve stability, collisions with stiff walls are often represented by a stiff spring K together with a damper B, whose purpose is to dissipate the generated energy and guarantee passivity (see Fig. 1). This spring-damper system is equivalent to a discrete PD controller.
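A minimal sketch of such a spring-damper virtual wall as a discrete PD controller follows; the numeric values of K, B and the sampling time are illustrative assumptions, not parameters of a specific device.

```python
# Virtual wall rendered as a spring-damper (discrete PD controller).
K = 2000.0   # virtual stiffness in N/m (cf. Massie and Salisbury, 1994)
B = 5.0      # virtual damping in N*s/m (illustrative value)
T = 0.001    # sampling time in s (typical 1 kHz haptic loop)

def wall_force(x, x_prev, wall_pos=0.0):
    """Feedback force for penetration beyond wall_pos.

    x, x_prev: current and previous sampled probe positions in m
    (positive x means penetration into the wall).
    """
    if x <= wall_pos:
        return 0.0                          # no contact, no force
    v = (x - x_prev) / T                    # backward-difference velocity
    return -(K * (x - wall_pos) + B * v)    # PD law: spring term + damper term
```

The backward-difference velocity estimate is what makes this a discrete (rather than continuous) PD controller, and it is one source of the energy gain discussed above.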
Force-feedback-supported user interaction is increasingly finding its way into various professional applications and areas of research. Tentative steps in the development of haptically supported Geographic Information Systems (GISs) led to the conclusion that force feedback has the capability to improve the user experience. This work focused on the investigation of this assumption in the context of planetary surface analysis. For this purpose, an existing planetary terrain visualization and exploration framework was extended by a haptic interface consisting of a haptic device and a haptic rendering pipeline. A selection of commercial devices was reviewed and assessed regarding their suitability for particular terrain analysis tools. A challenge was to find suitable force-feedback-related algorithms to tackle the massive amount of planetary terrain data and adapt them into the haptic rendering pipeline. Two geodesy analysis tools have been enhanced with virtual fixtures to assist in their operation. A final pilot user study was conducted to compare the usability of the prototype haptic interface implementation with the usability of the original terrain visualization and exploration framework.
The present publication presents the normalized stability boundaries for haptic devices, which are considered as a physically damped mass colliding with a virtual wall represented by a spring-damper system. First, a normalized characteristic equation of the system is derived that is independent of the haptic device's mass and the sampling time. It is used for determining the stability boundaries for several values of the physical damping. These results are compared to the passivity condition of Colgate et al. Finally, the dependency of the maximum stable virtual stiffness on the physical parameters of the haptic interface is discussed.
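For orientation, the passivity condition referred to above is commonly cited in the following form, where b is the physical damping of the device, K and B are the virtual stiffness and damping of the wall, and T is the sampling time; this is a recollection of the standard result, not a formula taken from the publication itself:

```latex
b \;\ge\; \frac{K T}{2} + \lvert B \rvert
```

It makes explicit why a larger virtual stiffness K demands more physical damping b (or a shorter sampling time T) to keep the sampled-data system passive.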
Telepresence enables the control of a remote robot by a human operator using a haptic interface. The feedback of forces from the remote robot (slave) interacting with its environment improves the dexterity levels and telemanipulation performance of the operator. The haptic device (master) with the bilateral controller should ideally reproduce only the slave-environment interactions to the operator, through which he feels he is directly interacting with the environment. Ranging from extremely light devices like the sigma.7 described in Tobergte et al. (2011) to more massive systems like the DLR teleoperation facility HUG as in Hulin et al. (2011), several designs for haptic interfaces and bilateral controllers have been studied and reported in Hokayem and Spong (2006). DLR HUG is a bi-manual robotic facility to test haptic and teleoperation algorithms, whose master devices are KUKA Light Weight Robots (LWR) transformed into haptic devices, as shown in Fig. 1. The benefits of such massive systems are larger workspaces and higher levels of force interaction.
be increased while reducing the workload for the surgeon [9–13]. However, when haptic feedback and haptic assistance are combined on the same haptic interface, they can overlap and mask each other, making it impossible for the surgeon to distinguish between the two [14, 15]. Therefore, this study investigates how to augment haptic information for surgical tasks. Initially, preliminary studies were carried out to define parameters for the main experiments, and subsequently two main experiments based on simulated surgical procedures were conducted. All experiments are performed using a Phantom OMNI (Sensable Technologies, Wilmington, MA, USA) in association with the real-time control software QUARC (Quanser, Markham, ON, Canada) and computer-based virtual test scenarios.
Abstract—Future challenges in teleoperation arise from a new complexity of tasks and from constraints in unstructured environments. In industrial applications such as nuclear research facilities, the operator has to manipulate large objects, whereas medical robotics requires extremely high precision. In the last decades, research optimized the transparency in teleoperation setups through accurate hardware, higher sampling rates and improved sensor technologies. To further enhance the performance in telemanipulation, the idea of haptic augmentation has been briefly introduced in [Panzirsch et al., IEEE ICRA, 2015, pp. 312–317]. Haptic augmentation provides supportive haptic cues to the operator that promise to ease the task execution and increase the control accuracy. Therefore, an additional haptic interface can be added into the control loop. The present paper introduces the stability analysis of the resulting multilateral framework and equations for multi-DoF coupling and time delay control. Furthermore, a detailed analysis via experiments and a user study is presented. The control structure is designed in the network representation and based on passive modules. Through this
The earliest results concerning haptic exploration of virtual objects were published by Colwell et al. in 1998. They conducted a study on perceiving the size of virtual objects, which were basic geometric objects of 1.0–2.5 cm size. As haptic interface, a 3-Degree-of-Freedom (DoF) force-feedback device was used. One of their main findings was that users "...perceive the sizes of larger virtual objects more accurately than those of smaller virtual objects..." and "...may not understand complex objects from purely haptic information". Two years later, Gunnar Jansson used simple geometric objects (virtual and real models), which subjects had to identify. Thereby, the exploration of the virtual objects was performed using a 6-DoF Phantom haptic device; real objects were explored using one finger or the complete hand. Compared to natural interaction with the whole hand, the proportion of correct object identifications was worse with the Phantom device. Also, the exploration time with just one point of contact (i.e., real finger or haptic device) was distinctly higher. He also found that larger objects (10–100 mm object size) are recognized faster and more reliably than smaller ones (5–9 mm). Though there is potential for improvement due to a learning effect, the limitation to one point of contact generally constricts haptic perception dramatically. Kirkpatrik et al. came to similar conclusions and also suggested more contact points to improve haptic perception.
GS and Group CS was the order of the testing tasks. In the experiment section, the correlation between the sketching task and the modeling task of Group CS was significantly stronger than that of Group GS (z = 2.88, p < .01). The level of confidence in the performance in the modeling task of Group CS was not significantly higher than that of Group GS. However, Group CS was considerably more confident than Group GS in the sketching task, and the difference was statistically significant (p < .01), which could be an effect brought about by the difference in task order. During the modeling task, the model under construction provided visual input to the participants. Since the modeling task carries aiding information (although minimized), participants could take the model as supportive clues to the original stimuli. As a result, it was possible that participants from Group CS either confirmed, or modified/adjusted, their mental spatial model established through haptic-audio exploration depending on the models constructed by themselves. After that, the sketching task could be transformed from producing a sketch of the indoor environment perceived from the haptic-audio interface into producing a sketch of the indoor environment that could be represented by the model constructed and visually observed one moment ago. This may be the cause of the large correlation value between these two tasks done by Group CS. It may also be the origin of the confidence in their performance of the sketching task. This could also partly explain why the correlation of the two tasks increased, as did the confidence in sketch quality, while the correctness across all tasks was not significantly improved.
On the other hand, in comparison with Group CS, Group GS found it more challenging to figure out the analog properties of the furniture pieces, with statistically significant differences in size estimation (p < .01) and position estimation (p < .01), which can be related to the fact that Group CS spent significantly more time than Group GS exploring the furnished hotel floor plan in the experiment section (mean GS = 7.13 min, mean CS = 11.88 min, p < .05, see Figure 4.11(b)).
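The comparison of two independent correlations reported above (z = 2.88) is the kind of result obtained with Fisher's z-transformation; a minimal sketch follows, with the sample correlations and group sizes in the usage comment being hypothetical, not the study's actual values.

```python
import math

def fisher_z_compare(r1, n1, r2, n2):
    """Compare two independent Pearson correlations via Fisher's z.

    r1, r2: sample correlations; n1, n2: sample sizes.
    Returns the z statistic; |z| > 2.58 corresponds to p < .01 (two-sided).
    """
    z1, z2 = math.atanh(r1), math.atanh(r2)          # Fisher transformation
    se = math.sqrt(1.0 / (n1 - 3) + 1.0 / (n2 - 3))  # standard error of z1 - z2
    return (z1 - z2) / se

# e.g. fisher_z_compare(0.9, 50, 0.3, 50) yields a large positive z,
# indicating the first correlation is significantly stronger
```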
learned by utilizing a dirt sensor that measures the impact of dirt particles. In both articles it is implicitly assumed that dirt is absorbed upon contact. Martinez et al. investigate planning for robotic cleaning by wiping with a sponge under the assumption that the particles are pushed upon contact. Additionally, the authors propose to represent dirt accumulations as ellipses to have them accessible as semantic predicates for automated planning. The ellipses are computed based on color image data recorded after each wiping motion. Similarly, Do et al. utilize a perceptual representation of dirt distributions on a target surface to derive a scalar value to rate the task performance w.r.t. different object properties and action parameters. However, even though these research groups make implicit assumptions about the wiping effect, there is no underlying model available that could be used to predict the actual task performance without visual validation, as physical contact is not explicitly modeled. Accordingly, the robots cannot make assumptions about the task performance from haptic feedback.
Care was taken to keep the user interface intuitive and lean. Thanks to the separation of the presentation into dedicated CSS files, it can nevertheless easily be adapted to customer requirements. It was built with current technologies and offers the possibility of being moved via drag & drop to any desired position within the web page. In addition, it can also be opened in a separate window. New messages from the agent are loaded via AJAX, so only the WebChatControl is reloaded and a refresh of the entire web page is avoided.
as well as the freedom of movement of the robot hand in virtual environments can also be simulated over time. The interface is thus no longer a purely physical point of contact. It runs through numerous medial layers in which a process takes shape that concentrates algorithms, physical environmental forces, and a material constitution into a social gesture. A post-interface does not declare the phenomenon of the interface obsolete in the age of a so-called "Internet of Things"; it points, rather, to the temporal after of the multiple layers in which today more than the transmission and decoding of signals is negotiated. For where the physically localizable input and output interface diffuses as the site of agency, the techniques of attribution must be renegotiated both for actions and for spaces of action. This evokes questions about the status of the human within human-machine interaction: At what point and in what posture should the robot approach its partner? How close may it come to the body, be it as an autonomous or locomotive system, be it as a robot arm? What degree of affective and emotional closeness is permissible (cf. Vincent et al.; Scheutz)? Where fences as separation and operator consoles as control instruments no longer appear, new categories concerning an implicit knowledge of the operand come into play. While implicit knowledge, as the philosopher and chemist Michael Polanyi notes, is quasi-embodied and encompasses all those processes "that we do not sense as such" (Polanyi), it becomes necessary for the robot's system to make such routine executions explicit, to formalize them, and finally to implement them in an ergonomically and socially acceptable manner. Contrary to Polanyi's description, in which the use of tools leads to a distancing of their meaning from us because they act at a distance, the
For the development of autonomous interface agents, we propose a hybrid system architecture consisting of a deliberative and a reactive component. The deliberative component, realized as a planning system for action sequences, enables goal-directed action. The reactive system is the foundation that allows the agent to act autonomously in its environment. It ensures that the agent acts quickly and in a situation-appropriate manner. We call the underlying unit of the reactive system a Behavior.
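A hybrid architecture of this kind can be sketched as follows; all class and behavior names here are illustrative assumptions, not identifiers from the original system.

```python
# Sketch of a hybrid agent: a reactive layer of prioritized Behavior units
# guarantees fast, situation-appropriate action, while a deliberative
# planner contributes goal-directed action sequences.

class Behavior:
    def __init__(self, name, priority, applicable, act):
        self.name = name
        self.priority = priority
        self.applicable = applicable  # situation -> bool
        self.act = act                # situation -> action

class HybridAgent:
    def __init__(self, behaviors, planner):
        # Highest-priority behaviors are checked first.
        self.behaviors = sorted(behaviors, key=lambda b: -b.priority)
        self.planner = planner        # goal -> list of actions

    def step(self, situation, goal):
        # Reactive layer: the highest-priority applicable behavior wins.
        for b in self.behaviors:
            if b.applicable(situation):
                return b.act(situation)
        # Deliberative layer: otherwise follow the plan toward the goal.
        plan = self.planner(goal)
        return plan[0] if plan else "idle"
```

The key design choice is precedence: the reactive layer may preempt the plan at any step, which is what keeps the agent responsive in a changing environment.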
Linguistic quality in user interfaces can be assessed not only by domain criteria but also by linguistic ones. Linguistic research identifies deficient language design as a cause of usability problems (cf. Wagner 2002) and names causes on all linguistic-communicative levels (semiotics, grammar, semantics and pragmatics). When choosing names, one must consider the relationship between the individual labels of menu elements (semantics), their relations to one another, i.e. the arrangement of menu elements under a menu title (syntax), and the overall structure of the user interface, i.e. which objects have their own dialog windows. Pragmatic aspects are of particular importance here. Just as one distinguishes between language and its use, one must distinguish between defined business processes with their possibly consistent terminology and the handling of these work processes under everyday conditions with the vocabulary used there. This corresponds to the distinction between the intended and the actual use of an application. For a usage-centered design, the workflow under everyday conditions is determined by collecting usage scenarios. These scenarios record the users' vocabulary and usually contain elements of orientation seeking (in the information space) as well as of action sequences (interaction with the dialog system). The pragmatic character of the labels in the user interface can only be determined by observation in the real work context. Nielsen's recommendation not to listen to what users say refers to official user surveys. In ethnology, even the learning of the language spoken in the field of research
The local microstructure and chemical composition have therefore been characterized in detail by HR-TEM, EELS and EDX. These analyses showed perfectly epitaxial thin films, free from grain boundaries, with homogeneous composition throughout the thickness both parallel and perpendicular to the interface. The large compressive strain on the thin film due to the mismatch with the substrate naturally leads to the formation of a high density of misfit dislocations, which decreases after annealing at 700 °C. Similarly, the enhancement of the ionic conductivity near the interface also decreased upon annealing.
Humans use information from several modalities in everyday life, especially when interacting with objects in their environment. We refer to more than one sense simultaneously when we estimate the properties of an object in order to use it, grasp it or explore it properly. However, several properties of the environment seem to be more specifically linked to one sense as compared to the others. For example, many material properties of an object (such as thermal conductivity, weight or compliance) are most specifically linked to haptics. One important material property is softness, which is the psychological correlate of the compliance of a surface. Compliance is defined by the relation between the force applied to an object and the object's deformation, including the position of its surface. The compliance of objects plays an important role in haptic discrimination, classification and identification of objects (Bergmann Tiest & Kappers, 2006; Hollins, Bensmaia, Karlof, & Young, 2000), as well as in the manipulation of objects, because it determines how the object is deformed by the hand (Srinivasan & Lamotte, 1995). Softness perception intrinsically relies on haptic information. Only the haptic sense directly measures both force and position information, which are necessary to judge softness. In contrast, the visual sense can directly measure position changes, but intrinsic visual cues to force are hardly available. However, through everyday experience, we can learn correspondences between the haptically perceived softness and the visual or auditory effects of exploratory movements that are executed to feel softness, thus providing indirect information on the object's softness (cf. Ernst, 2007). For example, when a compliant object comes into contact with an indenter, such as our finger or another object, our vision can provide us with information concerning the time-course and pattern of the object's surface deformation around the contact region.
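The definition of compliance given above can be written compactly; this is the standard textbook form of the relation, with c the compliance, Δx the surface displacement and ΔF the applied force (stiffness being its reciprocal):

```latex
c \;=\; \frac{\Delta x}{\Delta F}, \qquad k \;=\; \frac{1}{c}
```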
In this study, we investigated whether
The book, developed as a bound codex from the scrolls, the rotuli, found its present form in the late medieval manuscript. Chapters, subject indexes, footnotes and marginal notes are introductions of the twelfth and thirteenth centuries. An essential change is the transformation of the large, heavy, demonstrative book into a portable, manageable object. The Codex Gigas, the Devil's Bible of Prague, written in the first half of the 13th century, measures 92 x 50 cm and weighs 75 kg. That is certainly an extreme variant, because the Bible was usually written as a multi-volume edition, but it is nevertheless typical in its immobility. The books of the 13th century, written on paper, became lighter. The script was written smaller; with metal-based inks one could copy faster; the outer dimensions shrank; the previously rigid bindings became lighter and flexible; the whole book became "portable". Around 1250 this new type of book had been introduced and slowly established itself (cf. Ivan Illich 1991). Two centuries later it was the perfect template for Gutenberg's invention of the system "typesetting-printing-distribution". The interface of the modern book, as used by publishers and booksellers since the 15th century, has developed only modestly since then. The printed page number was added, as was the reproduced illustration. It became smaller and lighter, an industrial product, one of the very first, and the standard medium of intellectual exchange.
In theory, an unlimited number of different modes can propagate on plate structures. The phase and group velocities of each mode are dispersive and a function of frequency. Hence, it is necessary to evaluate each mode concerning its cut-off frequency, wavelength and vertical displacement, which is supposed to be the main parameter for haptic interaction. In this case the Rayleigh-Lamb differential equations [7, 8], which describe the elastic transient behaviour depending on the geometry (plate thickness), the elasticity and the Poisson's ratio, are solved numerically using Comsol Multiphysics. The derived solution of the phase velocities enables the geometric design of the plate and the selection of single wave modes towards an optimized haptic feedback. Normally the solution is depicted within a dispersion diagram. As an example, Fig. 3a represents the dispersive characteristics of the velocities for different plates with a thickness of d = 2 mm. In the range of haptic frequencies (< 300 Hz) the wavelength on selected display and surface materials varies from 160 mm to 290 mm for the antisymmetric mode A0. Concerning transient elastic signals (pulses), half of the wavelength can be regarded as the minimal dimension of the vibrating focus, where the effective sensible focus width also depends on the perceptual threshold and the energy of the signal (Fig. 3b).
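The focus-size estimate above reduces to a simple calculation once a phase velocity is known; the velocity used in the usage comment is an assumed round number for illustration, not a solution of the Rayleigh-Lamb equations for a particular plate.

```python
# Minimal focus width of the vibrating focus, taken as half the wavelength
# of the A0 mode: lambda = c_p / f, focus = lambda / 2.

def min_focus_width(phase_velocity_m_s, frequency_hz):
    """Return the minimal focus dimension in meters."""
    wavelength = phase_velocity_m_s / frequency_hz
    return wavelength / 2.0

# e.g. an assumed A0 phase velocity of 60 m/s at 300 Hz gives a 200 mm
# wavelength, i.e. a 100 mm minimal focus width, consistent with the
# 160-290 mm wavelength range quoted above for < 300 Hz
```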