Haptics and Telerobotics
Control Methods for Stable Interaction
Stable interaction with high-stiffness virtual environments (VEs) remains a challenging issue for kinesthetic haptic devices. In particular, it has been recognized that the maximum achievable impedance with a traditional digital control loop is limited by the loss of information available to the controller, caused by time discretization, time delay, position quantization due to the use of encoders as position sensors, and the zero-order hold (ZOH) of the force command during each servo cycle. These effects produce an energy leak that eventually leads to instability unless the excess energy is dissipated by the intrinsic friction of the device, the controller, or damping from the user's grasp.
Our lab has proposed several stability-guaranteeing approaches that have been well received in the research community, including, but not limited to, the time-domain passivity approach (TDPA), the input-to-state stable (ISS) approach, the successive force augmentation (SFA) approach, and the successive stiffness increment (SSI) approach. As ongoing research, we are working to maximize the achievable impedance range of haptic and telerobotic systems toward highly transparent haptic interaction.
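To illustrate the flavor of these methods, the core idea behind TDPA, observing the energy exchanged at the device port and dissipating any active excess, can be sketched as follows. The one-port sign convention, variable names, and the simple adaptive-damper passivity controller shown here are illustrative assumptions, not the lab's exact formulation.

```python
# Minimal sketch of a time-domain passivity observer/controller (TDPA)
# for a one-port haptic interface. Names and the series adaptive damper
# are illustrative assumptions, not an exact published formulation.

def tdpa_step(f_cmd, v, E_obs, dt):
    """One servo cycle: observe port energy, dissipate any active excess.

    f_cmd : force commanded by the virtual environment [N]
    v     : measured device velocity [m/s]
    E_obs : accumulated observed energy so far [J]
    dt    : servo period [s]
    Returns (f_out, E_obs) with a passivity-preserving output force.
    """
    # Passivity observer: integrate the power flowing into the one-port.
    E_obs += f_cmd * v * dt
    f_out = f_cmd
    # Passivity controller: if net energy is active (negative), add an
    # adaptive damper sized to dissipate exactly the observed excess.
    if E_obs < 0 and abs(v) > 1e-6:
        alpha = -E_obs / (dt * v * v)   # damping that restores passivity
        f_out = f_cmd + alpha * v
        E_obs += alpha * v * v * dt     # account for the dissipated energy
    return f_out, E_obs
```

In a real loop this runs once per servo cycle; the damper engages only in the cycles where the observer detects generated (active) energy, so transparency is preserved whenever the interaction is already passive.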
*Video courtesy of DLR.
Interactive Autonomy for Shared Teleoperation
Teleoperation may enjoy different levels of autonomy supporting the human during task execution. Virtual fixtures correspond to one of the lowest levels of autonomy, where the human receives assistance in orienting the tool, following a path, or avoiding dangerous regions in the workspace. However, it is difficult to generate virtual fixtures in unstructured environments. Our lab works on interactive and intuitive methods for virtual fixture generation applicable to a wide range of teleoperation scenarios.
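As a concrete example, a "soft" guidance fixture can be expressed as a spring that pulls the tool tip toward a reference path while leaving motion along the path free. The function below is a minimal sketch under that assumption; the name, gain, and straight-line path are illustrative only.

```python
import numpy as np

# Hypothetical sketch of a soft guidance virtual fixture: the tool tip is
# attracted toward a reference path segment by a spring force, while motion
# along the path itself is unconstrained. Names and gain are illustrative.

def guidance_fixture_force(p, a, b, k=200.0):
    """Spring force pulling point p toward line segment a-b.

    p, a, b : 3D positions as numpy arrays [m]
    k       : fixture stiffness [N/m]
    """
    ab = b - a
    # Parameter of the closest point on the segment, clamped to [0, 1].
    t = np.clip(np.dot(p - a, ab) / np.dot(ab, ab), 0.0, 1.0)
    closest = a + t * ab
    return k * (closest - p)   # force acts only across the path, not along it
```

A forbidden-region fixture would use the same machinery with the sign reversed, pushing the tool away from a region boundary instead of toward a path.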
Another way of teleoperating is to use almost full autonomy, where the human helps the computer solve complicated tasks such as motion planning. Our lab has introduced an approach that records human intuition through a haptic device in order to reduce the complexity of path-planning algorithms in cluttered environments. At IRiS Lab we are exploring various levels of autonomy and human-robot interaction techniques to improve teleoperation.
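The idea of using recorded human intuition to reduce planning complexity can be sketched as seeding the planner with a coarse, human-demonstrated sequence of waypoints, so that only short local segments must be searched instead of the full cluttered space. `refine_path` and `local_planner` below are hypothetical names used purely for illustration.

```python
# Hypothetical sketch: a coarse path captured from a human via a haptic
# device seeds the planner, which then only refines short local segments
# between consecutive waypoints rather than searching the whole space.

def refine_path(waypoints, local_planner):
    """Stitch locally planned segments along human-provided waypoints.

    waypoints     : ordered configurations demonstrated by the human
    local_planner : callable (start, goal) -> list of configurations,
                    including both endpoints
    """
    path = [waypoints[0]]
    for a, b in zip(waypoints, waypoints[1:]):
        path.extend(local_planner(a, b)[1:])   # drop duplicated endpoint
    return path
```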
Skill Transfer Through Teleoperation
Direct teleoperation is useful and has many potential applications. However, it imposes a heavy mental workload on the human operator for tasks that span long durations.
To counter this problem, we at IRiS Lab developed a haptically coupled, cooperative teleoperation architecture that employs multiple human operators to accomplish a task. Additionally, we have developed a Dynamic Authority Distribution (DAD) methodology to distribute control of the slave's motion among the operators.
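A minimal sketch of the authority-distribution idea, assuming slave commands are blended as a convex combination whose weight can be updated online (the weight-update policy itself, e.g. one driven by each operator's task performance, is omitted):

```python
# Illustrative sketch of dynamic authority distribution between two
# operators: slave velocity commands are blended with a runtime-adjustable
# authority weight. The blending rule is an assumption for illustration.

def blend_commands(v1, v2, w1):
    """Convex combination of two operators' velocity commands.

    v1, v2 : velocity commands from operator 1 and 2 (same length)
    w1     : authority weight of operator 1, clamped to [0, 1]
    """
    w1 = min(max(w1, 0.0), 1.0)
    return [w1 * a + (1.0 - w1) * b for a, b in zip(v1, v2)]
```

Setting `w1` to 1 or 0 recovers single-operator teleoperation, so the same blending rule covers both the cooperative and the handover cases.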
Recently, we have streamlined our research focus in cooperative teleoperation toward a Human-Agent Teleoperation (HAT) architecture. Using Learning from Demonstrations (LfD), also known as Programming by Demonstrations (PbD), we have developed robust end-to-end teleoperation systems in which a human operator remotely trains an autonomous artificial agent through asynchronous, teleoperated demonstrations. This agent has proven even more promising when coupled with a human operator in a shared-control setting.
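To make the LfD loop concrete, the sketch below logs (state, action) pairs during teleoperated demonstrations and later replays the action of the nearest recorded state. The nearest-neighbor policy is a deliberately simple stand-in for whatever regressor or policy class is actually used in a HAT system.

```python
import numpy as np

# Toy sketch of Learning from Demonstrations through teleoperation:
# (state, action) pairs are recorded while the human teleoperates, then a
# simple nearest-neighbor policy replays the closest demonstrated action.
# The policy class is an illustrative stand-in, not a specific method.

class DemoPolicy:
    def __init__(self):
        self.states, self.actions = [], []

    def record(self, state, action):
        """Log one demonstration sample during teleoperation."""
        self.states.append(np.asarray(state, dtype=float))
        self.actions.append(np.asarray(action, dtype=float))

    def act(self, state):
        """Autonomous execution: return the nearest demonstrated action."""
        s = np.asarray(state, dtype=float)
        dists = [np.linalg.norm(s - d) for d in self.states]
        return self.actions[int(np.argmin(dists))]
```

In shared control, the output of `act` would be blended with the human's live command rather than executed alone, which is where the agent has shown the most promise.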
Our ongoing research in this area involves employing LfD through teleoperation for human expert skill transfer while ensuring task success and minimal human workload.
Master Device for Intuitive Teleoperation
Lack of situational awareness and hand-eye coordination mismatch make teleoperation challenging. Numerous research efforts have been made to resolve these issues. However, most previous research has used conventional general-purpose master interfaces, such as the Phantom, Omega, or Virtuose, which offer limited means of increasing intuitiveness. At IRiS Lab, we have been developing new types of task- or slave-specific master interfaces to further enhance intuitiveness through improved situational awareness and intrinsic hand-eye coordination matching.
*Image courtesy of DLR.