Aaltodoc publication archive

Browsing by Author "Huang, Yanlong"

Now showing 1 - 4 of 4
  • A Closed-Loop Shared Control Framework for Legged Robots
    (2024-02-01) Xu, Peng; Wang, Zhikai; Ding, Liang; Li, Zhengyang; Shi, Junyi; Gao, Haibo; Liu, Guangjun; Huang, Yanlong
    A1 Original article in a scientific journal
    Shared control, as a combination of human and robot intelligence, has been deemed a promising direction toward complementing the perception and learning capabilities of legged robots. However, previous works on human–robot control for legged robots are often limited to simple tasks, such as controlling movement direction, posture, or single-leg motion, while still requiring extensive operator training. To facilitate the transfer of human intelligence to legged robots in unstructured environments, this article presents a user-friendly closed-loop shared control framework. The main novelty is that the operator only needs to make decisions based on the recommendations of the autonomous algorithm, without having to handle low-level operations or contact-planning issues. Specifically, a rough navigation path from the operator is smoothed and optimized to generate a path with reduced traversal cost. The traversability of the generated path is assessed using fast Monte Carlo tree search, and the result is fed back through an intuitive image interface and force feedback to help the operator make decisions quickly, closing the shared-control loop. Simulation and hardware experiments on a hexapod robot show that the proposed framework exploits the complementary strengths of human–machine collaboration and outperforms comparison methods in terms of operator learning time, mission completion time, and success rate.
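    The traversability assessment invites a sampling-based illustration. The snippet below is a minimal Python sketch of Monte Carlo path evaluation on a grid costmap; it only conveys the flavour of sampling-based traversability scoring and is not the paper's fast Monte Carlo tree search. The hazard_prob map and the per-cell failure model are illustrative assumptions.

    import numpy as np

    def estimate_traversability(path_cells, hazard_prob, n_rollouts=1000, rng=None):
        # Monte Carlo estimate of the probability that a candidate path is
        # traversable: each rollout walks the path and fails at the first
        # cell whose (assumed) per-cell hazard is realised.
        rng = np.random.default_rng() if rng is None else rng
        successes = 0
        for _ in range(n_rollouts):
            if all(rng.random() >= hazard_prob[r, c] for r, c in path_cells):
                successes += 1
        return successes / n_rollouts

    # Example: a straight path across a 50x50 map with a 1% hazard per cell.
    hazard = np.full((50, 50), 0.01)
    path = [(25, c) for c in range(50)]
    print(estimate_traversability(path_cells=path, hazard_prob=hazard))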
  • A probabilistic framework for learning geometry-based robot manipulation skills
    (2021-07) Abu-Dakka, Fares J.; Huang, Yanlong; Silvério, João; Kyrki, Ville
    A1 Original article in a scientific journal
    Programming robots to perform complex manipulation tasks is difficult because many tasks require sophisticated controllers that rely on data such as manipulability ellipsoids, stiffness/damping matrices, and inertia matrices. Such data are naturally represented as Symmetric Positive Definite (SPD) matrices, whose geometric structure makes them difficult to hard-code. To alleviate this difficulty, the Learning from Demonstration (LfD) paradigm can be used to learn robot manipulation skills with specific geometric constraints encapsulated in SPD matrices. Learned skills often need to be adapted when they are applied to new situations. While existing techniques can adapt Cartesian and joint-space trajectories described by various desired points, the adaptation of motion skills encapsulated in SPD matrices remains an open problem. In this paper, we introduce a new LfD framework that learns robot manipulation skills encapsulated in SPD matrices from expert demonstrations and adapts them to new situations defined by new start-, via-, and end-matrices. The proposed approach leverages Kernelized Movement Primitives (KMPs) to generate SPD-based robot manipulation skills that smoothly adapt the demonstrations to conform to new constraints. We validate the proposed framework in several simulations and a real-robot experiment.
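    Because SPD matrices live on a curved manifold, blending them entrywise distorts their geometry (the well-known swelling effect of Euclidean averaging). As a hedged sketch of the underlying geometric machinery, not the paper's KMP-based framework itself, the snippet below interpolates two SPD matrices along the affine-invariant geodesic; the helper names are ours.

    import numpy as np

    def _spd_power(S, p):
        # Matrix power of an SPD matrix via its eigendecomposition.
        w, V = np.linalg.eigh(S)
        return (V * w**p) @ V.T

    def spd_geodesic(S1, S2, t):
        # Point at parameter t in [0, 1] on the affine-invariant geodesic
        # from S1 to S2: t = 0 returns S1, t = 1 returns S2, and every
        # intermediate point remains symmetric positive definite.
        S1_half = _spd_power(S1, 0.5)
        S1_ihalf = _spd_power(S1, -0.5)
        return S1_half @ _spd_power(S1_ihalf @ S2 @ S1_ihalf, t) @ S1_half

    # Example: blend two stiffness-like matrices halfway along the geodesic.
    A = np.diag([100.0, 10.0])
    B = np.array([[50.0, 20.0], [20.0, 80.0]])
    print(spd_geodesic(A, B, 0.5))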
  • Toward Orientation Learning and Adaptation in Cartesian Space
    (2020) Huang, Yanlong; Abu-Dakka, Fares J.; Silvério, João; Caldwell, Darwin G.
    A1 Original article in a scientific journal
    As a promising branch of robotics, imitation learning has emerged as an important way to transfer human skills to robots: human demonstrations represented in Cartesian or joint space are used to estimate task/skill models that can subsequently be generalized to new situations. While learning Cartesian positions suffices for many applications, many others also require the end-effector orientation. Despite recent advances in learning orientations from demonstrations, several crucial issues have not been adequately addressed. For instance, how can demonstrated orientations be adapted to pass through arbitrary desired points that comprise both orientations and angular velocities? In this article, we propose an approach that learns multiple orientation trajectories and adapts learned orientation skills to new situations (e.g., via-points and end-points), where both orientation and angular velocity are considered. Specifically, we introduce a kernelized treatment that removes the need for explicit basis functions when learning orientations, allowing orientation trajectories associated with high-dimensional inputs to be learned. In addition, we extend our approach to the learning of quaternions with angular-acceleration or jerk constraints, which allows for generating smoother orientation profiles for robots. Several examples, including experiments with real 7-DoF robot arms, are provided to verify the effectiveness of our method.
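    A standard device in this line of work is to project unit quaternions into a Euclidean tangent space, learn there, and map predictions back. The sketch below shows the usual logarithmic and exponential maps under a (w, x, y, z) convention; it illustrates the projection idea only and is not the article's full learning pipeline.

    import numpy as np

    def quat_log(q):
        # Logarithmic map: unit quaternion (w, x, y, z) -> 3-D tangent vector.
        w, v = q[0], q[1:]
        norm_v = np.linalg.norm(v)
        if norm_v < 1e-12:
            return np.zeros(3)
        return np.arccos(np.clip(w, -1.0, 1.0)) * v / norm_v

    def quat_exp(r):
        # Exponential map: 3-D tangent vector -> unit quaternion (w, x, y, z).
        n = np.linalg.norm(r)
        if n < 1e-12:
            return np.array([1.0, 0.0, 0.0, 0.0])
        return np.concatenate(([np.cos(n)], np.sin(n) * r / n))

    # Round trip: a quaternion mapped to the tangent space and back.
    q = np.array([np.cos(0.3), np.sin(0.3), 0.0, 0.0])  # rotation about x
    print(np.allclose(quat_exp(quat_log(q)), q))  # True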
  • Uncertainty-Aware Imitation Learning using Kernelized Movement Primitives
    (2019) Silvério, João; Huang, Yanlong; Abu-Dakka, Fares J.; Rozo, Leonel; Caldwell, Darwin G.
    A4 Article in conference proceedings
    During the past few years, probabilistic approaches to imitation learning have earned a prominent place in the robotics literature. One of their most notable features is that, in addition to extracting a mean trajectory from task demonstrations, they provide a variance estimate. The intuitive meaning of this variance, however, changes across techniques, indicating either variability or uncertainty. In this paper, we leverage kernelized movement primitives (KMP) to provide a new perspective on imitation learning by predicting variability, correlations, and uncertainty with a single model. This rich set of information is combined with a fusion of optimal controllers to learn robot actions from data, with two main advantages: i) robots become safe when uncertain about their actions, and ii) they can leverage partial demonstrations, given as elementary sub-tasks, to optimally perform a higher-level, more complex task. We showcase our approach in a painting task, where a human user and a KUKA robot collaborate to paint a wooden board. The task is divided into two sub-tasks, and we show that the robot becomes compliant (hence safe) outside the training regions and executes the two sub-tasks with optimal gains otherwise.
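    The "compliant when uncertain" idea can be conveyed with any predictor that exposes a variance. The sketch below uses a plain GP-style kernel regressor as a stand-in for KMP and maps predictive variance to a stiffness gain; the kernel, hyperparameters, and gain schedule are illustrative assumptions rather than the paper's formulation.

    import numpy as np

    def sq_exp_kernel(a, b, ell=0.1):
        # Squared-exponential kernel on 1-D inputs (e.g. time).
        return np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2 / ell**2)

    def predict(t_train, y_train, t_query, noise=1e-4, ell=0.1):
        # GP-style mean and variance; the variance grows away from the
        # demonstrations, signalling uncertainty about the action.
        K = sq_exp_kernel(t_train, t_train, ell) + noise * np.eye(len(t_train))
        Ks = sq_exp_kernel(t_query, t_train, ell)
        L = np.linalg.cholesky(K)
        alpha = np.linalg.solve(L.T, np.linalg.solve(L, y_train))
        v = np.linalg.solve(L, Ks.T)
        var = 1.0 - np.sum(v**2, axis=0)  # prior k(x, x) = 1 for this kernel
        return Ks @ alpha, np.maximum(var, 0.0)

    def stiffness_from_uncertainty(var, k_max=500.0, k_min=10.0, scale=50.0):
        # High uncertainty -> low stiffness: the robot yields (stays safe)
        # outside the training region and tracks firmly inside it.
        return k_min + (k_max - k_min) * np.exp(-scale * var)

    t = np.linspace(0.0, 0.5, 20)            # demonstrations cover [0, 0.5]
    y = np.sin(2 * np.pi * t)
    mean, var = predict(t, y, np.array([0.25, 0.9]))
    print(stiffness_from_uncertainty(var))   # stiff at 0.25, compliant at 0.9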