<?xml version="1.0" encoding="UTF-8" standalone="no"?>
<!DOCTYPE article PUBLIC "-//NLM//DTD Journal Publishing DTD v2.3 20070202//EN" "journalpublishing.dtd">
<article xmlns:mml="http://www.w3.org/1998/Math/MathML" xmlns:xlink="http://www.w3.org/1999/xlink" article-type="research-article">
<front>
<journal-meta>
<journal-id journal-id-type="publisher-id">Front. Robot. AI</journal-id>
<journal-title>Frontiers in Robotics and AI</journal-title>
<abbrev-journal-title abbrev-type="pubmed">Front. Robot. AI</abbrev-journal-title>
<issn pub-type="epub">2296-9144</issn>
<publisher>
<publisher-name>Frontiers Media S.A.</publisher-name>
</publisher>
</journal-meta>
<article-meta>
<article-id pub-id-type="doi">10.3389/frobt.2018.00132</article-id>
<article-categories>
<subj-group subj-group-type="heading">
<subject>Robotics and AI</subject>
<subj-group>
<subject>Original Research</subject>
</subj-group>
</subj-group>
</article-categories>
<title-group>
<article-title>Trajectory-Based Skill Learning Using Generalized Cylinders</article-title>
</title-group>
<contrib-group>
<contrib contrib-type="author">
<name><surname>Ahmadzadeh</surname> <given-names>S. Reza</given-names></name>
<xref ref-type="aff" rid="aff1"><sup>1</sup></xref>
<uri xlink:href="http://loop.frontiersin.org/people/357389/overview"/>
</contrib>
<contrib contrib-type="author" corresp="yes">
<name><surname>Chernova</surname> <given-names>Sonia</given-names></name>
<xref ref-type="aff" rid="aff2"><sup>2</sup></xref>
<xref ref-type="corresp" rid="c001"><sup>&#x0002A;</sup></xref>
<uri xlink:href="http://loop.frontiersin.org/people/580765/overview"/>
</contrib>
</contrib-group>
<aff id="aff1"><sup>1</sup><institution>Department of Computer Science, University of Massachusetts Lowell</institution>, <addr-line>Lowell, MA</addr-line>, <country>United States</country></aff>
<aff id="aff2"><sup>2</sup><institution>School of Interactive Computing, Georgia Institute of Technology</institution>, <addr-line>Atlanta, GA</addr-line>, <country>United States</country></aff>
<author-notes>
<fn fn-type="edited-by"><p>Edited by: Adriana Tapus, ENSTA ParisTech &#x000C9;cole Nationale Sup&#x000E9;rieure de Techniques Avanc&#x000E9;es, France</p></fn>
<fn fn-type="edited-by"><p>Reviewed by: Alan R. Wagner, Pennsylvania State University, United States; Fran&#x000E7;ois Ferland, Universit&#x000E9; de Sherbrooke, Canada; David Filliat, ENSTA ParisTech &#x000C9;cole Nationale Sup&#x000E9;rieure de Techniques Avanc&#x000E9;es, France</p></fn>
<corresp id="c001">&#x0002A;Correspondence: Sonia Chernova <email>chernova&#x00040;gatech.edu</email></corresp>
<fn fn-type="other" id="fn001"><p>This article was submitted to Human-Robot Interaction, a section of the journal Frontiers in Robotics and AI</p></fn></author-notes>
<pub-date pub-type="epub">
<day>18</day>
<month>12</month>
<year>2018</year>
</pub-date>
<pub-date pub-type="collection">
<year>2018</year>
</pub-date>
<volume>5</volume>
<elocation-id>132</elocation-id>
<history>
<date date-type="received">
<day>30</day>
<month>07</month>
<year>2018</year>
</date>
<date date-type="accepted">
<day>27</day>
<month>11</month>
<year>2018</year>
</date>
</history>
<permissions>
<copyright-statement>Copyright &#x000A9; 2018 Ahmadzadeh and Chernova.</copyright-statement>
<copyright-year>2018</copyright-year>
<copyright-holder>Ahmadzadeh and Chernova</copyright-holder>
<license xlink:href="http://creativecommons.org/licenses/by/4.0/"><p>This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.</p></license>
</permissions>
<abstract><p>In this article, we introduce Trajectory Learning using Generalized Cylinders (TLGC), a novel trajectory-based approach for learning skills from human demonstrations. To model a demonstrated skill, TLGC uses a Generalized Cylinder&#x02014;a geometric representation composed of an arbitrary space curve, called the spine, and a surface with smoothly varying cross-sections. Our approach is the first application of Generalized Cylinders to manipulation, and its geometric representation offers several key features: it identifies and extracts the implicit characteristics and boundaries of the skill by encoding the <italic>demonstration space</italic>; it supports the generation of multiple skill reproductions that maintain those characteristics; the constructed model can generalize the skill to unforeseen situations through trajectory editing techniques; and it allows for obstacle avoidance and interactive human refinement of the resulting model through kinesthetic correction. We validate our approach through a set of real-world experiments with both a Jaco 6-DOF and a Sawyer 7-DOF robotic arm.</p></abstract>
<kwd-group>
<kwd>learning from demonstration</kwd>
<kwd>trajectory-based skill</kwd>
<kwd>robot learning</kwd>
<kwd>physical human-robot interaction</kwd>
<kwd>skill refinement</kwd>
</kwd-group>
<counts>
<fig-count count="18"/>
<table-count count="1"/>
<equation-count count="17"/>
<ref-count count="39"/>
<page-count count="18"/>
<word-count count="12106"/>
</counts>
</article-meta>
</front>
<body>
<sec sec-type="intro" id="s1">
<title>1. Introduction</title>
<p>Learning from Demonstration (LfD) approaches provide the ability to interactively teach robots new skills, eliminating the need for manual programming of the desired behavior (Argall et al., <xref ref-type="bibr" rid="B5">2009b</xref>). By observing a set of human-provided examples and constructing a model, LfD approaches can reproduce the skill and generalize it to novel situations autonomously. These capabilities make LfD a powerful approach that has the potential to enable even non-expert users to teach new skills to robots with minimal effort<xref ref-type="fn" rid="fn0001"><sup>1</sup></xref>. However, despite the existence of several trajectory-based skill learning approaches, the vast majority of existing robotic platforms still rely on motion-level actions that are either hand-coded or captured through teleoperation by experts (Yanco et al., <xref ref-type="bibr" rid="B39">2015</xref>), highlighting the need for further advances in this area. To be effective, trajectory-based learning representations should: (a) require few tuning parameters and be easy to tune, especially by non-experts, (b) perform effectively and be robust to sub-optimal demonstrations, (c) generalize successfully not only over the initial and final states but also to unforeseen situations, and (d) support methods for refining a model constructed from a set of sub-optimal demonstrations.</p>
<p>While many LfD techniques exist that offer some subset of these requirements, no existing method fulfills all of these needs. In this paper, we present a novel LfD approach that meets all the above requirements through a geometric representation used to construct a model of the desired skill. The geometric representation is a generalized form of a standard cylinder, called a <italic>Generalized Cylinder</italic> (GC), for which the main axis is a regular curve in 3D Cartesian space (instead of a straight line) and the cross-sections can vary in size and shape (instead of circular cross-sections with fixed radius). We refer to the proposed approach as <italic>Trajectory Learning using Generalized Cylinders (TLGC)</italic>. One of the major advantages of employing generalized cylinders in our approach is that it allows for representing the <italic>demonstration space</italic> that implicitly encodes main characteristics of the demonstrated skill (i.e., the spatial correlations across different demonstrations).</p>
<p>In order to extract the underlying characteristics of the skill from the raw observations, TLGC requires minimal parameter tuning and can reproduce a variety of successful movements inside the boundaries of the encoded model by exploiting the whole demonstration space, thereby minimizing the effort required of the user. Moreover, our representation is visually perceivable, which makes it a powerful candidate for physical human-robot interaction. This capability also helps to overcome the issue of sub-optimal demonstrations by enabling the user to improve the learned model through physical motion refinement. We show that, unlike other existing techniques, our approach allows refinements to be applied through both incremental and constraint-based strategies. Consequently, the user can start from a set of (sub-optimal) demonstrations and refine the learned model interactively to reach the desired behavior.</p>
<p>To tackle the problem of generalization over terminal states, we use a nonrigid registration method to transfer the encoded model accordingly. This generalization approach preserves the main characteristics of the demonstrated skill while achieving the goal of the task and satisfying a set of constraints. We also discuss an alternate generalization method that offers enhanced robustness. Additionally, TLGC offers several strategies for dealing with obstacles during the reproduction of the skill. In summary, our approach (a) maintains the important characteristics and implicit boundaries of the skill by encoding the demonstration space, (b) requires minimal parameter tuning, (c) reproduces a variety of successful movements by exploiting the whole demonstration space, (d) generalizes over the terminal states of the skill by deforming the model while preserving its important characteristics, (e) enables users to provide physical feedback to improve the characteristics/quality of the learned skill interactively, and (f) offers multiple obstacle avoidance strategies.</p>
<p>In our prior work (Ahmadzadeh et al., <xref ref-type="bibr" rid="B3">2016</xref>), we encoded a set of demonstrations as a <italic>Canal Surface</italic> (CS), a simpler form of a generalized cylinder, and showed that multiple solutions of a skill can be reproduced inside the CS. We then considered a more flexible and generalized form for the representation with the use of generalized cylinders (Ahmadzadeh et al., <xref ref-type="bibr" rid="B2">2017</xref>). In this article, we merge prior work and extend the idea by introducing (a) a novel reproduction strategy with more flexibility, (b) an alternate method for generalization of skills with more robustness, (c) evaluation and comparison of generalization methods, (d) additional comparisons of skill reproduction and refinement against other approaches, and (e) three obstacle avoidance strategies.</p>
<p>We validate our approach in fourteen experiments using two physical robot arms (6-DOF and 7-DOF), and compare it against Dynamic Movement Primitives (Ijspeert et al., <xref ref-type="bibr" rid="B21">2013</xref>), Gaussian Mixture Models (GMM) (Calinon et al., <xref ref-type="bibr" rid="B11">2007</xref>), and GMM with weighted Expectation-Maximization (Argall et al., <xref ref-type="bibr" rid="B6">2010</xref>).</p>
</sec>
<sec id="s2">
<title>2. Related Work</title>
<p>In this section, we review related work on LfD approaches that are designed for modeling and reproduction of trajectory-based skills (i.e., movements). LfD approaches differ in the way they encode a demonstrated skill and retrieve a generalized form of the skill (Argall et al., <xref ref-type="bibr" rid="B5">2009b</xref>). One category of approaches uses probabilistic representations generated through regression (Vijayakumar et al., <xref ref-type="bibr" rid="B38">2005</xref>; Grimes et al., <xref ref-type="bibr" rid="B17">2006</xref>; Calinon et al., <xref ref-type="bibr" rid="B11">2007</xref>). Work by Calinon et al. (<xref ref-type="bibr" rid="B11">2007</xref>) uses a Gaussian Mixture Model (GMM) and retrieves a smooth trajectory using Gaussian Mixture Regression (GMR). The trajectory reproduced by GMR is attracted toward an average form of the demonstrated skill and cannot adapt to changes in initial and final states. To improve its generalization capabilities, a task parameterized extension of GMM/GMR was developed that assigns reference frames to task-related objects and landmarks (Calinon, <xref ref-type="bibr" rid="B10">2016</xref>). The resulting method generalizes better to novel situations but requires extensive parameter tuning for each trajectory (e.g., number of Gaussian components, scale, weight, kernel). Our approach generalizes to novel situations without the use of reference frames and requires minimal parameter tuning.</p>
<p>Grimes et al. (<xref ref-type="bibr" rid="B17">2006</xref>) employed Gaussian Process regression to learn and generalize over a set of demonstrated trajectories. Although Gaussian Processes (GPs) provide a non-parametric alternative, the computational complexity of conventional GP approaches scales cubically with the number of data points, limiting their effectiveness in trajectory-based LfD settings. To address this issue, in follow-on work, Schneider and Ertel (<xref ref-type="bibr" rid="B35">2010</xref>) used local Gaussian process regression. Another approach, LfD by Averaging Trajectories (LAT), uses only one-dimensional normal distributions to achieve lower computational cost (Reiner et al., <xref ref-type="bibr" rid="B34">2014</xref>). Both GPs and LAT reproduce an average form of the movement and cannot generalize to novel situations (e.g., terminal states). Our approach can reproduce multiple successful solutions of the demonstrated skill and generalize according to changes in the terminal states as well as in the environment.</p>
<p>An alternative to probabilistic approaches is to use dynamic systems to encode and reproduce skills (Hersch et al., <xref ref-type="bibr" rid="B19">2006</xref>; Khansari-Zadeh and Billard, <xref ref-type="bibr" rid="B22">2011</xref>; Ijspeert et al., <xref ref-type="bibr" rid="B21">2013</xref>). Dynamic Movement Primitives (DMPs) represent demonstrations as movements of a particle subject to a set of damped linear spring systems perturbed by an external force (Ijspeert et al., <xref ref-type="bibr" rid="B21">2013</xref>). The shape of the movement is approximated using Gaussian basis functions, and the weights are calculated using locally weighted regression. DMPs can handle generalization of the skill to new goal situations; however, because time is defined implicitly through a canonical system, the movement becomes slower as time increases. The implicit time dependency also makes the system sensitive to temporal perturbations. Finally, to maintain the shape of the movement during generalization, DMPs require significant tuning of continuous parameters, including those of the dynamic systems, such as time constants and scaling factors. Unlike DMPs, our approach is time-independent, requires minimal parameter tuning, and reproduces trajectories that do not converge to an average solution.</p>
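As a concrete illustration of the dynamic-systems formulation discussed above, the following Python sketch integrates a minimal one-dimensional DMP. This is not the implementation of Ijspeert et al.; the function names, gain values, and Euler integration are illustrative assumptions, and the learned forcing term (Gaussian basis functions fit by locally weighted regression) is stubbed to zero.

```python
# Minimal 1-D Dynamic Movement Primitive rollout (illustrative sketch only;
# gains and names are hypothetical, and the learned forcing term is a
# zero-valued stub instead of being fit by locally weighted regression).

def dmp_rollout(y0, goal, tau=1.0, dt=0.001, alpha=25.0, beta=6.25,
                alpha_x=1.0, forcing=lambda x: 0.0):
    """Integrate the transformation system
         tau * v' = alpha * (beta * (goal - y) - v) + (goal - y0) * x * f(x)
         tau * y' = v
       driven by the canonical system  tau * x' = -alpha_x * x  (x decays
       from 1 toward 0, phasing out the forcing term over time)."""
    y, v, x = y0, 0.0, 1.0
    traj = [y]
    for _ in range(int(round(tau / dt))):
        dv = (alpha * (beta * (goal - y) - v)
              + (goal - y0) * x * forcing(x)) / tau
        v += dv * dt
        y += (v / tau) * dt
        x += (-alpha_x * x / tau) * dt
        traj.append(y)
    return traj

traj = dmp_rollout(y0=0.0, goal=1.0)
```

With the forcing term stubbed out, the critically damped spring-damper dynamics (beta = alpha/4) simply converge to the goal attractor; it is this convergence toward a single solution that the text contrasts with TLGC's ability to reproduce multiple distinct reproductions.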
<p>Khansari-Zadeh and Billard (<xref ref-type="bibr" rid="B22">2011</xref>) introduced the Stable Estimator of Dynamical Systems (SEDS) approach which uses a constrained optimization technique to model a set of demonstrations as a dynamic system. Unlike DMPs, SEDS is robust to perturbations and ensures global asymptotic stability at the target. However, it requires a goal state and fails if the demonstrations do not converge to a single final state. It also cannot handle via-points (i.e., points where all demonstrations pass through with a very small variance). Similar to DMPs, SEDS relies on the first derivative of the motion (i.e., velocity) whether it is given through demonstrations or computed internally. Our approach does not require velocity data, can learn to move through via-points, and can handle demonstrations with different final states.</p>
<p>Other approaches such as Probabilistic Movement Primitives (ProMP) (Paraschos et al., <xref ref-type="bibr" rid="B30">2013</xref>) and Combined Learning from demonstration and Motion Planning (CLAMP) (Rana et al., <xref ref-type="bibr" rid="B33">2017</xref>) approximate a stochastic control policy in the form of a Gaussian Process. ProMP directly fits a Gaussian distribution to the demonstrations and then finds a control policy to reproduce the skill and satisfy the constraints from the skill and the environment. CLAMP, on the other hand, generates trajectories that naturally follow the demonstrated policy while satisfying the constraints. These approaches can reproduce various solutions within the Gaussian distribution; however, both are limited by modeling the movement as a discrete-time linear dynamic system.</p>
<p>Several other techniques utilize models with characteristics similar to generalized cylinders (Quinlan and Khatib, <xref ref-type="bibr" rid="B32">1993</xref>; Majumdar and Tedrake, <xref ref-type="bibr" rid="B26">2016</xref>). Quinlan and Khatib (<xref ref-type="bibr" rid="B32">1993</xref>) proposed elastic bands to tackle real-time obstacle avoidance in motion planning. Similar to our representation, their approach assigns a set of bubbles (i.e., 2D circles) to a global solution. By applying small, local changes to the constructed model, the global path can be deformed. However, the approach is limited to planar applications. The real-time motion planning approach proposed by Majumdar and Tedrake (<xref ref-type="bibr" rid="B26">2016</xref>) approximates a boundary around a trajectory (similar to elastic bands), which is visualized as a funnel. The generated funnels provide a representation similar to ours; however, TLGC does not require extensive off-line computation. Dong and Williams (<xref ref-type="bibr" rid="B15">2012</xref>) proposed probabilistic flow tubes to represent trajectories by extracting covariance data. The learned flow tube consists of a spine trajectory and 2D covariance data at each corresponding time-step. Although the approach was applied to extract a human&#x00027;s intention, the flow tube representation can be seen as a special case of TLGC in which the cross-sections are formed using the covariance data. Generalized cylinders have also been used in the context of providing safe human-robot interaction (Mart&#x000ED;nez-Salvador et al., <xref ref-type="bibr" rid="B27">2003</xref>; Corrales et al., <xref ref-type="bibr" rid="B14">2011</xref>). To avoid collisions between a robot arm and a human, Mart&#x000ED;nez-Salvador et al. (<xref ref-type="bibr" rid="B27">2003</xref>) used GCs to build a spatial model and proposed a computationally effective method for collision checking. In the approach proposed by Corrales et al. (<xref ref-type="bibr" rid="B14">2011</xref>), however, the shape of the GCs representing the robot also changes according to the robot&#x00027;s speed.</p>
<p>Regardless of the technique used for learning from demonstration, the capability of improving the learned model by refining its shape or spatial constraints is highly desirable. This capability can be provided through physical human-robot interaction. A few approaches enable the human to refine the initially given demonstrations. Argall et al. (<xref ref-type="bibr" rid="B6">2010</xref>) used tactile feedback for refining a given set of demonstrations and reusing the modified demonstrations to reproduce the skill through incremental learning. They used this approach for performing grasp-positioning tasks on a humanoid robot. Lee and Ott (<xref ref-type="bibr" rid="B24">2011</xref>) also proposed an incremental learning approach for iterative motion refinement. Their approach combines kinesthetic teaching with impedance control and represents the skill using a Hidden Markov Model (HMM). Our approach, on the other hand, can be used to refine the learned skill by interactively applying user-provided corrections to both demonstrations and reproductions.</p>
</sec>
<sec id="s3">
<title>3. Background</title>
<p>Consider a smooth, simple closed curve <italic>&#x003C1;</italic>, which is a non-self-intersecting continuous loop<xref ref-type="fn" rid="fn0002"><sup>2</sup></xref> in the plane &#x0211D;<sup>2</sup>, perpendicular to an arbitrary regular curve<xref ref-type="fn" rid="fn0003"><sup>3</sup></xref> &#x00393; in Cartesian space &#x0211D;<sup>3</sup> (see Figure <xref ref-type="fig" rid="F2">2A</xref>). A <italic>Generalized Cylinder</italic> (GC) is a 3D surface generated by translating the plane containing the closed curve <italic>&#x003C1;</italic> along the arbitrary curve &#x00393; while keeping the plane perpendicular to &#x00393; at each step. We refer to &#x00393; and <italic>&#x003C1;</italic> as the <italic>directrix</italic> (also spine) and the <italic>cross-sectional curve</italic>, respectively. While translating along the directrix, the cross-sectional curve can vary smoothly in both shape and size. Figure <xref ref-type="fig" rid="F2">2B</xref> illustrates six GCs with identical directrices but different cross-sectional functions.</p>
<fig id="F1" position="float">
<label>Figure 1</label>
<caption><p>Two robots reproducing three trajectory-based skills encoded and learned using TLGC.</p></caption>
<graphic xlink:href="frobt-05-00132-g0001.tif"/>
</fig>
<fig id="F2" position="float">
<label>Figure 2</label>
<caption><p><bold>(A)</bold> Formation of a generalized cylinder by translating the cross-sectional curve along the directrix. <bold>(B)</bold> Six generalized cylinders with identical directrices and different cross-sectional functions.</p></caption>
<graphic xlink:href="frobt-05-00132-g0002.tif"/>
</fig>
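The sweep construction described above can be sketched numerically. The fragment below is a simplified, hypothetical illustration rather than the authors' implementation: tangents are approximated by finite differences, the cross-section plane is seeded from a fixed reference vector (assumed not parallel to the spine direction) instead of an exact perpendicular-plane transport, and the cross-sections are circles whose radius may vary along the spine.

```python
import math

def _normalize(v):
    n = math.sqrt(sum(c * c for c in v))
    return tuple(c / n for c in v)

def _cross(a, b):
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

def sweep_generalized_cylinder(spine, radius_fn, n_cross=16,
                               ref=(1.0, 0.0, 0.0)):
    """Sample a generalized cylinder: sweep a circle of radius radius_fn(i)
    along the spine (a list of 3D points), keeping each circle in the plane
    orthogonal to the finite-difference tangent. 'ref' seeds the in-plane
    basis and must not be parallel to the spine direction."""
    rings = []
    for i, center in enumerate(spine):
        prv = spine[max(i - 1, 0)]
        nxt = spine[min(i + 1, len(spine) - 1)]
        tangent = _normalize(tuple(b - a for a, b in zip(prv, nxt)))
        e_n = _normalize(_cross(tangent, ref))   # first in-plane axis
        e_b = _cross(tangent, e_n)               # second in-plane axis
        r = radius_fn(i)
        ring = []
        for k in range(n_cross):
            th = 2.0 * math.pi * k / n_cross
            ring.append(tuple(c + r * (math.cos(th) * n + math.sin(th) * b)
                              for c, n, b in zip(center, e_n, e_b)))
        rings.append(ring)
    return rings

# A straight spine with constant radius reduces to an ordinary cylinder.
spine = [(0.0, 0.0, 0.25 * k) for k in range(9)]
rings = sweep_generalized_cylinder(spine, lambda i: 1.0, n_cross=8)
```

In TLGC itself the cross-sections are derived from the demonstration data rather than prescribed circles; this sketch only illustrates the underlying geometric sweep.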
<p>Generalized cylinders play a fundamental role in Differential Geometry, and in the context of Computer Aided Graphic Design, their applications include construction of smooth blending surfaces, shape reconstruction, and transition surfaces between pipes (Hartmann, <xref ref-type="bibr" rid="B18">2003</xref>). In robotics, generalized cylinders have been used for finding flyable paths for unmanned aerial vehicles (Shanmugavel et al., <xref ref-type="bibr" rid="B37">2007</xref>). They have also been used for collision detection during physical human-robot interaction (Mart&#x000ED;nez-Salvador et al., <xref ref-type="bibr" rid="B27">2003</xref>; Corrales et al., <xref ref-type="bibr" rid="B14">2011</xref>). To our knowledge, this is the first application of generalized cylinders to skill learning for manipulation. In this section, we first outline the mathematical definition and parameterized formulation of <italic>Canal Surfaces</italic> (CS) (Hilbert and Cohn-Vossen, <xref ref-type="bibr" rid="B20">1952</xref>), which are a simpler form of GCs, and then extend the formulae to generalized cylinders.</p>
<sec>
<title>3.1. Canal Surfaces</title>
<p>Let &#x0211D;<sup>3</sup> be Euclidean 3-space with Cartesian coordinates <italic>x</italic><sub>1</sub>, <italic>x</italic><sub>2</sub>, <italic>x</italic><sub>3</sub>. Let &#x003A6;<sub><italic>u</italic></sub> be the one-parameter pencil<xref ref-type="fn" rid="fn0004"><sup>4</sup></xref> of regular implicit surfaces<xref ref-type="fn" rid="fn0005"><sup>5</sup></xref> with real-valued parameter <italic>u</italic>. Two surfaces corresponding to different values of <italic>u</italic> intersect in a common curve. The surface generated by varying <italic>u</italic> is the envelope<xref ref-type="fn" rid="fn0006"><sup>6</sup></xref> of the given pencil of surfaces (Abbena et al., <xref ref-type="bibr" rid="B1">2006</xref>). The envelope can be defined by</p>
<disp-formula id="E1"><label>(1)</label><mml:math id="M1"><mml:mtable class="eqnarray" columnalign="right center left"><mml:mtr><mml:mtd><mml:msub><mml:mrow><mml:mtext>&#x003A6;</mml:mtext></mml:mrow><mml:mrow><mml:mi>u</mml:mi></mml:mrow></mml:msub><mml:mo>:</mml:mo><mml:mi>F</mml:mi><mml:mrow><mml:mo stretchy="false">(</mml:mo><mml:mrow><mml:mstyle mathvariant='bold'><mml:mtext>x</mml:mtext></mml:mstyle><mml:mo>;</mml:mo><mml:mi>u</mml:mi></mml:mrow><mml:mo stretchy="false">)</mml:mo></mml:mrow><mml:mo>=</mml:mo><mml:mn>0</mml:mn><mml:mo>,</mml:mo></mml:mtd></mml:mtr></mml:mtable></mml:math></disp-formula>
<disp-formula id="E2"><label>(2)</label><mml:math id="M2"><mml:mtable class="eqnarray" columnalign="right center left"><mml:mtr><mml:mtd><mml:mfrac><mml:mrow><mml:mi>&#x02202;</mml:mi><mml:mi>F</mml:mi><mml:mrow><mml:mo stretchy="false">(</mml:mo><mml:mrow><mml:mstyle mathvariant='bold'><mml:mtext>x</mml:mtext></mml:mstyle><mml:mo>;</mml:mo><mml:mi>u</mml:mi></mml:mrow><mml:mo stretchy="false">)</mml:mo></mml:mrow></mml:mrow><mml:mrow><mml:mi>&#x02202;</mml:mi><mml:mi>u</mml:mi></mml:mrow></mml:mfrac><mml:mo>=</mml:mo><mml:mn>0</mml:mn><mml:mo>,</mml:mo></mml:mtd></mml:mtr></mml:mtable></mml:math></disp-formula>
<p>where <inline-formula><mml:math id="M3"><mml:mstyle class="text"><mml:mtext mathvariant="bold">x</mml:mtext></mml:mstyle><mml:mo>=</mml:mo><mml:msup><mml:mrow><mml:mrow><mml:mo>[</mml:mo><mml:mrow><mml:msub><mml:mrow><mml:mi>x</mml:mi></mml:mrow><mml:mrow><mml:mn>1</mml:mn></mml:mrow></mml:msub><mml:mo>,</mml:mo><mml:msub><mml:mrow><mml:mi>x</mml:mi></mml:mrow><mml:mrow><mml:mn>2</mml:mn></mml:mrow></mml:msub><mml:mo>,</mml:mo><mml:msub><mml:mrow><mml:mi>x</mml:mi></mml:mrow><mml:mrow><mml:mn>3</mml:mn></mml:mrow></mml:msub></mml:mrow><mml:mo>]</mml:mo></mml:mrow></mml:mrow><mml:mrow><mml:mo>&#x022A4;</mml:mo></mml:mrow></mml:msup></mml:math></inline-formula> and &#x003A6;<sub><italic>u</italic></sub> consists of implicit <italic>C</italic><sup>2</sup>&#x02212;surfaces which are at least twice continuously differentiable. A <bold>canal surface</bold> <inline-formula><mml:math id="M4"><mml:msub><mml:mrow><mml:mrow><mml:mi mathvariant="-tex-caligraphic">C</mml:mi></mml:mrow></mml:mrow><mml:mrow><mml:mi>u</mml:mi></mml:mrow></mml:msub></mml:math></inline-formula> is defined as an envelope of the one-parameter pencil of spheres and can be written as</p>
<disp-formula id="E3"><label>(3)</label><mml:math id="M5"><mml:mtable class="eqnarray" columnalign="right center left"><mml:mtr><mml:mtd><mml:msub><mml:mrow><mml:mrow><mml:mi mathvariant="-tex-caligraphic">C</mml:mi></mml:mrow></mml:mrow><mml:mrow><mml:mi>u</mml:mi></mml:mrow></mml:msub><mml:mo>:</mml:mo><mml:mi>f</mml:mi><mml:mrow><mml:mo stretchy="false">(</mml:mo><mml:mrow><mml:mstyle mathvariant='bold'><mml:mtext>x</mml:mtext></mml:mstyle><mml:mo>;</mml:mo><mml:mi>u</mml:mi></mml:mrow><mml:mo stretchy="false">)</mml:mo></mml:mrow><mml:mo>:</mml:mo><mml:mo>=</mml:mo><mml:mrow><mml:mo>{</mml:mo><mml:mrow><mml:mrow><mml:mo stretchy="false">(</mml:mo><mml:mrow><mml:mstyle mathvariant='bold'><mml:mtext>c</mml:mtext></mml:mstyle><mml:mrow><mml:mo stretchy="false">(</mml:mo><mml:mrow><mml:mi>u</mml:mi></mml:mrow><mml:mo stretchy="false">)</mml:mo></mml:mrow><mml:mo>,</mml:mo><mml:mi>r</mml:mi><mml:mrow><mml:mo stretchy="false">(</mml:mo><mml:mrow><mml:mi>u</mml:mi></mml:mrow><mml:mo stretchy="false">)</mml:mo></mml:mrow></mml:mrow><mml:mo stretchy="false">)</mml:mo></mml:mrow><mml:mo>&#x02208;</mml:mo><mml:msup><mml:mrow><mml:mi>&#x0211D;</mml:mi></mml:mrow><mml:mrow><mml:mn>3</mml:mn><mml:mo>,</mml:mo><mml:mn>1</mml:mn></mml:mrow></mml:msup><mml:mo>|</mml:mo><mml:mi>u</mml:mi><mml:mo>&#x02208;</mml:mo><mml:mi>&#x0211D;</mml:mi></mml:mrow><mml:mo>}</mml:mo></mml:mrow><mml:mo>,</mml:mo></mml:mtd></mml:mtr></mml:mtable></mml:math></disp-formula>
<p>where the spheres are centered on the regular curve &#x00393;: <bold>x</bold> &#x0003D; <bold>c</bold>(<italic>u</italic>) &#x02208; &#x0211D;<sup>3</sup> in Cartesian space. The radii of the spheres are given by the function <italic>r</italic>(<italic>u</italic>) &#x02208; &#x0211D;, which is a <italic>C</italic><sup>1</sup>-function. The non-degeneracy condition is satisfied by assuming <italic>r</italic> &#x0003E; 0 and <inline-formula><mml:math id="M6"><mml:mo>|</mml:mo><mml:mi>&#x01E59;</mml:mi><mml:mo>|</mml:mo><mml:mo>&#x0003C;</mml:mo><mml:mo>|</mml:mo><mml:mo>|</mml:mo><mml:mover accent="true"><mml:mrow><mml:mstyle class="text"><mml:mtext mathvariant="bold">c</mml:mtext></mml:mstyle></mml:mrow><mml:mo>&#x002D9;</mml:mo></mml:mover><mml:mo>|</mml:mo><mml:mo>|</mml:mo></mml:math></inline-formula> (Hartmann, <xref ref-type="bibr" rid="B18">2003</xref>). &#x00393; is the <italic>directrix</italic> (spine) and <italic>r</italic>(<italic>u</italic>) is the cross-sectional function, which in this case is called the <italic>radii</italic> function. For the one-parameter pencil of <bold>spheres</bold>, Equation (3) can be written as</p>
<disp-formula id="E4"><label>(4)</label><mml:math id="M7"><mml:mtable class="eqnarray" columnalign="right center left"><mml:mtr><mml:mtd><mml:msub><mml:mrow><mml:mrow><mml:mi mathvariant="-tex-caligraphic">C</mml:mi></mml:mrow></mml:mrow><mml:mrow><mml:mi>u</mml:mi></mml:mrow></mml:msub><mml:mo>:</mml:mo><mml:mi>f</mml:mi><mml:mrow><mml:mo stretchy="false">(</mml:mo><mml:mrow><mml:mstyle mathvariant='bold'><mml:mtext>x</mml:mtext></mml:mstyle><mml:mo>;</mml:mo><mml:mi>u</mml:mi></mml:mrow><mml:mo stretchy="false">)</mml:mo></mml:mrow><mml:mo>:</mml:mo><mml:mo>=</mml:mo><mml:msup><mml:mrow><mml:mrow><mml:mo stretchy="false">(</mml:mo><mml:mrow><mml:mstyle mathvariant='bold'><mml:mtext>x</mml:mtext></mml:mstyle><mml:mo>-</mml:mo><mml:mstyle mathvariant='bold'><mml:mtext>c</mml:mtext></mml:mstyle><mml:mrow><mml:mo stretchy="false">(</mml:mo><mml:mrow><mml:mi>u</mml:mi></mml:mrow><mml:mo stretchy="false">)</mml:mo></mml:mrow></mml:mrow><mml:mo stretchy="false">)</mml:mo></mml:mrow></mml:mrow><mml:mrow><mml:mn>2</mml:mn></mml:mrow></mml:msup><mml:mo>-</mml:mo><mml:mi>r</mml:mi><mml:msup><mml:mrow><mml:mrow><mml:mo stretchy="false">(</mml:mo><mml:mrow><mml:mi>u</mml:mi></mml:mrow><mml:mo stretchy="false">)</mml:mo></mml:mrow></mml:mrow><mml:mrow><mml:mn>2</mml:mn></mml:mrow></mml:msup><mml:mo>=</mml:mo><mml:mn>0</mml:mn><mml:mo>.</mml:mo></mml:mtd></mml:mtr></mml:mtable></mml:math></disp-formula>
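Equations (1)&#x02013;(4) can be checked numerically: a point lies on the canal surface exactly when it satisfies both the implicit sphere equation <italic>f</italic>(<bold>x</bold>; <italic>u</italic>) = 0 and the envelope condition &#x02202;<italic>F</italic>/&#x02202;<italic>u</italic> = 0. The Python sketch below (with illustrative names and a central-difference derivative, not code from the paper) verifies both conditions for the degenerate case of a straight spine with constant radius, whose envelope is an ordinary circular cylinder.

```python
def canal_f(x, u, spine, radius):
    """Implicit pencil of spheres, Eq. (4): f(x; u) = ||x - c(u)||^2 - r(u)^2."""
    c = spine(u)
    return sum((xi - ci) ** 2 for xi, ci in zip(x, c)) - radius(u) ** 2

def canal_df_du(x, u, spine, radius, h=1e-6):
    """Envelope condition, Eq. (2), approximated by a central finite difference."""
    return (canal_f(x, u + h, spine, radius)
            - canal_f(x, u - h, spine, radius)) / (2.0 * h)

# Degenerate example: straight spine c(u) = (u, 0, 0) with constant radius 1.
# The envelope of this pencil of spheres is the cylinder x2^2 + x3^2 = 1.
spine = lambda u: (u, 0.0, 0.0)
radius = lambda u: 1.0
p = (0.5, 1.0, 0.0)  # a point on that cylinder, touching the sphere at u = 0.5
```

Both `canal_f(p, 0.5, spine, radius)` and `canal_df_du(p, 0.5, spine, radius)` evaluate to zero (up to numerical error) for this point, whereas a point strictly inside one of the spheres satisfies neither condition.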
<p>Two canal surfaces with fixed and varying cross-sections are depicted in Figure <xref ref-type="fig" rid="F2">2B</xref> (bottom-left) and Figure <xref ref-type="fig" rid="F2">2B</xref> (top-left), respectively.</p>
</sec>
<sec>
<title>3.2. Generalized Cylinders</title>
<p>Since canal surfaces are constructed using the one-parameter pencil of spheres, the cross-sectional curve is always a circle even though its radius can vary along the directrix. Generalized cylinders extend this notion by considering an arbitrary cross-sectional curve that can vary in both shape and size while sweeping along the directrix. These variations make generalized cylinders a powerful candidate for modeling the complicated constraints of trajectory-based skills captured through demonstrations. We define a <bold>generalized cylinder</bold> <inline-formula><mml:math id="M8"><mml:msub><mml:mrow><mml:mrow><mml:mi mathvariant="-tex-caligraphic">G</mml:mi></mml:mrow></mml:mrow><mml:mrow><mml:mi>u</mml:mi><mml:mo>,</mml:mo><mml:mi>v</mml:mi></mml:mrow></mml:msub></mml:math></inline-formula> as</p>
<disp-formula id="E5"><label>(5)</label><mml:math id="M9"><mml:mtable class="eqnarray" columnalign="right center left"><mml:mtr><mml:mtd><mml:msub><mml:mrow><mml:mrow><mml:mi mathvariant="-tex-caligraphic">G</mml:mi></mml:mrow></mml:mrow><mml:mrow><mml:mi>u</mml:mi><mml:mo>,</mml:mo><mml:mi>v</mml:mi></mml:mrow></mml:msub><mml:mo>:</mml:mo><mml:mi>f</mml:mi><mml:mrow><mml:mo stretchy="false">(</mml:mo><mml:mrow><mml:mstyle mathvariant='bold'><mml:mtext>x</mml:mtext></mml:mstyle><mml:mo>;</mml:mo><mml:mi>u</mml:mi><mml:mo>,</mml:mo><mml:mi>v</mml:mi></mml:mrow><mml:mo stretchy="false">)</mml:mo></mml:mrow><mml:mo>:</mml:mo><mml:mo>=</mml:mo><mml:mrow><mml:mo>{</mml:mo><mml:mrow><mml:mstyle mathvariant='bold'><mml:mtext>c</mml:mtext></mml:mstyle><mml:mrow><mml:mo stretchy="false">(</mml:mo><mml:mrow><mml:mi>u</mml:mi></mml:mrow><mml:mo stretchy="false">)</mml:mo></mml:mrow><mml:mo>,</mml:mo><mml:mi>&#x003C1;</mml:mi><mml:mrow><mml:mo stretchy="false">(</mml:mo><mml:mrow><mml:mi>u</mml:mi><mml:mo>,</mml:mo><mml:mi>v</mml:mi></mml:mrow><mml:mo stretchy="false">)</mml:mo></mml:mrow><mml:mo>&#x02208;</mml:mo><mml:msup><mml:mrow><mml:mi>&#x0211D;</mml:mi></mml:mrow><mml:mrow><mml:mn>3</mml:mn><mml:mo>,</mml:mo><mml:mn>2</mml:mn></mml:mrow></mml:msup><mml:mo>|</mml:mo><mml:mi>u</mml:mi><mml:mo>,</mml:mo><mml:mi>v</mml:mi><mml:mo>&#x02208;</mml:mo><mml:mi>&#x0211D;</mml:mi></mml:mrow><mml:mo>}</mml:mo></mml:mrow><mml:mo>,</mml:mo></mml:mtd></mml:mtr></mml:mtable></mml:math></disp-formula>
<p>where <italic>&#x003C1;</italic>(<italic>u, v</italic>) represents the cross-sectional function with parameters <italic>u</italic>, the arc-length on the directrix, and <italic>v</italic>, the arc-length on the cross-sectional curve. The dependence on <italic>u</italic> reflects the fact that the cross-section&#x00027;s shape may vary along the directrix. To obtain a parametric representation for generalized cylinders, it is useful to employ a local coordinate system with its origin on the directrix. A convenient choice is the Frenet-Serret (or TNB) frame, which is suitable for describing the kinematic properties of a particle moving along a continuous and differentiable curve in &#x0211D;<sup>3</sup>. The TNB frame is an orthonormal basis composed of three unit vectors: the unit tangent vector <bold>e</bold><sub><italic>T</italic></sub>, the unit normal vector <bold>e</bold><sub><italic>N</italic></sub>, and the unit binormal vector <bold>e</bold><sub><italic>B</italic></sub>. For a non-degenerate directrix curve &#x00393;:<bold>x</bold>(<italic>u</italic>), the TNB frame can be defined by</p>
<disp-formula id="E7"><label>(6)</label><mml:math id="M11"><mml:mrow><mml:mtable columnalign='left'><mml:mtr columnalign='left'><mml:mtd columnalign='left'><mml:mrow><mml:msub><mml:mstyle mathvariant='bold' mathsize='normal'><mml:mi>e</mml:mi></mml:mstyle><mml:mi>T</mml:mi></mml:msub><mml:mo stretchy='false'>(</mml:mo><mml:mi>u</mml:mi><mml:mo stretchy='false'>)</mml:mo></mml:mrow></mml:mtd><mml:mtd columnalign='left'><mml:mrow><mml:mo>=</mml:mo><mml:mfrac><mml:mrow><mml:mi>d</mml:mi><mml:mstyle mathvariant='bold' mathsize='normal'><mml:mi>x</mml:mi></mml:mstyle><mml:mo stretchy='false'>(</mml:mo><mml:mi>u</mml:mi><mml:mo stretchy='false'>)</mml:mo></mml:mrow><mml:mrow><mml:mi>d</mml:mi><mml:mi>u</mml:mi></mml:mrow></mml:mfrac><mml:mo>,</mml:mo></mml:mrow></mml:mtd></mml:mtr><mml:mtr columnalign='left'><mml:mtd columnalign='left'><mml:mrow><mml:msub><mml:mstyle mathvariant='bold' mathsize='normal'><mml:mi>e</mml:mi></mml:mstyle><mml:mi>N</mml:mi></mml:msub><mml:mo stretchy='false'>(</mml:mo><mml:mi>u</mml:mi><mml:mo stretchy='false'>)</mml:mo></mml:mrow></mml:mtd><mml:mtd columnalign='left'><mml:mrow><mml:mo>=</mml:mo><mml:mfrac><mml:mrow><mml:mi>d</mml:mi><mml:msub><mml:mstyle mathvariant='bold' mathsize='normal'><mml:mi>e</mml:mi></mml:mstyle><mml:mi>T</mml:mi></mml:msub></mml:mrow><mml:mrow><mml:mi>d</mml:mi><mml:mi>u</mml:mi></mml:mrow></mml:mfrac><mml:mo>/</mml:mo><mml:mo>&#x02016;</mml:mo><mml:mfrac><mml:mrow><mml:mi>d</mml:mi><mml:msub><mml:mstyle mathvariant='bold' mathsize='normal'><mml:mi>e</mml:mi></mml:mstyle><mml:mi>T</mml:mi></mml:msub></mml:mrow><mml:mrow><mml:mi>d</mml:mi><mml:mi>u</mml:mi></mml:mrow></mml:mfrac><mml:mo>&#x02016;</mml:mo><mml:mo>,</mml:mo></mml:mrow></mml:mtd></mml:mtr><mml:mtr columnalign='left'><mml:mtd columnalign='left'><mml:mrow><mml:msub><mml:mstyle mathvariant='bold' mathsize='normal'><mml:mi>e</mml:mi></mml:mstyle><mml:mi>B</mml:mi></mml:msub><mml:mo stretchy='false'>(</mml:mo><mml:mi>u</mml:mi><mml:mo 
stretchy='false'>)</mml:mo></mml:mrow></mml:mtd><mml:mtd columnalign='left'><mml:mrow><mml:mo>=</mml:mo><mml:msub><mml:mstyle mathvariant='bold' mathsize='normal'><mml:mi>e</mml:mi></mml:mstyle><mml:mi>T</mml:mi></mml:msub><mml:mo>&#x000D7;</mml:mo><mml:msub><mml:mstyle mathvariant='bold' mathsize='normal'><mml:mi>e</mml:mi></mml:mstyle><mml:mi>N</mml:mi></mml:msub><mml:mo>,</mml:mo></mml:mrow></mml:mtd></mml:mtr></mml:mtable></mml:mrow></mml:math></disp-formula>
<p>where ||.|| denotes the Euclidean norm of a vector and &#x000D7; denotes the cross product operation. Since <bold>e</bold><sub><italic>T</italic></sub> is tangent to the directrix, we keep the cross-sectional curve in the plane defined by <bold>e</bold><sub><italic>N</italic></sub> and <bold>e</bold><sub><italic>B</italic></sub>. Using this convention, we form a parametric representation of generalized cylinders as</p>
<disp-formula id="E9"><label>(7)</label><mml:math id="M13"><mml:mtable class="eqnarray" columnalign="right center left"><mml:mtr><mml:mtd><mml:msub><mml:mrow><mml:mrow><mml:mi mathvariant="-tex-caligraphic">G</mml:mi></mml:mrow></mml:mrow><mml:mrow><mml:mi>u</mml:mi><mml:mo>,</mml:mo><mml:mi>v</mml:mi></mml:mrow></mml:msub><mml:mo>:</mml:mo><mml:mi>f</mml:mi><mml:mrow><mml:mo stretchy="false">(</mml:mo><mml:mrow><mml:mi>u</mml:mi><mml:mo>,</mml:mo><mml:mi>v</mml:mi></mml:mrow><mml:mo stretchy="false">)</mml:mo></mml:mrow><mml:mo>:</mml:mo><mml:mo>=</mml:mo><mml:mstyle mathvariant='bold'><mml:mtext>c</mml:mtext></mml:mstyle><mml:mrow><mml:mo stretchy="false">(</mml:mo><mml:mrow><mml:mi>u</mml:mi></mml:mrow><mml:mo stretchy="false">)</mml:mo></mml:mrow><mml:mo>&#x0002B;</mml:mo><mml:msub><mml:mrow><mml:mi>&#x003C1;</mml:mi></mml:mrow><mml:mrow><mml:msub><mml:mrow><mml:mi>x</mml:mi></mml:mrow><mml:mrow><mml:mn>1</mml:mn></mml:mrow></mml:msub></mml:mrow></mml:msub><mml:mrow><mml:mo stretchy="false">(</mml:mo><mml:mrow><mml:mi>u</mml:mi><mml:mo>,</mml:mo><mml:mi>v</mml:mi></mml:mrow><mml:mo stretchy="false">)</mml:mo></mml:mrow><mml:msub><mml:mrow><mml:mstyle mathvariant='bold'><mml:mtext>e</mml:mtext></mml:mstyle></mml:mrow><mml:mrow><mml:mi>N</mml:mi></mml:mrow></mml:msub><mml:mrow><mml:mo stretchy="false">(</mml:mo><mml:mrow><mml:mi>u</mml:mi></mml:mrow><mml:mo stretchy="false">)</mml:mo></mml:mrow><mml:mo>&#x0002B;</mml:mo><mml:msub><mml:mrow><mml:mi>&#x003C1;</mml:mi></mml:mrow><mml:mrow><mml:msub><mml:mrow><mml:mi>x</mml:mi></mml:mrow><mml:mrow><mml:mn>2</mml:mn></mml:mrow></mml:msub></mml:mrow></mml:msub><mml:mrow><mml:mo stretchy="false">(</mml:mo><mml:mrow><mml:mi>u</mml:mi><mml:mo>,</mml:mo><mml:mi>v</mml:mi></mml:mrow><mml:mo stretchy="false">)</mml:mo></mml:mrow><mml:msub><mml:mrow><mml:mstyle mathvariant='bold'><mml:mtext>e</mml:mtext></mml:mstyle></mml:mrow><mml:mrow><mml:mi>B</mml:mi></mml:mrow></mml:msub><mml:mrow><mml:mo 
stretchy="false">(</mml:mo><mml:mrow><mml:mi>u</mml:mi></mml:mrow><mml:mo stretchy="false">)</mml:mo></mml:mrow><mml:mo>.</mml:mo></mml:mtd></mml:mtr></mml:mtable></mml:math></disp-formula>
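<p>For illustration, equations (6) and (7) can be evaluated numerically on a discretized directrix. The following sketch estimates the TNB frames with finite differences and assembles the surface points of equation (7); the function names <monospace>tnb_frames</monospace> and <monospace>gc_surface</monospace> and the callable cross-section API are our assumptions for this example, not the authors' code. The circular cross-section used in the example reduces the generalized cylinder to a canal surface.</p>

```python
import numpy as np

def tnb_frames(c):
    """Estimate Frenet-Serret (TNB) frames along a discretized directrix
    c of shape (N, 3) by finite differences, following Eq. (6)."""
    dc = np.gradient(c, axis=0)
    e_t = dc / np.linalg.norm(dc, axis=1, keepdims=True)
    de_t = np.gradient(e_t, axis=0)
    e_n = de_t / np.linalg.norm(de_t, axis=1, keepdims=True)
    e_b = np.cross(e_t, e_n)              # Eq. (6): e_B = e_T x e_N
    return e_t, e_n, e_b

def gc_surface(c, rho1, rho2, n_v=60):
    """Evaluate Eq. (7): points c(u) + rho1*e_N(u) + rho2*e_B(u).
    rho1 and rho2 are callables (i, v) -> scalar (illustrative API)."""
    e_t, e_n, e_b = tnb_frames(c)
    vs = np.linspace(0.0, 2.0 * np.pi, n_v)
    surf = np.empty((len(c), n_v, 3))
    for i in range(len(c)):
        for j, v in enumerate(vs):
            surf[i, j] = c[i] + rho1(i, v) * e_n[i] + rho2(i, v) * e_b[i]
    return surf

# Helical directrix with a constant circular cross-section of radius 0.1,
# which reduces the generalized cylinder to a canal surface.
u = np.linspace(0.0, 2.0 * np.pi, 100)
c = np.stack([np.cos(u), np.sin(u), 0.2 * u], axis=1)
surf = gc_surface(c, lambda i, v: 0.1 * np.cos(v), lambda i, v: 0.1 * np.sin(v))
```

<p>Because the helix has nonzero curvature everywhere, the Frenet normal is well-defined at every sample; the degenerate case is discussed next.</p>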
<p>Calculating Frenet-Serret frames from real data is sensitive to noise: at some points the derivative vector <inline-formula><mml:math id="M14"><mml:mfrac><mml:mrow><mml:mi>d</mml:mi><mml:msub><mml:mrow><mml:mstyle class="text"><mml:mtext mathvariant="bold">e</mml:mtext></mml:mstyle></mml:mrow><mml:mrow><mml:mi>T</mml:mi></mml:mrow></mml:msub></mml:mrow><mml:mrow><mml:mi>d</mml:mi><mml:mi>u</mml:mi></mml:mrow></mml:mfrac></mml:math></inline-formula> vanishes and the formulae no longer apply (i.e., <bold>e</bold><sub><italic>N</italic></sub> becomes undefined wherever the curvature is zero). This problem can be addressed by calculating the unit normal vector <bold>e</bold><sub><italic>N</italic></sub> as the cross product of an arbitrary vector with the unit tangent vector <bold>e</bold><sub><italic>T</italic></sub>. In this article, we use an improved technique called Bishop frames (Bishop, <xref ref-type="bibr" rid="B8">1975</xref>), which tackles the issue by employing the concept of relatively parallel fields. Alternate techniques, such as Beta frames (Carroll et al., <xref ref-type="bibr" rid="B13">2013</xref>), can also be employed.</p>
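<p>As a sketch of this idea, a relatively parallel (Bishop, 1975) frame can be approximated on a discretized curve by parallel-transporting an initial normal from sample to sample; the rotation that carries one tangent onto the next is applied to the normal via the Rodrigues formula. The function name and this particular transport scheme are our assumptions for illustration, not the authors' implementation.</p>

```python
import numpy as np

def bishop_frames(c):
    """Approximate rotation-minimizing (Bishop) frames along a curve
    c of shape (N, 3) by parallel transport of an initial normal; this
    avoids the undefined Frenet normal at zero-curvature points."""
    t = np.gradient(c, axis=0)
    t /= np.linalg.norm(t, axis=1, keepdims=True)
    # Initial normal: any unit vector orthogonal to the first tangent.
    ref = np.array([1.0, 0.0, 0.0])
    if abs(np.dot(ref, t[0])) > 0.9:
        ref = np.array([0.0, 1.0, 0.0])
    n = np.empty_like(t)
    n[0] = ref - np.dot(ref, t[0]) * t[0]
    n[0] /= np.linalg.norm(n[0])
    for i in range(len(c) - 1):
        axis = np.cross(t[i], t[i + 1])
        s = np.linalg.norm(axis)
        if s < 1e-12:                     # parallel tangents: copy normal
            n[i + 1] = n[i]
            continue
        axis /= s
        ang = np.arctan2(s, np.dot(t[i], t[i + 1]))
        # Rodrigues rotation of n[i] about axis by angle ang.
        n[i + 1] = (n[i] * np.cos(ang)
                    + np.cross(axis, n[i]) * np.sin(ang)
                    + axis * np.dot(axis, n[i]) * (1 - np.cos(ang)))
        n[i + 1] -= np.dot(n[i + 1], t[i + 1]) * t[i + 1]  # re-orthogonalize
        n[i + 1] /= np.linalg.norm(n[i + 1])
    b = np.cross(t, n)
    return t, n, b

u = np.linspace(0.0, 4.0 * np.pi, 200)
helix = np.stack([np.cos(u), np.sin(u), 0.1 * u], axis=1)
t, n, b = bishop_frames(helix)
```

<p>Unlike the Frenet normal, the transported normal varies smoothly and remains defined even where the curve momentarily straightens.</p>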
</sec>
</sec>
<sec id="s4">
<title>4. Skill Learning Using Generalized Cylinders</title>
<p>In this section, we explain how generalized cylinders can be used to encode, reproduce, and generalize trajectory-based skills from demonstrations. We assume that multiple examples of a skill are demonstrated and captured as a set of trajectories. To capture demonstrations we use kinesthetic teaching (Figure <xref ref-type="fig" rid="F3">3</xref>); however, alternate demonstration techniques, such as teleoperation and shadowing, can be employed (Argall et al., <xref ref-type="bibr" rid="B5">2009b</xref>). Given the set of captured demonstrations, our approach first calculates the directrix (i.e., an average form of the movements) and then extracts the main characteristics of the set (i.e., spatial correlations across demonstrations) and forms the cross-sectional function by identifying its boundaries.</p>
<fig id="F3" position="float">
<label>Figure 3</label>
<caption><p>One of the authors of the paper demonstrating a reaching skill using kinesthetic teaching. Captured task-space poses of the end-effector are used as input for learning.</p></caption>
<graphic xlink:href="frobt-05-00132-g0003.tif"/>
</fig>
<p>Once the GC has been constructed, a geometric approach is used to generate new trajectories starting from arbitrary initial poses. We also estimate a transformation function that generalizes the encoded skill over terminal constraints (i.e., novel initial and final poses). Algorithm <xref ref-type="table" rid="T2">1</xref> presents pseudocode for the proposed approach.</p>
<table-wrap position="float" id="T2">
<caption><p><bold>Algorithm 1</bold> Skill Learning using Generalized Cylinders</p></caption>
<table frame="hsides" rules="groups">
<tbody>
<tr><td align="left" valign="top"><monospace>1:</monospace></td>
<td align="left" valign="top"><monospace>&#x000A0;procedure <sc>Encoding demonstrations</sc> </monospace></td></tr>
<tr><td align="left" valign="top"><monospace> 2:</monospace></td>
<td align="left" valign="top"><monospace>&#x000A0;&#x000A0;&#x000A0;&#x000A0;Input:set of <italic>n</italic> demonstrations <bold>&#x003BE;</bold>&#x02208;&#x0211D;<sup>3 &#x000D7; <italic>N</italic>&#x000D7;<italic>n</italic></sup> </monospace></td></tr>
<tr><td align="left" valign="top"><monospace> 3:</monospace></td>
<td align="left" valign="top"><monospace>&#x000A0;&#x000A0;&#x000A0;&#x000A0;Output:Generalized cylinder <inline-formula><mml:math id="M116"><mml:msub><mml:mrow><mml:mrow><mml:mi mathvariant="-tex-caligraphic">G</mml:mi></mml:mrow></mml:mrow><mml:mrow><mml:mi>u</mml:mi><mml:mo>,</mml:mo><mml:mi>v</mml:mi></mml:mrow></mml:msub></mml:math></inline-formula>, TNB frames <inline-formula><mml:math id="M117"><mml:mrow><mml:mi mathvariant="-tex-caligraphic">F</mml:mi></mml:mrow><mml:mrow><mml:mo stretchy="false">(</mml:mo><mml:mrow><mml:mi>u</mml:mi></mml:mrow><mml:mo stretchy="false">)</mml:mo></mml:mrow></mml:math></inline-formula> </monospace></td></tr>
<tr><td align="left" valign="top"><monospace> 4:</monospace></td>
<td align="left" valign="top"><monospace>&#x000A0;&#x000A0;&#x000A0;&#x000A0;<bold>m</bold>(<italic>u</italic>)&#x02190;<italic>mean</italic>(<bold>&#x003BE;</bold>) </monospace></td></tr>
<tr><td align="left" valign="top"><monospace> 5:</monospace></td>
<td align="left" valign="top"><monospace>&#x000A0;&#x000A0;&#x000A0;&#x000A0;<inline-formula><mml:math id="M118"><mml:mrow><mml:mi mathvariant="-tex-caligraphic">P</mml:mi></mml:mrow><mml:mrow><mml:mo stretchy="false">(</mml:mo><mml:mrow><mml:mi>u</mml:mi><mml:mo>,</mml:mo><mml:mi>v</mml:mi></mml:mrow><mml:mo stretchy="false">)</mml:mo></mml:mrow><mml:mo>&#x02190;</mml:mo><mml:mstyle class="text"><mml:mtext class="textit" mathvariant="italic">estimateCSpline</mml:mtext></mml:mstyle><mml:mrow><mml:mo stretchy="false">(</mml:mo><mml:mrow><mml:mstyle mathvariant="bold"><mml:mtext>&#x003BE;</mml:mtext></mml:mstyle></mml:mrow><mml:mo stretchy="false">)</mml:mo></mml:mrow></mml:math></inline-formula> </monospace></td></tr>
<tr><td align="left" valign="top"><monospace> 6:</monospace></td>
<td align="left" valign="top"><monospace>&#x000A0;&#x000A0;&#x000A0;&#x000A0;<inline-formula><mml:math id="M119"><mml:msub><mml:mrow><mml:mrow><mml:mi mathvariant="-tex-caligraphic">G</mml:mi></mml:mrow></mml:mrow><mml:mrow><mml:mi>u</mml:mi><mml:mo>,</mml:mo><mml:mi>v</mml:mi></mml:mrow></mml:msub><mml:mo>,</mml:mo><mml:mrow><mml:mi mathvariant="-tex-caligraphic">F</mml:mi></mml:mrow><mml:mo>&#x02190;</mml:mo><mml:mstyle class="text"><mml:mtext class="textit" mathvariant="italic">makeGeneralizedCylinder</mml:mtext></mml:mstyle><mml:mrow><mml:mo stretchy="false">(</mml:mo><mml:mrow><mml:mstyle mathvariant="bold"><mml:mtext>m</mml:mtext></mml:mstyle><mml:mrow><mml:mo stretchy="false">(</mml:mo><mml:mrow><mml:mi>u</mml:mi></mml:mrow><mml:mo stretchy="false">)</mml:mo></mml:mrow><mml:mo>,</mml:mo><mml:mrow><mml:mi mathvariant="-tex-caligraphic">P</mml:mi></mml:mrow><mml:mrow><mml:mo stretchy="false">(</mml:mo><mml:mrow><mml:mi>u</mml:mi><mml:mo>,</mml:mo><mml:mi>v</mml:mi></mml:mrow><mml:mo stretchy="false">)</mml:mo></mml:mrow></mml:mrow><mml:mo stretchy="false">)</mml:mo></mml:mrow></mml:math></inline-formula> </monospace></td></tr>
<tr><td align="left" valign="top"><monospace> 7:</monospace></td>
<td align="left" valign="top"><monospace>&#x000A0;procedure <sc>Reproducing trajectory</sc> </monospace></td></tr>
<tr><td align="left" valign="top"><monospace> 8:</monospace></td>
<td align="left" valign="top"><monospace>&#x000A0;&#x000A0;&#x000A0;&#x000A0;Input:initial point <inline-formula><mml:math id="M120"><mml:msub><mml:mrow><mml:mstyle mathvariant="bold"><mml:mtext>p</mml:mtext></mml:mstyle></mml:mrow><mml:mrow><mml:mn>0</mml:mn></mml:mrow></mml:msub><mml:mo>&#x02208;</mml:mo><mml:msup><mml:mrow><mml:mi>&#x0211D;</mml:mi></mml:mrow><mml:mrow><mml:mn>3</mml:mn></mml:mrow></mml:msup></mml:math></inline-formula>, <inline-formula><mml:math id="M121"><mml:msub><mml:mrow><mml:mrow><mml:mi mathvariant="-tex-caligraphic">G</mml:mi></mml:mrow></mml:mrow><mml:mrow><mml:mi>u</mml:mi><mml:mo>,</mml:mo><mml:mi>v</mml:mi></mml:mrow></mml:msub></mml:math></inline-formula>, <inline-formula><mml:math id="M122"><mml:mrow><mml:mi mathvariant="-tex-caligraphic">F</mml:mi></mml:mrow></mml:math></inline-formula> </monospace></td></tr>
<tr><td align="left" valign="top"><monospace> 9:</monospace></td>
<td align="left" valign="top"><monospace>&#x000A0;&#x000A0;&#x000A0;&#x000A0;Output:New trajectory <bold>&#x003C1;</bold>&#x02208;&#x0211D;<sup>3 &#x000D7; <italic>N</italic></sup> </monospace></td></tr>
<tr><td align="left" valign="top"><monospace> 10:</monospace></td>
<td align="left" valign="top"><monospace>&#x000A0;&#x000A0;&#x000A0;&#x000A0;<inline-formula><mml:math id="M123"><mml:mi>&#x003B7;</mml:mi><mml:mo>&#x02190;</mml:mo><mml:mfrac><mml:mrow><mml:mo>|</mml:mo><mml:mo>|</mml:mo><mml:msub><mml:mrow><mml:mstyle mathvariant="bold"><mml:mtext>p</mml:mtext></mml:mstyle></mml:mrow><mml:mrow><mml:mn>0</mml:mn></mml:mrow></mml:msub><mml:mo>-</mml:mo><mml:msub><mml:mrow><mml:mstyle mathvariant="bold"><mml:mtext>c</mml:mtext></mml:mstyle></mml:mrow><mml:mrow><mml:mn>0</mml:mn></mml:mrow></mml:msub><mml:mo>|</mml:mo><mml:mo>|</mml:mo></mml:mrow><mml:mrow><mml:mo>|</mml:mo><mml:mo>|</mml:mo><mml:msub><mml:mrow><mml:mstyle mathvariant="bold"><mml:mtext>g</mml:mtext></mml:mstyle></mml:mrow><mml:mrow><mml:mn>0</mml:mn></mml:mrow></mml:msub><mml:mo>-</mml:mo><mml:msub><mml:mrow><mml:mstyle mathvariant="bold"><mml:mtext>c</mml:mtext></mml:mstyle></mml:mrow><mml:mrow><mml:mn>0</mml:mn></mml:mrow></mml:msub><mml:mo>|</mml:mo><mml:mo>|</mml:mo></mml:mrow></mml:mfrac></mml:math></inline-formula> </monospace></td></tr>
<tr><td align="left" valign="top"><monospace> 11:</monospace></td>
<td align="left" valign="top"><monospace>&#x000A0;&#x000A0;&#x000A0;&#x000A0;<bold>p</bold><sub><italic>i</italic></sub>&#x02190;<bold>p</bold><sub>0</sub> , <bold>&#x003C1;</bold>&#x02190;<bold>p</bold><sub>0</sub> </monospace></td></tr>
<tr><td align="left" valign="top"><monospace> 12:</monospace></td>
<td align="left" valign="top"><monospace>&#x000A0;&#x000A0;&#x000A0;&#x000A0;for each <inline-formula><mml:math id="M124"><mml:mstyle class="text"><mml:mtext class="textit" mathvariant="italic">frame</mml:mtext></mml:mstyle><mml:msub><mml:mrow><mml:mrow><mml:mi mathvariant="-tex-caligraphic">F</mml:mi></mml:mrow></mml:mrow><mml:mrow><mml:mi>i</mml:mi></mml:mrow></mml:msub></mml:math></inline-formula> <bold>do</bold> </monospace></td></tr>
<tr><td align="left" valign="top"><monospace> 13:</monospace></td>
<td align="left" valign="top"><monospace>&#x000A0;&#x000A0;&#x000A0;&#x000A0;&#x000A0;&#x000A0;&#x000A0;&#x000A0;<inline-formula><mml:math id="M125"><mml:msub><mml:mrow><mml:mstyle mathvariant="bold"><mml:mtext>p</mml:mtext></mml:mstyle></mml:mrow><mml:mrow><mml:mi>i</mml:mi><mml:mo>&#x0002B;</mml:mo><mml:mn>1</mml:mn></mml:mrow></mml:msub><mml:mo>&#x02190;</mml:mo><mml:mstyle class="text"><mml:mtext class="textit" mathvariant="italic">project</mml:mtext></mml:mstyle><mml:mrow><mml:mo stretchy="false">(</mml:mo><mml:mrow><mml:msub><mml:mrow><mml:mstyle mathvariant="bold"><mml:mtext>p</mml:mtext></mml:mstyle></mml:mrow><mml:mrow><mml:mi>i</mml:mi></mml:mrow></mml:msub><mml:mo>,</mml:mo><mml:mi>&#x003B7;</mml:mi><mml:mo>,</mml:mo><mml:msub><mml:mrow><mml:mrow><mml:mi mathvariant="-tex-caligraphic">F</mml:mi></mml:mrow></mml:mrow><mml:mrow><mml:mi>i</mml:mi><mml:mo>&#x0002B;</mml:mo><mml:mn>1</mml:mn></mml:mrow></mml:msub><mml:mo>,</mml:mo><mml:msub><mml:mrow><mml:mrow><mml:mi mathvariant="-tex-caligraphic">F</mml:mi></mml:mrow></mml:mrow><mml:mrow><mml:mi>i</mml:mi></mml:mrow></mml:msub></mml:mrow><mml:mo stretchy="false">)</mml:mo></mml:mrow></mml:math></inline-formula> </monospace></td></tr>
<tr><td align="left" valign="top"><monospace> 14:</monospace></td>
<td align="left" valign="top"><monospace>&#x000A0;&#x000A0;&#x000A0;&#x000A0;&#x000A0;&#x000A0;&#x000A0;&#x000A0;<bold>&#x003C1;</bold>&#x02190;<bold>p</bold><sub><italic>i</italic>&#x0002B;1</sub> </monospace></td></tr>
<tr><td align="left" valign="top"><monospace> 15:</monospace></td>
<td align="left" valign="top"><monospace>&#x000A0;&#x000A0;&#x000A0;&#x000A0;&#x000A0;&#x000A0;&#x000A0;&#x000A0;<italic>i</italic>&#x02190;<italic>i</italic>&#x0002B;1 </monospace></td></tr>
</tbody>
</table>
</table-wrap>
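<p>A minimal sketch of the reproduction loop of Algorithm 1 is given below for the special case of a circular GC. Here the <monospace>project</monospace> step of line 13 is approximated by expressing the initial point's in-plane offset in the first frame and re-instantiating it in every subsequent frame, scaled by the local radius so that the ratio &#x003B7; of line 10 is preserved; the exact projection used by the authors may differ, and the data in the example are hypothetical stand-ins for a learned GC.</p>

```python
import numpy as np

def reproduce(p0, c, e_n, e_b, r):
    """Sketch of Algorithm 1's reproduction loop for a circular GC:
    c (N,3) directrix points, e_n/e_b (N,3) frame vectors, r (N,) radii.
    The offset of p0 from c[0] is written in the first frame's (e_N, e_B)
    coordinates, then re-created in each frame scaled by the local radius,
    keeping the ratio eta constant (illustrative 'project' step)."""
    d0 = p0 - c[0]
    a, b = np.dot(d0, e_n[0]), np.dot(d0, e_b[0])
    eta = np.hypot(a, b) / r[0]           # eta, as in line 10 of Algorithm 1
    traj = [np.asarray(p0, dtype=float)]
    for i in range(1, len(c)):
        scale = eta * r[i] / max(np.hypot(a, b), 1e-12)
        traj.append(c[i] + scale * (a * e_n[i] + b * e_b[i]))
    return np.asarray(traj)

# Hypothetical straight-line directrix with a shrinking circular cross-section.
c = np.stack([np.linspace(0.0, 1.0, 10), np.zeros(10), np.zeros(10)], axis=1)
e_n = np.tile([0.0, 1.0, 0.0], (10, 1))
e_b = np.tile([0.0, 0.0, 1.0], (10, 1))
r = np.linspace(0.1, 0.05, 10)
traj = reproduce(np.array([0.0, 0.05, 0.0]), c, e_n, e_b, r)
```

<p>In this example the initial point sits halfway between the directrix and the boundary (&#x003B7; = 0.5), so every reproduced point stays at half the local radius as the cylinder narrows.</p>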
<sec>
<title>4.1. Skill Encoding</title>
<p>Consider <italic>n</italic> different demonstrations of a task performed and captured in task-space. For each demonstration, the 3D Cartesian position of the target (i.e., robot&#x00027;s end-effector) is recorded over time as <inline-formula><mml:math id="M15"><mml:msup><mml:mstyle mathvariant='bold' mathsize='normal'><mml:mover accent='true'><mml:mi>&#x03BE;</mml:mi><mml:mo>&#x0005E;</mml:mo></mml:mover></mml:mstyle><mml:mi>j</mml:mi></mml:msup><mml:mo>=</mml:mo><mml:mo>&#x0007B;</mml:mo><mml:msubsup><mml:mi>&#x003BE;</mml:mi><mml:mn>1</mml:mn><mml:mi>j</mml:mi></mml:msubsup><mml:mo>,</mml:mo><mml:msubsup><mml:mi>&#x003BE;</mml:mi><mml:mn>2</mml:mn><mml:mi>j</mml:mi></mml:msubsup><mml:mo>,</mml:mo><mml:msubsup><mml:mi>&#x003BE;</mml:mi><mml:mn>3</mml:mn><mml:mi>j</mml:mi></mml:msubsup><mml:mo>&#x0007D;</mml:mo><mml:mo>&#x022A4;</mml:mo><mml:mo>&#x02208;</mml:mo><mml:msup><mml:mi>&#x0211D;</mml:mi><mml:mrow><mml:mn>3</mml:mn><mml:mo>&#x000D7;</mml:mo><mml:msup><mml:mi>T</mml:mi><mml:mi>j</mml:mi></mml:msup></mml:mrow></mml:msup></mml:math></inline-formula>, <italic>j</italic> &#x0003D; 1, &#x02026;, <italic>n</italic>, where <italic>T</italic><sup><italic>j</italic></sup> is the number of data-points within the <italic>j</italic>th demonstrated trajectory. Since <italic>T</italic><sup><italic>j</italic></sup> can vary among demonstrations, we use interpolation and resampling to obtain a frame-by-frame correspondence among the recorded demonstrations and align them temporally. To achieve this, for each demonstration, we obtain a set of piecewise polynomials using cubic spline interpolation. Then, we generate a set of temporally aligned trajectories by resampling <italic>N</italic> new data-points from each obtained polynomial. This process results in a set of <italic>n</italic> resampled demonstrations <bold>&#x003BE;</bold> &#x02208; &#x0211D;<sup>3 &#x000D7; <italic>N</italic>&#x000D7;<italic>n</italic></sup>, each of which consists of <italic>N</italic> data-points. An advantage of this technique is that, when velocity and acceleration data are unavailable, the first and second derivatives of the estimated polynomials can be used instead. An alternate solution is to employ Dynamic Time Warping (Myers et al., <xref ref-type="bibr" rid="B28">1980</xref>).</p>
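<p>The alignment step above can be sketched as follows, assuming SciPy's cubic-spline interpolation is available; the function name and the use of a normalized abscissa in [0, 1] are illustrative choices, not the authors' code.</p>

```python
import numpy as np
from scipy.interpolate import CubicSpline

def align_demonstrations(demos, n_samples=200):
    """Temporally align variable-length demonstrations (each of shape
    (T_j, 3)) by fitting a cubic spline to each and resampling n_samples
    points on a common normalized abscissa; returns (n, n_samples, 3)."""
    aligned = []
    for d in demos:
        s = np.linspace(0.0, 1.0, len(d))
        cs = CubicSpline(s, d, axis=0)
        # When velocity/acceleration were not recorded, the spline's
        # derivatives cs(x, 1) and cs(x, 2) can serve as estimates.
        aligned.append(cs(np.linspace(0.0, 1.0, n_samples)))
    return np.stack(aligned)

# Two demonstrations of different lengths tracing the same straight line.
demos = [np.linspace(0, 1, 50)[:, None] * np.array([1.0, 2.0, 3.0]),
         np.linspace(0, 1, 80)[:, None] * np.array([1.0, 2.0, 3.0])]
aligned = align_demonstrations(demos, n_samples=200)
```

<p>Because the spline interpolates the recorded samples, the start and end points of each demonstration are preserved exactly after resampling.</p>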
<sec>
<title>4.1.1. Estimating the Directrix</title>
<p>To estimate the directrix &#x00393;, we calculate the directional mean (axis-wise arithmetic mean) of the set of demonstrations. Let <bold>m</bold> &#x02208; &#x0211D;<sup>3 &#x000D7; <italic>N</italic></sup> be the directional mean of the set <bold>&#x003BE;</bold> (Line 4 in Algorithm <xref ref-type="table" rid="T2">1</xref>). Note that all the cross-sections will be centered on <bold>m</bold>. Alternatively, the directrix can be produced using Gaussian Mixture Regression (GMR) (Calinon et al., <xref ref-type="bibr" rid="B11">2007</xref>). In that case, GMR generates the directrix by sampling from the joint probability distribution learned by a Gaussian Mixture Model (GMM). However, using GMR requires an explicitly defined time vector.</p>
<table-wrap position="float" id="T3">
<caption><p><bold>Algorithm 2</bold> Generating GC with arbitrary cross-section</p></caption>
<table frame="hsides" rules="groups">
<tbody>
<tr><td align="left" valign="top"><monospace> 1:</monospace></td>
<td align="left" valign="top"><monospace>&#x000A0;&#x000A0;procedure <sc>makeGeneralizedCylinder</sc> </monospace></td></tr>
<tr><td align="left" valign="top"><monospace> 2:</monospace></td>
<td align="left" valign="top"><monospace>&#x000A0;&#x000A0;&#x000A0;Input:directrix <bold>m</bold>(<italic>u</italic>), boundary function <inline-formula><mml:math id="M126"><mml:mrow><mml:mi mathvariant="-tex-caligraphic">P</mml:mi></mml:mrow><mml:mrow><mml:mo stretchy="false">(</mml:mo><mml:mrow><mml:mi>u</mml:mi><mml:mo>,</mml:mo><mml:mi>v</mml:mi></mml:mrow><mml:mo stretchy="false">)</mml:mo></mml:mrow></mml:math></inline-formula> </monospace></td></tr>
<tr><td align="left" valign="top"><monospace> 3:</monospace></td>
<td align="left" valign="top"><monospace>&#x000A0;&#x000A0;&#x000A0;Output:Generalized cylinder <inline-formula><mml:math id="M127"><mml:msub><mml:mrow><mml:mrow><mml:mi mathvariant="-tex-caligraphic">G</mml:mi></mml:mrow></mml:mrow><mml:mrow><mml:mi>u</mml:mi><mml:mo>,</mml:mo><mml:mi>v</mml:mi></mml:mrow></mml:msub></mml:math></inline-formula>, TNB frames <inline-formula><mml:math id="M128"><mml:msub><mml:mrow><mml:mrow><mml:mi mathvariant="-tex-caligraphic">F</mml:mi></mml:mrow></mml:mrow><mml:mrow><mml:mi>u</mml:mi></mml:mrow></mml:msub></mml:math></inline-formula> </monospace></td></tr>
<tr><td align="left" valign="top"><monospace> 4:</monospace></td>
<td align="left" valign="top"><monospace>&#x000A0;&#x000A0;&#x000A0;for each <italic>u</italic><sub><italic>i</italic></sub> <bold>do</bold> </monospace></td></tr>
<tr><td align="left" valign="top"><monospace> 5:</monospace></td>
<td align="left" valign="top"><monospace>&#x000A0;&#x000A0;&#x000A0;&#x000A0;&#x000A0;&#x000A0;{<bold>e</bold><sub><italic>T</italic></sub>(<italic>u</italic><sub><italic>i</italic></sub>), <bold>e</bold><sub><italic>N</italic></sub>(<italic>u</italic><sub><italic>i</italic></sub>), <bold>e</bold><sub><italic>B</italic></sub>(<italic>u</italic><sub><italic>i</italic></sub>)}&#x02190;<italic>estimateFrame</italic>(<bold>m</bold>(<italic>u</italic><sub><italic>i</italic></sub>)) </monospace></td></tr>
<tr><td align="left" valign="top"><monospace> 6:</monospace></td>
<td align="left" valign="top"><monospace>&#x000A0;&#x000A0;&#x000A0;&#x000A0;&#x000A0;&#x000A0;<inline-formula><mml:math id="M129"><mml:mrow><mml:mi mathvariant="-tex-caligraphic">F</mml:mi></mml:mrow><mml:mo>&#x02190;</mml:mo><mml:mrow><mml:mo>{</mml:mo><mml:mrow><mml:msub><mml:mrow><mml:mstyle mathvariant="bold"><mml:mtext>e</mml:mtext></mml:mstyle></mml:mrow><mml:mrow><mml:mi>T</mml:mi></mml:mrow></mml:msub><mml:mrow><mml:mo stretchy="false">(</mml:mo><mml:mrow><mml:msub><mml:mrow><mml:mi>u</mml:mi></mml:mrow><mml:mrow><mml:mi>i</mml:mi></mml:mrow></mml:msub></mml:mrow><mml:mo stretchy="false">)</mml:mo></mml:mrow><mml:mo>,</mml:mo><mml:msub><mml:mrow><mml:mstyle mathvariant="bold"><mml:mtext>e</mml:mtext></mml:mstyle></mml:mrow><mml:mrow><mml:mi>N</mml:mi></mml:mrow></mml:msub><mml:mrow><mml:mo stretchy="false">(</mml:mo><mml:mrow><mml:msub><mml:mrow><mml:mi>u</mml:mi></mml:mrow><mml:mrow><mml:mi>i</mml:mi></mml:mrow></mml:msub></mml:mrow><mml:mo stretchy="false">)</mml:mo></mml:mrow><mml:mo>,</mml:mo><mml:msub><mml:mrow><mml:mstyle mathvariant="bold"><mml:mtext>e</mml:mtext></mml:mstyle></mml:mrow><mml:mrow><mml:mi>B</mml:mi></mml:mrow></mml:msub><mml:mrow><mml:mo stretchy="false">(</mml:mo><mml:mrow><mml:msub><mml:mrow><mml:mi>u</mml:mi></mml:mrow><mml:mrow><mml:mi>i</mml:mi></mml:mrow></mml:msub></mml:mrow><mml:mo stretchy="false">)</mml:mo></mml:mrow></mml:mrow><mml:mo>}</mml:mo></mml:mrow></mml:math></inline-formula> </monospace></td></tr>
<tr><td align="left" valign="top"><monospace> 7:</monospace></td>
<td align="left" valign="top"><monospace>&#x000A0;&#x000A0;&#x000A0;&#x000A0;&#x000A0;&#x000A0;<inline-formula><mml:math id="M130"><mml:msub><mml:mrow><mml:mrow><mml:mi mathvariant="-tex-caligraphic">G</mml:mi></mml:mrow></mml:mrow><mml:mrow><mml:mi>u</mml:mi><mml:mo>,</mml:mo><mml:mi>v</mml:mi></mml:mrow></mml:msub><mml:mo>&#x02190;</mml:mo><mml:mstyle mathvariant="bold"><mml:mtext>m</mml:mtext></mml:mstyle><mml:mrow><mml:mo stretchy="false">(</mml:mo><mml:mrow><mml:msub><mml:mrow><mml:mi>u</mml:mi></mml:mrow><mml:mrow><mml:mi>i</mml:mi></mml:mrow></mml:msub></mml:mrow><mml:mo stretchy="false">)</mml:mo></mml:mrow><mml:mo>&#x0002B;</mml:mo><mml:msup><mml:mrow><mml:mrow><mml:mi mathvariant="-tex-caligraphic">P</mml:mi></mml:mrow></mml:mrow><mml:mrow><mml:mi>x</mml:mi></mml:mrow></mml:msup><mml:mrow><mml:mo stretchy="false">(</mml:mo><mml:mrow><mml:msub><mml:mrow><mml:mi>u</mml:mi></mml:mrow><mml:mrow><mml:mi>i</mml:mi></mml:mrow></mml:msub><mml:mo>,</mml:mo><mml:mi>v</mml:mi></mml:mrow><mml:mo stretchy="false">)</mml:mo></mml:mrow><mml:msub><mml:mrow><mml:mstyle mathvariant="bold"><mml:mtext>e</mml:mtext></mml:mstyle></mml:mrow><mml:mrow><mml:mi>N</mml:mi></mml:mrow></mml:msub><mml:mrow><mml:mo stretchy="false">(</mml:mo><mml:mrow><mml:msub><mml:mrow><mml:mi>u</mml:mi></mml:mrow><mml:mrow><mml:mi>i</mml:mi></mml:mrow></mml:msub></mml:mrow><mml:mo stretchy="false">)</mml:mo></mml:mrow><mml:mo>&#x0002B;</mml:mo><mml:msup><mml:mrow><mml:mrow><mml:mi mathvariant="-tex-caligraphic">P</mml:mi></mml:mrow></mml:mrow><mml:mrow><mml:mi>y</mml:mi></mml:mrow></mml:msup><mml:mrow><mml:mo stretchy="false">(</mml:mo><mml:mrow><mml:msub><mml:mrow><mml:mi>u</mml:mi></mml:mrow><mml:mrow><mml:mi>i</mml:mi></mml:mrow></mml:msub><mml:mo>,</mml:mo><mml:mi>v</mml:mi></mml:mrow><mml:mo stretchy="false">)</mml:mo></mml:mrow><mml:msub><mml:mrow><mml:mstyle mathvariant="bold"><mml:mtext>e</mml:mtext></mml:mstyle></mml:mrow><mml:mrow><mml:mi>B</mml:mi></mml:mrow></mml:msub><mml:mrow><mml:mo 
stretchy="false">(</mml:mo><mml:mrow><mml:msub><mml:mrow><mml:mi>u</mml:mi></mml:mrow><mml:mrow><mml:mi>i</mml:mi></mml:mrow></mml:msub></mml:mrow><mml:mo stretchy="false">)</mml:mo></mml:mrow></mml:math></inline-formula> </monospace></td></tr>
</tbody>
</table>
</table-wrap>
</sec>
<sec>
<title>4.1.2. Estimating the Cross-Section Function</title>
<p>Given the demonstration set <bold>&#x003BE;</bold> and the estimated directrix <bold>m</bold>, in this step, we explain methods for calculating the cross-sectional function <italic>&#x003C1;</italic>(<italic>u, v</italic>). For each point <italic>u</italic> on the directrix, we gather one corresponding point (aligned with <italic>u</italic> on the same cross-sectional plane) from each demonstration; we call this set the <italic>effective points</italic>. As an example, for a set of five demonstrations, one point on the directrix and the corresponding effective points are depicted in Figure <xref ref-type="fig" rid="F4">4</xref> in blue and red, respectively. We use the effective points to calculate the cross-section at each step as a smooth closed curve. The circumference of a cross-section represents the implicit local constraints of the task (i.e., boundaries) imposed by the set of demonstrations. Figure <xref ref-type="fig" rid="F4">4</xref> illustrates three different types of cross-sections calculated for the same set of effective points. In its simplest form, we can employ (4) and construct a canal surface, which has a circular cross-section. The radius of each circle is equal to the distance from the point on the directrix to the furthest effective point (i.e., the point with maximum distance). As shown in Figure <xref ref-type="fig" rid="F4">4</xref> (left), the estimated cross-section bounds the other effective points as well, and consequently the formed canal surface encloses all the demonstrations. The radii function <italic>r</italic>(<italic>u</italic>) &#x02208; &#x0211D; of the obtained canal surface assigns a radius to each point <italic>u</italic>. We use a constant range <italic>v</italic> to parameterize the circumference of the circular cross-section (e.g., <italic>v</italic> &#x0003D; [0, 2&#x003C0;]). More detail on encoding skills using canal surfaces can be found in Ahmadzadeh et al. (<xref ref-type="bibr" rid="B3">2016</xref>). 
To cover the cross-sectional area more effectively and precisely while maintaining the implicit local constraints of the task, we can also construct generalized cylinders with elliptical cross-sections [see Figure <xref ref-type="fig" rid="F4">4</xref> (middle)]. The radii function for elliptical cross-section <bold>r</bold>(<italic>u</italic>):&#x0211D;&#x021A6;&#x0211D;<sup>3</sup> produces the major and minor axes and the rotation angle of the ellipse at each step <italic>u</italic>. In a more general form, we generate cross-sections by interpolating closed splines to the data. Given a set of break points <italic>v</italic><sub><italic>j</italic></sub>, <italic>j</italic> &#x0003D; 1, &#x02026;, <italic>m</italic> on the interval [<italic>v</italic><sub>0</sub>, <italic>v</italic><sub><italic>m</italic></sub>] such that <italic>v</italic><sub>0</sub>&#x0003C;<italic>v</italic><sub>1</sub> &#x0003C; &#x02026; &#x0003C; <italic>v</italic><sub><italic>m</italic>&#x02212;1</sub>&#x0003C;<italic>v</italic><sub><italic>m</italic></sub>, we can fit a cubic polynomial <inline-formula><mml:math id="M16"><mml:mi>p</mml:mi><mml:mrow><mml:mo stretchy="false">(</mml:mo><mml:mrow><mml:mi>v</mml:mi></mml:mrow><mml:mo stretchy="false">)</mml:mo></mml:mrow><mml:mo>=</mml:mo><mml:msub><mml:mrow><mml:mi>a</mml:mi></mml:mrow><mml:mrow><mml:mn>0</mml:mn></mml:mrow></mml:msub><mml:mo>&#x0002B;</mml:mo><mml:msub><mml:mrow><mml:mi>a</mml:mi></mml:mrow><mml:mrow><mml:mn>1</mml:mn></mml:mrow></mml:msub><mml:mrow><mml:mo stretchy="false">(</mml:mo><mml:mrow><mml:mi>v</mml:mi><mml:mo>-</mml:mo><mml:msub><mml:mrow><mml:mi>v</mml:mi></mml:mrow><mml:mrow><mml:mn>0</mml:mn></mml:mrow></mml:msub></mml:mrow><mml:mo stretchy="false">)</mml:mo></mml:mrow><mml:mo>&#x0002B;</mml:mo><mml:msub><mml:mrow><mml:mi>a</mml:mi></mml:mrow><mml:mrow><mml:mn>2</mml:mn></mml:mrow></mml:msub><mml:msup><mml:mrow><mml:mrow><mml:mo 
stretchy="false">(</mml:mo><mml:mrow><mml:mi>v</mml:mi><mml:mo>-</mml:mo><mml:msub><mml:mrow><mml:mi>v</mml:mi></mml:mrow><mml:mrow><mml:mn>0</mml:mn></mml:mrow></mml:msub></mml:mrow><mml:mo stretchy="false">)</mml:mo></mml:mrow></mml:mrow><mml:mrow><mml:mn>2</mml:mn></mml:mrow></mml:msup><mml:mo>&#x0002B;</mml:mo><mml:msub><mml:mrow><mml:mi>a</mml:mi></mml:mrow><mml:mrow><mml:mn>3</mml:mn></mml:mrow></mml:msub><mml:msup><mml:mrow><mml:mrow><mml:mo stretchy="false">(</mml:mo><mml:mrow><mml:mi>v</mml:mi><mml:mo>-</mml:mo><mml:msub><mml:mrow><mml:mi>v</mml:mi></mml:mrow><mml:mrow><mml:mn>0</mml:mn></mml:mrow></mml:msub></mml:mrow><mml:mo stretchy="false">)</mml:mo></mml:mrow></mml:mrow><mml:mrow><mml:mn>3</mml:mn></mml:mrow></mml:msup></mml:math></inline-formula> to each interval described with four coefficients <italic>a</italic><sub>0</sub>, <italic>a</italic><sub>1</sub>, <italic>a</italic><sub>2</sub>, <italic>a</italic><sub>3</sub>. The accumulated square root of chord length is used to find the breaks and the number of polynomials. 
By enforcing <italic>C</italic><sup>2</sup>-continuity at the break points, applying the boundary condition <inline-formula><mml:math id="M17"><mml:msup><mml:mrow><mml:mi>p</mml:mi></mml:mrow><mml:mrow><mml:mo>&#x02033;</mml:mo></mml:mrow></mml:msup><mml:mrow><mml:mo stretchy="false">(</mml:mo><mml:mrow><mml:msub><mml:mrow><mml:mi>t</mml:mi></mml:mrow><mml:mrow><mml:mn>0</mml:mn></mml:mrow></mml:msub></mml:mrow><mml:mo stretchy="false">)</mml:mo></mml:mrow><mml:mo>=</mml:mo><mml:msup><mml:mrow><mml:mi>p</mml:mi></mml:mrow><mml:mrow><mml:mo>&#x02033;</mml:mo></mml:mrow></mml:msup><mml:mrow><mml:mo stretchy="false">(</mml:mo><mml:mrow><mml:msub><mml:mrow><mml:mi>t</mml:mi></mml:mrow><mml:mrow><mml:mi>n</mml:mi></mml:mrow></mml:msub></mml:mrow><mml:mo stretchy="false">)</mml:mo></mml:mrow><mml:mo>=</mml:mo><mml:mn>0</mml:mn></mml:math></inline-formula>, and joining the polynomials, we construct a smooth piecewise polynomial curve called a closed cubic spline. The resulting closed spline, denoted by <inline-formula><mml:math id="M18"><mml:mrow><mml:mi mathvariant="-tex-caligraphic">P</mml:mi></mml:mrow><mml:mrow><mml:mo stretchy="false">(</mml:mo><mml:mrow><mml:mi>u</mml:mi><mml:mo>,</mml:mo><mml:mi>v</mml:mi></mml:mrow><mml:mo stretchy="false">)</mml:mo></mml:mrow></mml:math></inline-formula>, is <italic>C</italic><sup>2</sup>-continuous within each interval and at each interpolating node. Figure <xref ref-type="fig" rid="F4">4</xref> (right) shows a closed-spline cross-section constructed on the same set of effective points. Figure <xref ref-type="fig" rid="F5">5</xref> depicts three GCs with circular, elliptical, and closed-spline cross-sections constructed for a reaching skill toward an object (the green sphere).</p>
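<p>As a concrete illustration, the closed-spline construction above can be sketched in stdlib-only Python. The code fits a natural cubic spline (the stated boundary condition, second derivative zero at both ends) through a set of 2D effective points, closes the curve by repeating the first point, and places the break points at the accumulated square root of chord length. The function name and the sample points are our own; this is a sketch, not the authors' implementation.</p>

```python
import math

def natural_cubic_spline(t, y):
    """Fit a natural cubic spline (p'' = 0 at both ends) through (t[i], y[i]).
    Standard tridiagonal solve for the coefficients; returns an evaluator."""
    n = len(t)
    h = [t[i + 1] - t[i] for i in range(n - 1)]
    alpha = [0.0] * n
    for i in range(1, n - 1):
        alpha[i] = 3.0 * ((y[i + 1] - y[i]) / h[i] - (y[i] - y[i - 1]) / h[i - 1])
    l = [1.0] * n; mu = [0.0] * n; z = [0.0] * n
    for i in range(1, n - 1):
        l[i] = 2.0 * (t[i + 1] - t[i - 1]) - h[i - 1] * mu[i - 1]
        mu[i] = h[i] / l[i]
        z[i] = (alpha[i] - h[i - 1] * z[i - 1]) / l[i]
    c = [0.0] * n; b = [0.0] * (n - 1); d = [0.0] * (n - 1)
    for i in range(n - 2, -1, -1):
        c[i] = z[i] - mu[i] * c[i + 1]
        b[i] = (y[i + 1] - y[i]) / h[i] - h[i] * (c[i + 1] + 2.0 * c[i]) / 3.0
        d[i] = (c[i + 1] - c[i]) / (3.0 * h[i])
    def at(x):
        # Locate the interval containing x and evaluate the local cubic.
        i = next((j for j in range(n - 1) if x <= t[j + 1]), n - 2)
        dx = x - t[i]
        return y[i] + b[i] * dx + c[i] * dx * dx + d[i] * dx ** 3
    return at

# Effective points of one cross-section (hypothetical data), closed by
# repeating the first point; breaks via accumulated sqrt of chord length.
pts = [(1.0, 0.0), (0.0, 1.2), (-1.1, 0.0), (0.0, -0.9)]
closed = pts + [pts[0]]
t = [0.0]
for (x0, y0), (x1, y1) in zip(closed, closed[1:]):
    t.append(t[-1] + math.sqrt(math.hypot(x1 - x0, y1 - y0)))
sx = natural_cubic_spline(t, [p[0] for p in closed])
sy = natural_cubic_spline(t, [p[1] for p in closed])
```

<p>Evaluating (sx(u), sy(u)) for u in [t[0], t[-1]] traces a smooth curve that passes through every effective point and returns to its starting point.</p>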
<fig id="F4" position="float">
<label>Figure 4</label>
<caption><p>Different types of cross-sections constructed on the same set of data. The point on the directrix and the effective points are shown in red and blue, respectively.</p></caption>
<graphic xlink:href="frobt-05-00132-g0004.tif"/>
</fig>
<fig id="F5" position="float">
<label>Figure 5</label>
<caption><p>A reaching skill encoded using three generalized cylinders with different cross-section types. The demonstrations and the object are shown in red and green, respectively.</p></caption>
<graphic xlink:href="frobt-05-00132-g0005.tif"/>
</fig>
<p>Finally, as mentioned earlier, the presented approach requires minimal parameter tuning: only the shape of the cross-section needs to be defined. In practice, we have found the closed-spline cross-section to be the most effective at encoding a wide range of trajectories, making it a useful default for this single parameter.</p>
</sec>
</sec>
<sec>
<title>4.2. Skill Reproduction</title>
<p>During the reproduction phase, the initial position of the end-effector <italic>p</italic><sub>0</sub> in the cross-sectional plane <italic>S</italic><sub>0</sub> (perpendicular to the directrix at <italic>c</italic><sub>0</sub>) is used as input. This point can either be taken from the current pose of the robot's end-effector or generated randomly. By drawing a ray starting from <italic>c</italic><sub>0</sub> and passing through <italic>p</italic><sub>0</sub>, we find <italic>g</italic><sub>0</sub>, the intersection of the ray with the cross-sectional curve (see Figure <xref ref-type="fig" rid="F6">6</xref>, for <italic>i</italic> &#x0003D; 0). We consider the distance from the given point <italic>p</italic><sub>0</sub> to <italic>g</italic><sub>0</sub> as a measure of the similarity between the movement we want to reproduce and the nearest neighbor on the GC. We quantify this similarity by measuring the ratio &#x003B7; (Line 10 in Algorithm <xref ref-type="table" rid="T2">1</xref>) given by</p>
<disp-formula id="E10"><label>(8)</label><mml:math id="M19"><mml:mi>&#x003B7;</mml:mi><mml:mo>=</mml:mo><mml:mfrac><mml:mrow><mml:mo>&#x02016;</mml:mo><mml:mover accent='true'><mml:mrow><mml:msub><mml:mi>p</mml:mi><mml:mn>0</mml:mn></mml:msub><mml:msub><mml:mi>c</mml:mi><mml:mn>0</mml:mn></mml:msub></mml:mrow><mml:mo stretchy='true'>&#x000AF;</mml:mo></mml:mover><mml:mo>&#x02016;</mml:mo></mml:mrow><mml:mrow><mml:mo>&#x02016;</mml:mo><mml:mover accent='true'><mml:mrow><mml:msub><mml:mi>g</mml:mi><mml:mn>0</mml:mn></mml:msub><mml:msub><mml:mi>c</mml:mi><mml:mn>0</mml:mn></mml:msub></mml:mrow><mml:mo stretchy='true'>&#x000AF;</mml:mo></mml:mover><mml:mo>&#x02016;</mml:mo></mml:mrow></mml:mfrac><mml:mo>.</mml:mo></mml:math></disp-formula>
<p>To calculate the next point of the trajectory, we first transform <italic>p</italic><sub>0</sub> from the current TNB frame <inline-formula><mml:math id="M20"><mml:msub><mml:mrow><mml:mrow><mml:mi mathvariant="-tex-caligraphic">F</mml:mi></mml:mrow></mml:mrow><mml:mrow><mml:mn>0</mml:mn></mml:mrow></mml:msub></mml:math></inline-formula> to the next frame <inline-formula><mml:math id="M21"><mml:msub><mml:mrow><mml:mrow><mml:mi mathvariant="-tex-caligraphic">F</mml:mi></mml:mrow></mml:mrow><mml:mrow><mml:mn>1</mml:mn></mml:mrow></mml:msub></mml:math></inline-formula> using <inline-formula><mml:math id="M22"><mml:msub><mml:mrow><mml:mover accent="true"><mml:mrow><mml:mi>p</mml:mi></mml:mrow><mml:mo>&#x00301;</mml:mo></mml:mover></mml:mrow><mml:mrow><mml:mn>1</mml:mn></mml:mrow></mml:msub><mml:msup><mml:mrow><mml:mo>=</mml:mo></mml:mrow><mml:mrow><mml:msub><mml:mrow><mml:mrow><mml:mi mathvariant="-tex-caligraphic">F</mml:mi></mml:mrow></mml:mrow><mml:mrow><mml:mn>1</mml:mn></mml:mrow></mml:msub></mml:mrow></mml:msup><mml:msub><mml:mrow><mml:mi>T</mml:mi></mml:mrow><mml:mrow><mml:msub><mml:mrow><mml:mrow><mml:mi mathvariant="-tex-caligraphic">F</mml:mi></mml:mrow></mml:mrow><mml:mrow><mml:mn>0</mml:mn></mml:mrow></mml:msub></mml:mrow></mml:msub><mml:msub><mml:mrow><mml:mi>p</mml:mi></mml:mrow><mml:mrow><mml:mn>0</mml:mn></mml:mrow></mml:msub></mml:math></inline-formula>, and then, as in the previous step, we find <italic>g</italic><sub>1</sub>, the intersection of the cross-sectional curve with the ray starting from <italic>c</italic><sub>1</sub> and passing through <inline-formula><mml:math id="M23"><mml:msub><mml:mrow><mml:mover accent="true"><mml:mrow><mml:mi>p</mml:mi></mml:mrow><mml:mo>&#x00301;</mml:mo></mml:mover></mml:mrow><mml:mrow><mml:mn>1</mml:mn></mml:mrow></mml:msub></mml:math></inline-formula>. 
We then use &#x003B7; to adjust the projected point <inline-formula><mml:math id="M24"><mml:msub><mml:mrow><mml:mover accent="true"><mml:mrow><mml:mi>p</mml:mi></mml:mrow><mml:mo>&#x00301;</mml:mo></mml:mover></mml:mrow><mml:mrow><mml:mn>1</mml:mn></mml:mrow></mml:msub></mml:math></inline-formula> according to the measured ratio and the size and shape of the cross-section <italic>S</italic><sub>1</sub> by</p>
<disp-formula id="E11"><label>(9)</label><mml:math id="M25"><mml:msub><mml:mi>p</mml:mi><mml:mn>1</mml:mn></mml:msub><mml:mo>=</mml:mo><mml:mo stretchy='false'>(</mml:mo><mml:mi>&#x003B7;</mml:mi><mml:mo>&#x02016;</mml:mo><mml:mover accent='true'><mml:mrow><mml:msub><mml:mi>g</mml:mi><mml:mn>1</mml:mn></mml:msub><mml:msub><mml:mi>c</mml:mi><mml:mn>1</mml:mn></mml:msub></mml:mrow><mml:mo stretchy='true'>&#x000AF;</mml:mo></mml:mover><mml:mo>&#x02016;</mml:mo><mml:mo stretchy='false'>)</mml:mo><mml:msub><mml:mover><mml:mi>p</mml:mi><mml:mo>&#x000B4;</mml:mo></mml:mover><mml:mn>1</mml:mn></mml:msub><mml:mo>.</mml:mo></mml:math></disp-formula>
<p>We then repeat this process for each cross-section. Since the ratio &#x003B7; is kept fixed throughout the process, we call this reproduction method the <italic>fixed-ratio rule</italic>. An illustration of a single-step reproduction process using the fixed-ratio rule can be seen in Figure <xref ref-type="fig" rid="F6">6</xref>. Using this method, we can generate new trajectories from any point inside the generalized cylinder (i.e., within the demonstration space) while ensuring that the essential characteristics of the demonstrated skill are preserved. Another advantage of this reproduction strategy is its high computational efficiency, since calculating each point requires only a projection followed by a scaling. A demo implementation of TLGC with the fixed-ratio reproduction strategy is available online<xref ref-type="fn" rid="fn0007"><sup>7</sup></xref>.</p>
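<p>To make the fixed-ratio rule concrete, the following stdlib-only Python sketch reproduces a trajectory through a GC with circular cross-sections. In this simplified setting, the ray-boundary distance is simply the cross-section radius <italic>R</italic><sub><italic>i</italic></sub>, points are expressed in each cross-section's local 2D frame, and the frame-to-frame transform is taken as the identity; the function name and the data are our own illustration, not the authors' implementation.</p>

```python
import math

def fixed_ratio_reproduction(p0, radii):
    """Fixed-ratio reproduction (Eqs. 8-9) for circular cross-sections.

    p0    -- (x, y) initial point in the local frame of cross-section S_0
    radii -- radius R_i of each circular cross-section along the directrix
    Returns one local-frame point per cross-section."""
    eta = math.hypot(*p0) / radii[0]    # Eq. (8): ||p0 c0|| / ||g0 c0||
    theta = math.atan2(p0[1], p0[0])    # the ray direction is preserved
    # Eq. (9): each new point lies on the same ray, at distance eta * R_i.
    return [(eta * R * math.cos(theta), eta * R * math.sin(theta))
            for R in radii]

traj = fixed_ratio_reproduction((0.3, 0.4), [1.0, 0.8, 0.4, 0.2])
```

<p>Here &#x003B7; &#x0003D; 0.5, so every reproduced point sits halfway between the directrix and the boundary: the trajectory tapers with the cylinder while keeping the demonstrated shape.</p>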
<fig id="F6" position="float">
<label>Figure 6</label>
<caption><p>Reproduction from a random initial pose <italic>p</italic><sub><italic>i</italic></sub> on the <italic>i</italic>th cross-section <italic>S</italic><sub><italic>i</italic></sub> to the next cross-section <italic>S</italic><sub><italic>i</italic>&#x0002B;1</sub> using projection <inline-formula><mml:math id="M26"><mml:msub><mml:mrow><mml:mover accent="true"><mml:mrow><mml:mi>p</mml:mi></mml:mrow><mml:mo>&#x00301;</mml:mo></mml:mover></mml:mrow><mml:mrow><mml:mi>i</mml:mi><mml:mo>&#x0002B;</mml:mo><mml:mn>1</mml:mn></mml:mrow></mml:msub><mml:msup><mml:mrow><mml:mo>=</mml:mo></mml:mrow><mml:mrow><mml:msub><mml:mrow><mml:mrow><mml:mi mathvariant="-tex-caligraphic">F</mml:mi></mml:mrow></mml:mrow><mml:mrow><mml:mi>i</mml:mi><mml:mo>&#x0002B;</mml:mo><mml:mn>1</mml:mn></mml:mrow></mml:msub></mml:mrow></mml:msup><mml:msub><mml:mrow><mml:mi>T</mml:mi></mml:mrow><mml:mrow><mml:msub><mml:mrow><mml:mrow><mml:mi mathvariant="-tex-caligraphic">F</mml:mi></mml:mrow></mml:mrow><mml:mrow><mml:mi>i</mml:mi></mml:mrow></mml:msub></mml:mrow></mml:msub><mml:msub><mml:mrow><mml:mi>p</mml:mi></mml:mrow><mml:mrow><mml:mi>i</mml:mi></mml:mrow></mml:msub></mml:math></inline-formula> and scaling <inline-formula><mml:math id="M27"><mml:msub><mml:mrow><mml:mi>p</mml:mi></mml:mrow><mml:mrow><mml:mi>i</mml:mi><mml:mo>&#x0002B;</mml:mo><mml:mn>1</mml:mn></mml:mrow></mml:msub><mml:mo>=</mml:mo><mml:mrow><mml:mo stretchy="false">(</mml:mo><mml:mrow><mml:mi>&#x003B7;</mml:mi><mml:mo>.</mml:mo><mml:mo>|</mml:mo><mml:mo>|</mml:mo><mml:mover accent="false" class="mml-overline"><mml:mrow><mml:msub><mml:mrow><mml:mi>g</mml:mi></mml:mrow><mml:mrow><mml:mi>i</mml:mi><mml:mo>&#x0002B;</mml:mo><mml:mn>1</mml:mn></mml:mrow></mml:msub><mml:msub><mml:mrow><mml:mi>c</mml:mi></mml:mrow><mml:mrow><mml:mi>i</mml:mi><mml:mo>&#x0002B;</mml:mo><mml:mn>1</mml:mn></mml:mrow></mml:msub></mml:mrow><mml:mo accent="true">&#x000AF;</mml:mo></mml:mover><mml:mo>|</mml:mo><mml:mo>|</mml:mo></mml:mrow><mml:mo 
stretchy="false">)</mml:mo></mml:mrow><mml:msub><mml:mrow><mml:mover accent="true"><mml:mrow><mml:mi>p</mml:mi></mml:mrow><mml:mo>&#x00301;</mml:mo></mml:mover></mml:mrow><mml:mrow><mml:mi>i</mml:mi><mml:mo>&#x0002B;</mml:mo><mml:mn>1</mml:mn></mml:mrow></mml:msub></mml:math></inline-formula>.</p></caption>
<graphic xlink:href="frobt-05-00132-g0006.tif"/>
</fig>
<sec>
<title>4.2.1. Adaptive-Ratio Strategy</title>
<p>While keeping a fixed ratio enables the robot to reproduce the skill and preserve the important characteristics of the movement (i.e., its shape), allowing the ratio to change from its initial value to a specific target value introduces new capabilities. We call this procedure the adaptive-ratio strategy, and in section 7.2 we show the effectiveness of this strategy for handling obstacles. In this strategy, the transition from the initial ratio &#x003B7;<sub>0</sub> to a target ratio &#x003B7;<sub><italic>f</italic></sub> should be smooth. Although different methods can be used to achieve this goal, we utilize the exponential decay given by</p>
<disp-formula id="E12"><label>(10)</label><mml:math id="M28"><mml:mtable class="eqnarray" columnalign="right center left"><mml:mtr><mml:mtd><mml:msub><mml:mrow><mml:mi>&#x003B7;</mml:mi></mml:mrow><mml:mrow><mml:mi>i</mml:mi><mml:mo>&#x0002B;</mml:mo><mml:mn>1</mml:mn></mml:mrow></mml:msub><mml:mo>=</mml:mo><mml:mrow><mml:mo stretchy="false">(</mml:mo><mml:mrow><mml:msub><mml:mrow><mml:mi>&#x003B7;</mml:mi></mml:mrow><mml:mrow><mml:mn>0</mml:mn></mml:mrow></mml:msub><mml:mo>-</mml:mo><mml:msub><mml:mrow><mml:mi>&#x003B7;</mml:mi></mml:mrow><mml:mrow><mml:mi>f</mml:mi></mml:mrow></mml:msub></mml:mrow><mml:mo stretchy="false">)</mml:mo></mml:mrow><mml:msup><mml:mrow><mml:mi>e</mml:mi></mml:mrow><mml:mrow><mml:mo>-</mml:mo><mml:mi>&#x003B3;</mml:mi><mml:msub><mml:mrow><mml:mi>u</mml:mi></mml:mrow><mml:mrow><mml:mi>i</mml:mi></mml:mrow></mml:msub></mml:mrow></mml:msup><mml:mo>&#x0002B;</mml:mo><mml:msub><mml:mrow><mml:mi>&#x003B7;</mml:mi></mml:mrow><mml:mrow><mml:mi>f</mml:mi></mml:mrow></mml:msub><mml:mo>,</mml:mo></mml:mtd></mml:mtr></mml:mtable></mml:math></disp-formula>
<p>where &#x003B7;<sub><italic>i</italic>&#x0002B;1</sub> is the ratio at step <italic>u</italic><sub><italic>i</italic></sub> and &#x003B3; denotes the decay constant. Using (10) together with (8) and (9), the ratio smoothly converges from its initial value &#x003B7;<sub>0</sub> to the desired value &#x003B7;<sub><italic>f</italic></sub>. Figure <xref ref-type="fig" rid="F7">7A</xref> shows one reproduction of the reaching skill from a given initial point using the fixed ratio &#x003B7;<sub>0</sub> &#x0003D; 0.55. The reproduced trajectory preserves its shape and maintains a similar distance from the directrix. In contrast, Figure <xref ref-type="fig" rid="F7">7D</xref> shows the reproduction of the reaching skill from the same initial point using the adaptive-ratio strategy with &#x003B7;<sub>0</sub> &#x0003D; 0.55 and &#x003B7;<sub><italic>f</italic></sub> &#x0003D; 0.15. It can be seen that the reproduced trajectory converges toward the directrix but keeps a distance determined by the final ratio value. In section 7.2, we provide a method for estimating &#x003B7;<sub><italic>f</italic></sub> and &#x003B3; automatically from a detected obstacle. In the rest of this section, we discuss two choices of &#x003B7;<sub><italic>f</italic></sub> that produce two different reproduction behaviors.</p>
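<p>Equation (10) is straightforward to implement. The sketch below (our own helper, not from the paper) computes the decaying ratio and contains the convergent (&#x003B7;<sub><italic>f</italic></sub> &#x0003D; 0) and divergent (&#x003B7;<sub><italic>f</italic></sub> &#x0003D; 1) strategies discussed next as special cases.</p>

```python
import math

def adaptive_ratio(eta0, eta_f, gamma, u):
    """Eq. (10): smooth exponential transition of the ratio along the
    directrix parameter u, from eta0 (at u = 0) toward eta_f."""
    return (eta0 - eta_f) * math.exp(-gamma * u) + eta_f

# Setting of Figures 7A/7D: eta0 = 0.55, eta_f = 0.15, gamma = 5 (assumed).
ratios = [adaptive_ratio(0.55, 0.15, 5.0, u / 99.0) for u in range(100)]
```

<p>With &#x003B3; &#x0003D; 5 the ratio starts at 0.55 and settles near 0.15 by the end of the cylinder; larger decay constants make the transition sharper.</p>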
<fig id="F7" position="float">
<label>Figure 7</label>
<caption><p>Reproduction of the reaching skill using different reproduction strategies (section 4.2). <bold>(A)</bold> fixed-ratio rule with &#x003B7;<sub>0</sub> &#x0003D; 0.55, <bold>(B)</bold> adaptive-ratio rule with &#x003B7;<sub><italic>f</italic></sub> &#x0003D; 0 (i.e., convergent), <bold>(C)</bold> adaptive-ratio rule with &#x003B7;<sub><italic>f</italic></sub> &#x0003D; 1 (i.e., divergent), <bold>(D)</bold> adaptive-ratio rule (in dark green) with &#x003B7;<sub>0</sub> &#x0003D; 0.55 and &#x003B7;<sub><italic>f</italic></sub> &#x0003D; 0.15. The directrix and the reproductions are depicted in blue and magenta, respectively. All trajectories were reproduced from the same initial point.</p></caption>
<graphic xlink:href="frobt-05-00132-g0007.tif"/>
</fig>
</sec>
<sec>
<title>Convergent Strategy</title>
<p>In most LfD approaches, the reproduction always converges toward the mean of the learned model regardless of the location of the initial point (for example, see the results produced using DMPs, Calinon et al., <xref ref-type="bibr" rid="B12">2010</xref>, in Figure 11). This behavior is suitable when, for instance, the covariance information of the learned model represents the uncertainty, and a reproduced trajectory should avoid staying in uncertain areas and instead converge toward the mean, which is considered to be the most certain shape of the skill learned from the demonstrations. We show that TLGC can mimic such behavior by decaying the initial ratio exponentially from &#x003B7;<sub>0</sub> to &#x003B7;<sub><italic>f</italic></sub> &#x0003D; 0. In this case, Equation (10) can be written as <inline-formula><mml:math id="M29"><mml:msub><mml:mrow><mml:mi>&#x003B7;</mml:mi></mml:mrow><mml:mrow><mml:mi>i</mml:mi><mml:mo>&#x0002B;</mml:mo><mml:mn>1</mml:mn></mml:mrow></mml:msub><mml:mo>=</mml:mo><mml:msub><mml:mrow><mml:mi>&#x003B7;</mml:mi></mml:mrow><mml:mrow><mml:mn>0</mml:mn></mml:mrow></mml:msub><mml:msup><mml:mrow><mml:mi>e</mml:mi></mml:mrow><mml:mrow><mml:mo>-</mml:mo><mml:mi>&#x003B3;</mml:mi><mml:msub><mml:mrow><mml:mi>u</mml:mi></mml:mrow><mml:mrow><mml:mi>i</mml:mi></mml:mrow></mml:msub></mml:mrow></mml:msup></mml:math></inline-formula>. Using this formula together with (8) and (9), the ratio smoothly converges to zero and, consequently, the reproduced trajectory gradually converges toward the directrix. Figure <xref ref-type="fig" rid="F7">7B</xref> depicts the reproduction of the reaching skill using the convergent strategy (i.e., &#x003B7;<sub>0</sub> &#x0003D; 0.55 and &#x003B7;<sub><italic>f</italic></sub> &#x0003D; 0). This experiment shows that TLGC can reproduce trajectories similar to those produced by Dynamic Movement Primitives (DMPs) (Ijspeert et al., <xref ref-type="bibr" rid="B21">2013</xref>).</p>
</sec>
<sec>
<title>Divergent Strategy</title>
<p>While the convergent strategy makes the reproduction mimic the behavior of the directrix, we can also consider a case in which we want the trajectory to remain on the boundary of the generalized cylinder. Such reproductions use the limits of the demonstration space provided by the human teacher. This can be seen as another method for avoiding uncertain areas: since the demonstrations provided by the teacher do not include any information about the area they enclose, reproducing a trajectory similar to the directrix might not always be desirable or even safe. In other words, the user might prefer to stay as close as possible to the known areas of the demonstration space and reproduce the skill similar to the nearest observed examples. Such behavior can be achieved by decaying the initial ratio exponentially from &#x003B7;<sub>0</sub> to &#x003B7;<sub><italic>f</italic></sub> &#x0003D; 1. Equation (10) then simplifies to <inline-formula><mml:math id="M30"><mml:msub><mml:mrow><mml:mi>&#x003B7;</mml:mi></mml:mrow><mml:mrow><mml:mi>i</mml:mi><mml:mo>&#x0002B;</mml:mo><mml:mn>1</mml:mn></mml:mrow></mml:msub><mml:mo>=</mml:mo><mml:mrow><mml:mo stretchy="false">(</mml:mo><mml:mrow><mml:msub><mml:mrow><mml:mi>&#x003B7;</mml:mi></mml:mrow><mml:mrow><mml:mn>0</mml:mn></mml:mrow></mml:msub><mml:mo>-</mml:mo><mml:mn>1</mml:mn></mml:mrow><mml:mo stretchy="false">)</mml:mo></mml:mrow><mml:msup><mml:mrow><mml:mi>e</mml:mi></mml:mrow><mml:mrow><mml:mo>-</mml:mo><mml:mi>&#x003B3;</mml:mi><mml:msub><mml:mrow><mml:mi>u</mml:mi></mml:mrow><mml:mrow><mml:mi>i</mml:mi></mml:mrow></mml:msub></mml:mrow></mml:msup><mml:mo>&#x0002B;</mml:mo><mml:mn>1</mml:mn></mml:math></inline-formula>.</p>
<p>Figure <xref ref-type="fig" rid="F7">7C</xref> shows the reproduction of the reaching skill using the divergent strategy (i.e., &#x003B7;<sub>0</sub> &#x0003D; 0.55 and &#x003B7;<sub><italic>f</italic></sub> &#x0003D; 1). The reproduction uses exponential growth to diverge from the directrix toward the boundary of the GC while still achieving the main goal of the task, which is reaching the object. Table <xref ref-type="table" rid="T1">1</xref> summarizes the settings and properties of the discussed reproduction strategies.</p>
<table-wrap position="float" id="T1">
<label>Table 1</label>
<caption><p>A guideline for setting reproduction strategies for TLGC.</p></caption>
<table frame="hsides" rules="groups">
<thead><tr>
<th valign="top" align="left"><bold>Strategy</bold></th>
<th valign="top" align="left"><bold>Setting</bold></th>
<th valign="top" align="left"><bold>Property</bold></th>
</tr>
</thead>
<tbody>
<tr>
<td valign="top" align="left">Fixed-ratio</td>
<td valign="top" align="left">Use (8), (9), (10) with &#x003B7;<sub><italic>f</italic></sub> &#x0003D; &#x003B7;<sub>0</sub></td>
<td valign="top" align="left">Preserves shape</td>
</tr>
<tr>
<td valign="top" align="left">Convergence</td>
<td valign="top" align="left">Use (8), (9), (10) with &#x003B7;<sub><italic>f</italic></sub> &#x0003D; 0</td>
<td valign="top" align="left">DMP-like reproduction</td>
</tr>
<tr>
<td valign="top" align="left">Divergence</td>
<td valign="top" align="left">Use (8), (9), (10) with &#x003B7;<sub><italic>f</italic></sub> &#x0003D; 1</td>
<td valign="top" align="left">Stays on the GC boundary</td>
</tr>
<tr>
<td valign="top" align="left">Adaptive</td>
<td valign="top" align="left">Use (8), (9), (10) with &#x003B7;<sub><italic>f</italic></sub> set automatically</td>
<td valign="top" align="left">Obstacle avoidance</td>
</tr>
</tbody>
</table>
</table-wrap>
</sec>
</sec>
<sec>
<title>4.3. Generalization</title>
<p>The approach described thus far enables a robot to reproduce the skill inside the GC under similar start and goal states. However, for a robust and flexible skill model, we must ensure that it can generalize to novel situations. We use a nonrigid registration technique (Maintz and Viergever, <xref ref-type="bibr" rid="B25">1998</xref>) to achieve this goal. Given a set of points in a source geometry (i.e., the environment during the demonstration) and a corresponding set of points in a target geometry (i.e., the environment during reproduction), nonrigid registration computes a spatial deformation function. This deformation function is then applied to the constructed generalized cylinder to adapt the skill to the new state of the environment during reproduction.</p>
<p>Nonrigid registration techniques have been widely used in medical imaging (Maintz and Viergever, <xref ref-type="bibr" rid="B25">1998</xref>), computer vision (Belongie et al., <xref ref-type="bibr" rid="B7">2002</xref>), and 3D modeling communities (Pauly et al., <xref ref-type="bibr" rid="B31">2005</xref>). Recently, Schulman et al. (<xref ref-type="bibr" rid="B36">2016</xref>) demonstrated the usefulness of nonrigid registration in LfD by employing it for autonomous knot tying. Their proposed <italic>trajectory transfer</italic> method is based on the classic Thin Plate Splines (TPS) registration algorithm (Bookstein, <xref ref-type="bibr" rid="B9">1989</xref>) extended to 3D Cartesian space, which we also utilize here.</p>
<p>Consider a source geometry composed of a set of <italic>N</italic> landmark points in 3D Cartesian space, <inline-formula><mml:math id="M31"><mml:mrow><mml:mi mathvariant="-tex-caligraphic">L</mml:mi></mml:mrow><mml:mo>=</mml:mo><mml:mrow><mml:mo>{</mml:mo><mml:mrow><mml:msub><mml:mrow><mml:mstyle mathvariant="italic"><mml:mi>L</mml:mi></mml:mstyle></mml:mrow><mml:mrow><mml:mi>n</mml:mi></mml:mrow></mml:msub><mml:mo>&#x02208;</mml:mo><mml:msup><mml:mrow><mml:mi>&#x0211D;</mml:mi></mml:mrow><mml:mrow><mml:mn>3</mml:mn></mml:mrow></mml:msup><mml:mo>|</mml:mo><mml:mi>n</mml:mi><mml:mo>=</mml:mo><mml:mn>1</mml:mn><mml:mo>,</mml:mo><mml:mn>2</mml:mn><mml:mo>,</mml:mo><mml:mo>&#x02026;</mml:mo><mml:mo>,</mml:mo><mml:mi>N</mml:mi></mml:mrow><mml:mo>}</mml:mo></mml:mrow></mml:math></inline-formula>, and a target geometry composed of the corresponding set of landmark points, <inline-formula><mml:math id="M32"><mml:msup><mml:mrow><mml:mrow><mml:mi mathvariant="-tex-caligraphic">L</mml:mi></mml:mrow></mml:mrow><mml:mrow><mml:mi>&#x02032;</mml:mi></mml:mrow></mml:msup><mml:mo>=</mml:mo><mml:mrow><mml:mo>{</mml:mo><mml:mrow><mml:msubsup><mml:mrow><mml:mstyle mathvariant="italic"><mml:mi>L</mml:mi></mml:mstyle></mml:mrow><mml:mrow><mml:mi>n</mml:mi></mml:mrow><mml:mrow><mml:mi>&#x02032;</mml:mi></mml:mrow></mml:msubsup><mml:mo>&#x02208;</mml:mo><mml:msup><mml:mrow><mml:mi>&#x0211D;</mml:mi></mml:mrow><mml:mrow><mml:mn>3</mml:mn></mml:mrow></mml:msup><mml:mo>|</mml:mo><mml:mi>n</mml:mi><mml:mo>=</mml:mo><mml:mn>1</mml:mn><mml:mo>,</mml:mo><mml:mn>2</mml:mn><mml:mo>,</mml:mo><mml:mo>&#x02026;</mml:mo><mml:mo>,</mml:mo><mml:mi>N</mml:mi></mml:mrow><mml:mo>}</mml:mo></mml:mrow></mml:math></inline-formula>. 
The nonrigid registration problem then is to find an interpolation function <bold>z</bold>:&#x0211D;<sup>3</sup>&#x021A6;&#x0211D;<sup>3</sup> constrained to map the points in <inline-formula><mml:math id="M33"><mml:mrow><mml:mi mathvariant="-tex-caligraphic">L</mml:mi></mml:mrow></mml:math></inline-formula> to the points in <inline-formula><mml:math id="M34"><mml:msup><mml:mrow><mml:mrow><mml:mi mathvariant="-tex-caligraphic">L</mml:mi></mml:mrow></mml:mrow><mml:mrow><mml:mi>&#x02032;</mml:mi></mml:mrow></mml:msup></mml:math></inline-formula>. However, there are infinitely many such interpolation functions. To address this issue, TPS finds an interpolation function that achieves the optimal trade-off between minimizing the distance between the landmarks and minimizing the so-called <italic>bending energy</italic>, in effect finding a smooth interpolator. The TPS formulation is given by</p>
<disp-formula id="E13"><label>(11)</label><mml:math id="M35"><mml:munder><mml:mstyle mathsize='140%' displaystyle='true'><mml:mrow><mml:mi>min</mml:mi></mml:mrow></mml:mstyle><mml:mtext>z</mml:mtext></mml:munder><mml:mrow><mml:mo>{</mml:mo> <mml:mrow><mml:mstyle displaystyle='true'><mml:munderover><mml:mo>&#x02211;</mml:mo><mml:mrow><mml:mi>n</mml:mi><mml:mo>=</mml:mo><mml:mn>1</mml:mn></mml:mrow><mml:mi>N</mml:mi></mml:munderover><mml:mrow><mml:mo>&#x02016;</mml:mo><mml:msub><mml:msup><mml:mi>L</mml:mi><mml:mo>&#x02032;</mml:mo></mml:msup><mml:mi>n</mml:mi></mml:msub><mml:mo>&#x02212;</mml:mo><mml:mstyle mathvariant='bold' mathsize='normal'><mml:mi>z</mml:mi></mml:mstyle><mml:mo stretchy='false'>(</mml:mo></mml:mrow></mml:mstyle><mml:msub><mml:mi>L</mml:mi><mml:mi>n</mml:mi></mml:msub><mml:mo stretchy='false'>)</mml:mo><mml:msup><mml:mo>&#x02016;</mml:mo><mml:mn>2</mml:mn></mml:msup><mml:mo>+</mml:mo><mml:mi>&#x003BB;</mml:mi><mml:mstyle displaystyle='true'><mml:mrow><mml:msub><mml:mo>&#x0222B;</mml:mo><mml:mrow><mml:msup><mml:mi>&#x0211D;</mml:mi><mml:mn>3</mml:mn></mml:msup></mml:mrow></mml:msub><mml:mi>d</mml:mi></mml:mrow></mml:mstyle><mml:mstyle mathvariant='bold' mathsize='normal'><mml:mi>x</mml:mi></mml:mstyle><mml:mstyle displaystyle='true'><mml:munder><mml:mo>&#x02211;</mml:mo><mml:mrow><mml:mi>i</mml:mi><mml:mo>&#x02208;</mml:mo><mml:mo>&#x0007B;</mml:mo><mml:mn>1</mml:mn><mml:mo>,</mml:mo><mml:mn>2</mml:mn><mml:mo>,</mml:mo><mml:mn>3</mml:mn><mml:mo>&#x0007D;</mml:mo></mml:mrow></mml:munder><mml:mrow><mml:mo>&#x02016;</mml:mo><mml:msup><mml:mo>&#x025BD;</mml:mo><mml:mn>2</mml:mn></mml:msup><mml:msub><mml:mi>z</mml:mi><mml:mi>i</mml:mi></mml:msub><mml:mo stretchy='false'>(</mml:mo><mml:mstyle mathvariant='bold' mathsize='normal'><mml:mi>x</mml:mi></mml:mstyle><mml:mo stretchy='false'>)</mml:mo><mml:msubsup><mml:mo>&#x02016;</mml:mo><mml:mi>F</mml:mi><mml:mn>2</mml:mn></mml:msubsup></mml:mrow></mml:mstyle></mml:mrow> 
<mml:mo>}</mml:mo></mml:mrow></mml:math></disp-formula>
<p>where <inline-formula><mml:math id="M36"><mml:msup><mml:mrow><mml:mi>&#x025BD;</mml:mi></mml:mrow><mml:mrow><mml:mn>2</mml:mn></mml:mrow></mml:msup><mml:msub><mml:mrow><mml:mi>z</mml:mi></mml:mrow><mml:mrow><mml:mi>i</mml:mi></mml:mrow></mml:msub></mml:math></inline-formula> represents the Hessian matrix of the <italic>i</italic>th dimension of the image of <bold>z</bold>, &#x003BB; is a regularization parameter, and ||.||<sub><italic>F</italic></sub> is the Frobenius norm. The integral term represents the bending energy. Minimizing the bending energy term in our case is equivalent to minimizing the dissimilarity between the initial and deformed generalized cylinder (i.e., preserving the shape of the skill). The interpolation function <bold>z</bold> which solves (11) consists of two parts: an affine part and a non-affine part. The affine part approximates the overall deformation of the geometry acting globally, while the non-affine part represents the local residual adjustments forced by individual landmark points. 
With the non-affine part expanded in terms of the basis function &#x003D5;, <bold>z</bold> can be represented as <inline-formula><mml:math id="M37"><mml:mstyle class="text"><mml:mtext mathvariant="bold">z</mml:mtext></mml:mstyle><mml:mrow><mml:mo stretchy="false">(</mml:mo><mml:mrow><mml:mstyle class="text"><mml:mtext mathvariant="bold">x</mml:mtext></mml:mstyle></mml:mrow><mml:mo stretchy="false">)</mml:mo></mml:mrow><mml:mo>=</mml:mo><mml:mstyle class="text"><mml:mtext mathvariant="bold">b</mml:mtext></mml:mstyle><mml:mo>&#x0002B;</mml:mo><mml:mstyle class="text"><mml:mtext mathvariant="bold">A</mml:mtext></mml:mstyle><mml:mstyle class="text"><mml:mtext mathvariant="bold">x</mml:mtext></mml:mstyle><mml:mo>&#x0002B;</mml:mo><mml:munderover accentunder="false" accent="false"><mml:mrow><mml:mo>&#x02211;</mml:mo></mml:mrow><mml:mrow><mml:mi>n</mml:mi><mml:mo>=</mml:mo><mml:mn>1</mml:mn></mml:mrow><mml:mrow><mml:mi>N</mml:mi></mml:mrow></mml:munderover><mml:msub><mml:mrow><mml:mstyle class="text"><mml:mtext mathvariant="bold">w</mml:mtext></mml:mstyle></mml:mrow><mml:mrow><mml:mi>n</mml:mi></mml:mrow></mml:msub><mml:mi>&#x003D5;</mml:mi><mml:mrow><mml:mo stretchy="false">(</mml:mo><mml:mrow><mml:msub><mml:mrow><mml:mstyle mathvariant="italic"><mml:mi>L</mml:mi></mml:mstyle></mml:mrow><mml:mrow><mml:mi>n</mml:mi></mml:mrow></mml:msub><mml:mo>,</mml:mo><mml:mstyle class="text"><mml:mtext mathvariant="bold">x</mml:mtext></mml:mstyle></mml:mrow><mml:mo stretchy="false">)</mml:mo></mml:mrow></mml:math></inline-formula> where <bold>b</bold> &#x02208; &#x0211D;<sup>3</sup>, <bold>A</bold> &#x02208; &#x0211D;<sup>3 &#x000D7; 3</sup> and <inline-formula><mml:math id="M38"><mml:msub><mml:mrow><mml:mstyle class="text"><mml:mtext 
mathvariant="bold">w</mml:mtext></mml:mstyle></mml:mrow><mml:mrow><mml:mi>n</mml:mi></mml:mrow></mml:msub><mml:mo>&#x02208;</mml:mo><mml:msup><mml:mrow><mml:mi>&#x0211D;</mml:mi></mml:mrow><mml:mrow><mml:mn>3</mml:mn></mml:mrow></mml:msup></mml:math></inline-formula> are the unknown parameters while the basis function is defined as <inline-formula><mml:math id="M39"><mml:mi>&#x003D5;</mml:mi><mml:mrow><mml:mo stretchy="false">(</mml:mo><mml:mrow><mml:msub><mml:mrow><mml:mstyle mathvariant="italic"><mml:mi>L</mml:mi></mml:mstyle></mml:mrow><mml:mrow><mml:mi>n</mml:mi></mml:mrow></mml:msub><mml:mo>,</mml:mo><mml:mstyle class="text"><mml:mtext mathvariant="bold">x</mml:mtext></mml:mstyle></mml:mrow><mml:mo stretchy="false">)</mml:mo></mml:mrow><mml:mo>=</mml:mo><mml:mo>|</mml:mo><mml:mo>|</mml:mo><mml:msub><mml:mrow><mml:mstyle mathvariant="italic"><mml:mi>L</mml:mi></mml:mstyle></mml:mrow><mml:mrow><mml:mi>n</mml:mi></mml:mrow></mml:msub><mml:mo>-</mml:mo><mml:mstyle class="text"><mml:mtext mathvariant="bold">x</mml:mtext></mml:mstyle><mml:mo>|</mml:mo><mml:mo>|</mml:mo><mml:mo>&#x02200;</mml:mo><mml:mstyle class="text"><mml:mtext mathvariant="bold">x</mml:mtext></mml:mstyle><mml:mo>&#x02208;</mml:mo><mml:msup><mml:mrow><mml:mi>&#x0211D;</mml:mi></mml:mrow><mml:mrow><mml:mn>3</mml:mn></mml:mrow></mml:msup></mml:math></inline-formula>. The unknown parameters <bold>b</bold>, <bold>A</bold>, and <bold>w</bold><sub><italic>n</italic></sub> can be found using matrix manipulation (Bookstein, <xref ref-type="bibr" rid="B9">1989</xref>).</p>
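<p>The TPS fit and the resulting interpolation function <bold>z</bold> can be sketched as follows in stdlib-only Python (exact interpolation, i.e., the &#x003BB; &#x0003D; 0 limit). The linear system stacks the interpolation constraints with the side conditions that make the non-affine weights orthogonal to the affine part; <monospace>tps_fit</monospace> and the landmark data are our own illustration, not the authors' code.</p>

```python
import math

def tps_fit(src, dst):
    """3D thin-plate spline z(x) = b + A x + sum_n w_n * ||L_n - x||
    mapping source landmarks src exactly onto target landmarks dst.

    Builds the bordered system [[K, P], [P^T, 0]] with K_ij = ||L_i - L_j||
    and P_i = (1, L_i), then solves it for the three output coordinates by
    Gauss-Jordan elimination with partial pivoting."""
    n, m = len(src), len(src) + 4
    M = [[0.0] * m for _ in range(m)]
    for i in range(n):
        for j in range(n):
            M[i][j] = math.dist(src[i], src[j])  # basis phi(L_i, x) = ||L_i - x||
        M[i][n] = M[n][i] = 1.0
        for k in range(3):
            M[i][n + 1 + k] = M[n + 1 + k][i] = src[i][k]
    rhs = [list(dst[i]) if i < n else [0.0] * 3 for i in range(m)]
    for col in range(m):
        piv = max(range(col, m), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        rhs[col], rhs[piv] = rhs[piv], rhs[col]
        for r in range(m):
            if r != col and M[r][col] != 0.0:
                f = M[r][col] / M[col][col]
                M[r] = [a - f * b for a, b in zip(M[r], M[col])]
                rhs[r] = [a - f * b for a, b in zip(rhs[r], rhs[col])]
    sol = [[rhs[r][k] / M[r][r] for k in range(3)] for r in range(m)]
    w, b, A = sol[:n], sol[n], sol[n + 1:]

    def z(x):
        # Affine part b + A x plus the weighted radial basis terms.
        out = [b[k] + sum(A[j][k] * x[j] for j in range(3)) for k in range(3)]
        for i in range(n):
            phi = math.dist(src[i], x)
            for k in range(3):
                out[k] += w[i][k] * phi
        return tuple(out)
    return z
```

<p>For instance, if the only landmark change is a shifted goal object, <monospace>z</monospace> deforms the directrix and cross-sections toward the new goal while the bending-energy penalty keeps the overall shape of the cylinder.</p>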
<p>In the generalization procedure using TPS (detailed in Algorithm <xref ref-type="table" rid="T4">3</xref>), the source geometry is composed of the locations of the important landmarks in the workspace during the demonstration while the corresponding target geometry is composed of the new locations of the landmark points. For instance, in the reaching skill, the location of the object is considered as the source landmark and the target landmark is the new location of the object during the reproduction. Given the landmarks, the algorithm first finds the interpolation function <bold>z</bold> using the nonrigid registration method (line 4 in Algorithm <xref ref-type="table" rid="T4">3</xref>). The algorithm then uses <bold>z</bold> to transform the directrix <bold>m</bold>&#x021A6;<bold>m</bold>&#x02032; and the cross-sectional function <inline-formula><mml:math id="M40"><mml:mrow><mml:mi mathvariant="-tex-caligraphic">P</mml:mi></mml:mrow><mml:mo>&#x021A6;</mml:mo><mml:msup><mml:mrow><mml:mrow><mml:mi mathvariant="-tex-caligraphic">P</mml:mi></mml:mrow></mml:mrow><mml:mrow><mml:mi>&#x02032;</mml:mi></mml:mrow></mml:msup></mml:math></inline-formula> (lines 5 and 6 in Algorithm <xref ref-type="table" rid="T4">3</xref>). The new generalized cylinder <inline-formula><mml:math id="M41"><mml:msubsup><mml:mrow><mml:mrow><mml:mi mathvariant="-tex-caligraphic">G</mml:mi></mml:mrow></mml:mrow><mml:mrow><mml:mi>u</mml:mi><mml:mo>,</mml:mo><mml:mi>v</mml:mi></mml:mrow><mml:mrow><mml:mi>&#x02032;</mml:mi></mml:mrow></mml:msubsup></mml:math></inline-formula> is then constructed using the mapped parameters (line 7 in Algorithm <xref ref-type="table" rid="T4">3</xref>). 
To reproduce the skill in <inline-formula><mml:math id="M42"><mml:msubsup><mml:mrow><mml:mrow><mml:mi mathvariant="-tex-caligraphic">G</mml:mi></mml:mrow></mml:mrow><mml:mrow><mml:mi>u</mml:mi><mml:mo>,</mml:mo><mml:mi>v</mml:mi></mml:mrow><mml:mrow><mml:mi>&#x02032;</mml:mi></mml:mrow></mml:msubsup></mml:math></inline-formula>, the reproduction methods from section 4.2 can be employed. It has to be noted that the set of landmarks is not limited to only the initial and final points on the trajectory (e.g., location of the object) and can include any point on the trajectory. In section 7, we use this concept for obstacle avoidance.</p>
<table-wrap position="float" id="T4">
<caption><p><bold>Algorithm 3</bold> Generalization of GC using TPS</p></caption>
<table frame="hsides" rules="groups">
<tbody>
<tr><td align="left" valign="top"><monospace> 1:</monospace></td>
<td align="left" valign="top"><monospace>&#x000A0;&#x000A0;procedure <sc>GeneralizeGC</sc> </monospace></td></tr>
<tr><td align="left" valign="top"><monospace> 2: </monospace></td>
<td align="left" valign="top"><monospace>&#x000A0;&#x000A0;&#x000A0;Input:<inline-formula><mml:math id="M158"><mml:msub><mml:mrow><mml:mrow><mml:mi mathvariant="-tex-caligraphic">G</mml:mi></mml:mrow></mml:mrow><mml:mrow><mml:mi>u</mml:mi><mml:mo>,</mml:mo><mml:mi>v</mml:mi></mml:mrow></mml:msub></mml:math></inline-formula>, frames <inline-formula><mml:math id="M159"><mml:msub><mml:mrow><mml:mrow><mml:mi mathvariant="-tex-caligraphic">F</mml:mi></mml:mrow></mml:mrow><mml:mrow><mml:mi>u</mml:mi></mml:mrow></mml:msub></mml:math></inline-formula>, source &#x00026; target landmarks <inline-formula><mml:math id="M160"><mml:mrow><mml:mi mathvariant="-tex-caligraphic">L</mml:mi></mml:mrow></mml:math></inline-formula>, <inline-formula><mml:math id="M161"><mml:msup><mml:mrow><mml:mrow><mml:mi mathvariant="-tex-caligraphic">L</mml:mi></mml:mrow></mml:mrow><mml:mrow><mml:mi>&#x02032;</mml:mi></mml:mrow></mml:msup></mml:math></inline-formula> </monospace></td></tr>
<tr><td align="left" valign="top"><monospace> 3:</monospace></td>
<td align="left" valign="top"><monospace>&#x000A0;&#x000A0;&#x000A0;Output:target <inline-formula><mml:math id="M162"><mml:msubsup><mml:mrow><mml:mrow><mml:mi mathvariant="-tex-caligraphic">G</mml:mi></mml:mrow></mml:mrow><mml:mrow><mml:mi>u</mml:mi><mml:mo>,</mml:mo><mml:mi>v</mml:mi></mml:mrow><mml:mrow><mml:mi>&#x02032;</mml:mi></mml:mrow></mml:msubsup></mml:math></inline-formula>, target frames <inline-formula><mml:math id="M163"><mml:msubsup><mml:mrow><mml:mrow><mml:mi mathvariant="-tex-caligraphic">F</mml:mi></mml:mrow></mml:mrow><mml:mrow><mml:mi>u</mml:mi></mml:mrow><mml:mrow><mml:mi>&#x02032;</mml:mi></mml:mrow></mml:msubsup></mml:math></inline-formula> </monospace></td></tr>
<tr><td align="left" valign="top"><monospace> 4:</monospace></td>
<td align="left" valign="top"><monospace>&#x000A0;&#x000A0;&#x000A0;<inline-formula><mml:math id="M164"><mml:mstyle mathvariant="bold"><mml:mtext>z</mml:mtext></mml:mstyle><mml:mo>&#x02190;</mml:mo><mml:mstyle class="text"><mml:mtext class="textit" mathvariant="italic">findTPS</mml:mtext></mml:mstyle><mml:mrow><mml:mo stretchy="false">(</mml:mo><mml:mrow><mml:mrow><mml:mi mathvariant="-tex-caligraphic">L</mml:mi></mml:mrow><mml:mo>,</mml:mo><mml:msup><mml:mrow><mml:mrow><mml:mi mathvariant="-tex-caligraphic">L</mml:mi></mml:mrow></mml:mrow><mml:mrow><mml:mi>&#x02032;</mml:mi></mml:mrow></mml:msup></mml:mrow><mml:mo stretchy="false">)</mml:mo></mml:mrow></mml:math></inline-formula> <italic>solve</italic> (11) </monospace></td></tr>
<tr><td align="left" valign="top"><monospace> 5:</monospace></td>
<td align="left" valign="top"><monospace>&#x000A0;&#x000A0;&#x000A0;<bold>m</bold>&#x02032;(<italic>u</italic>)&#x02190;<bold>z</bold><bold>m</bold>(<italic>u</italic>) </monospace></td></tr>
<tr><td align="left" valign="top"><monospace> 6:</monospace></td>
<td align="left" valign="top"><monospace>&#x000A0;&#x000A0;&#x000A0;<inline-formula><mml:math id="M165"><mml:msup><mml:mrow><mml:mrow><mml:mi mathvariant="-tex-caligraphic">P</mml:mi></mml:mrow></mml:mrow><mml:mrow><mml:mi>&#x02032;</mml:mi></mml:mrow></mml:msup><mml:mrow><mml:mo stretchy="false">(</mml:mo><mml:mrow><mml:mi>u</mml:mi><mml:mo>,</mml:mo><mml:mi>v</mml:mi></mml:mrow><mml:mo stretchy="false">)</mml:mo></mml:mrow><mml:mo>&#x02190;</mml:mo><mml:mstyle mathvariant="bold"><mml:mtext>z</mml:mtext></mml:mstyle><mml:mstyle mathsize="1.19em"><mml:mrow></mml:mrow></mml:mstyle><mml:mrow><mml:mi mathvariant="-tex-caligraphic">P</mml:mi></mml:mrow><mml:mrow><mml:mo stretchy="false">(</mml:mo><mml:mrow><mml:mi>u</mml:mi><mml:mo>,</mml:mo><mml:mi>v</mml:mi></mml:mrow><mml:mo stretchy="false">)</mml:mo></mml:mrow><mml:mstyle mathsize="1.19em"><mml:mrow></mml:mrow></mml:mstyle></mml:math></inline-formula> </monospace></td></tr>
<tr><td align="left" valign="top"><monospace> 7:</monospace></td>
<td align="left" valign="top"><monospace>&#x000A0;&#x000A0;&#x000A0;<inline-formula><mml:math id="M166"><mml:msubsup><mml:mrow><mml:mrow><mml:mi mathvariant="-tex-caligraphic">G</mml:mi></mml:mrow></mml:mrow><mml:mrow><mml:mi>u</mml:mi><mml:mo>,</mml:mo><mml:mi>v</mml:mi></mml:mrow><mml:mrow><mml:mi>&#x02032;</mml:mi></mml:mrow></mml:msubsup><mml:mo>,</mml:mo><mml:msup><mml:mrow><mml:mrow><mml:mi mathvariant="-tex-caligraphic">F</mml:mi></mml:mrow></mml:mrow><mml:mrow><mml:mi>&#x02032;</mml:mi></mml:mrow></mml:msup><mml:mo>&#x02190;</mml:mo><mml:mstyle class="text"><mml:mtext class="textit" mathvariant="italic">makeGeneralizedCylinder</mml:mtext></mml:mstyle><mml:mrow><mml:mo stretchy="false">(</mml:mo><mml:mrow><mml:msup><mml:mrow><mml:mstyle mathvariant="bold"><mml:mtext>m</mml:mtext></mml:mstyle></mml:mrow><mml:mrow><mml:mi>&#x02032;</mml:mi></mml:mrow></mml:msup><mml:mrow><mml:mo stretchy="false">(</mml:mo><mml:mrow><mml:mi>u</mml:mi></mml:mrow><mml:mo stretchy="false">)</mml:mo></mml:mrow><mml:mo>,</mml:mo><mml:msup><mml:mrow><mml:mrow><mml:mi mathvariant="-tex-caligraphic">P</mml:mi></mml:mrow></mml:mrow><mml:mrow><mml:mi>&#x02032;</mml:mi></mml:mrow></mml:msup><mml:mrow><mml:mo stretchy="false">(</mml:mo><mml:mrow><mml:mi>u</mml:mi><mml:mo>,</mml:mo><mml:mi>v</mml:mi></mml:mrow><mml:mo stretchy="false">)</mml:mo></mml:mrow></mml:mrow><mml:mo stretchy="false">)</mml:mo></mml:mrow></mml:math></inline-formula> </monospace></td></tr>
</tbody>
</table>
</table-wrap>
<p>To evaluate the proposed approach, we used TPS to generalize the GC learned for the pick-and-place skill (experiment five in section 5.1) over four novel final states of the task. We selected two landmarks: the location of the object and the location of the box. While the object landmark remained unchanged, the box landmark was moved during reproduction. The results of employing Algorithm <xref ref-type="table" rid="T4">3</xref> are depicted in Figure <xref ref-type="fig" rid="F10">10A</xref>. It can be seen that the skill generalizes successfully to the four desired novel locations of the box.</p>
<p>One of the drawbacks of the non-rigid registration technique is that it can lead to non-linear deformations (see section 5.2 for a discussion about the robustness of the TPS approach). To address this issue, alternate generalization techniques such as Laplacian Trajectory Editing (LTE) (Nierhoff et al., <xref ref-type="bibr" rid="B29">2016</xref>) can be used. LTE interprets a trajectory <inline-formula><mml:math id="M43"><mml:mstyle mathvariant="bold"><mml:mtext>P</mml:mtext></mml:mstyle><mml:mo>=</mml:mo><mml:msup><mml:mrow><mml:mrow><mml:mo>[</mml:mo><mml:mrow><mml:mstyle mathvariant="bold"><mml:mtext>p</mml:mtext></mml:mstyle><mml:mrow><mml:mo stretchy="false">(</mml:mo><mml:mrow><mml:msub><mml:mrow><mml:mi>t</mml:mi></mml:mrow><mml:mrow><mml:mn>1</mml:mn></mml:mrow></mml:msub></mml:mrow><mml:mo stretchy="false">)</mml:mo></mml:mrow><mml:mo>,</mml:mo><mml:mo>&#x02026;</mml:mo><mml:mo>,</mml:mo><mml:mstyle mathvariant="bold"><mml:mtext>p</mml:mtext></mml:mstyle><mml:mrow><mml:mo stretchy="false">(</mml:mo><mml:mrow><mml:msub><mml:mrow><mml:mi>t</mml:mi></mml:mrow><mml:mrow><mml:mi>n</mml:mi></mml:mrow></mml:msub></mml:mrow><mml:mo stretchy="false">)</mml:mo></mml:mrow></mml:mrow><mml:mo>]</mml:mo></mml:mrow></mml:mrow><mml:mrow><mml:mo>&#x022A4;</mml:mo></mml:mrow></mml:msup></mml:math></inline-formula> as an undirected graph and assigns uniform umbrella weights, <italic>w</italic><sub><italic>ij</italic></sub> &#x0003D; 1, to the edges <italic>e</italic><sub><italic>ij</italic></sub> if <italic>i</italic> and <italic>j</italic> are neighbors and <italic>w</italic><sub><italic>ij</italic></sub> &#x0003D; 0 otherwise.</p>
<p>Local path properties are specified using the discrete Laplace-Beltrami operator as</p>
<disp-formula id="E14"><label>(12)</label><mml:math id="M44"><mml:mtable class="eqnarray" columnalign="right center left"><mml:mtr><mml:mtd><mml:msub><mml:mrow><mml:mstyle mathvariant="bold"><mml:mtext>&#x003B4;</mml:mtext></mml:mstyle></mml:mrow><mml:mrow><mml:mi>i</mml:mi></mml:mrow></mml:msub><mml:mo>=</mml:mo><mml:mstyle displaystyle="true"><mml:munder class="msub"><mml:mrow><mml:mo>&#x02211;</mml:mo></mml:mrow><mml:mrow><mml:mi>j</mml:mi></mml:mrow></mml:munder></mml:mstyle><mml:mfrac><mml:mrow><mml:msub><mml:mrow><mml:mi>w</mml:mi></mml:mrow><mml:mrow><mml:mi>i</mml:mi><mml:mi>j</mml:mi></mml:mrow></mml:msub></mml:mrow><mml:mrow><mml:mo>&#x02211;</mml:mo><mml:msub><mml:mrow><mml:mi>w</mml:mi></mml:mrow><mml:mrow><mml:mi>i</mml:mi><mml:mi>j</mml:mi></mml:mrow></mml:msub></mml:mrow></mml:mfrac><mml:mrow><mml:mo stretchy="false">(</mml:mo><mml:mrow><mml:msub><mml:mrow><mml:mstyle mathvariant="bold-italic"><mml:mtext>p</mml:mtext></mml:mstyle></mml:mrow><mml:mrow><mml:mi>i</mml:mi></mml:mrow></mml:msub><mml:mo>-</mml:mo><mml:msub><mml:mrow><mml:mstyle mathvariant="bold-italic"><mml:mtext>p</mml:mtext></mml:mstyle></mml:mrow><mml:mrow><mml:mi>j</mml:mi></mml:mrow></mml:msub></mml:mrow><mml:mo stretchy="false">)</mml:mo></mml:mrow><mml:mo>.</mml:mo></mml:mtd></mml:mtr></mml:mtable></mml:math></disp-formula>
<p>For the whole graph, (12) can be written as <bold>&#x00394;</bold> &#x0003D; <bold>LP</bold> where <inline-formula><mml:math id="M45"><mml:mstyle mathvariant="bold"><mml:mtext>&#x00394;</mml:mtext></mml:mstyle><mml:mo>=</mml:mo><mml:msup><mml:mrow><mml:mrow><mml:mo>[</mml:mo><mml:mrow><mml:msub><mml:mrow><mml:mstyle mathvariant="bold"><mml:mtext>&#x003B4;</mml:mtext></mml:mstyle></mml:mrow><mml:mrow><mml:mn>1</mml:mn></mml:mrow></mml:msub><mml:mo>,</mml:mo><mml:mo>&#x02026;</mml:mo><mml:mo>,</mml:mo><mml:msub><mml:mrow><mml:mstyle mathvariant="bold"><mml:mtext>&#x003B4;</mml:mtext></mml:mstyle></mml:mrow><mml:mrow><mml:mi>n</mml:mi></mml:mrow></mml:msub></mml:mrow><mml:mo>]</mml:mo></mml:mrow></mml:mrow><mml:mrow><mml:mo>&#x022A4;</mml:mo></mml:mrow></mml:msup></mml:math></inline-formula> and <bold>L</bold> is defined as</p>
<disp-formula id="E15"><label>(13)</label><mml:math id="M46"><mml:msub><mml:mstyle mathvariant='bold' mathsize='normal'><mml:mi>L</mml:mi></mml:mstyle><mml:mrow><mml:mi>i</mml:mi><mml:mi>j</mml:mi></mml:mrow></mml:msub><mml:mo>=</mml:mo><mml:mrow><mml:mo>{</mml:mo> <mml:mrow><mml:mtable><mml:mtr><mml:mtd><mml:mn>1</mml:mn></mml:mtd><mml:mtd><mml:mrow><mml:mi>i</mml:mi><mml:mo>=</mml:mo><mml:mi>j</mml:mi></mml:mrow></mml:mtd></mml:mtr><mml:mtr><mml:mtd><mml:mrow><mml:mo>&#x02212;</mml:mo><mml:msub><mml:mi>w</mml:mi><mml:mrow><mml:mi>i</mml:mi><mml:mi>j</mml:mi></mml:mrow></mml:msub><mml:mo>/</mml:mo><mml:mstyle displaystyle='true'><mml:munder><mml:mo>&#x02211;</mml:mo><mml:mi>j</mml:mi></mml:munder><mml:mrow><mml:msub><mml:mi>w</mml:mi><mml:mrow><mml:mi>i</mml:mi><mml:mi>j</mml:mi></mml:mrow></mml:msub></mml:mrow></mml:mstyle></mml:mrow></mml:mtd><mml:mtd><mml:mrow><mml:mtext>isNeighbor</mml:mtext><mml:mo stretchy='false'>(</mml:mo><mml:mi>i</mml:mi><mml:mo>,</mml:mo><mml:mi>j</mml:mi><mml:mo stretchy='false'>)</mml:mo></mml:mrow></mml:mtd></mml:mtr><mml:mtr><mml:mtd><mml:mn>0</mml:mn></mml:mtd><mml:mtd><mml:mrow><mml:mtext>otherwise</mml:mtext></mml:mrow></mml:mtd></mml:mtr></mml:mtable></mml:mrow> </mml:mrow></mml:math></disp-formula>
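<p>To make (12) and (13) concrete, the following sketch (plain Python; the helper names are ours for illustration) builds <bold>L</bold> for a discretized path with uniform umbrella weights and evaluates the Laplacian coordinates <bold>&#x00394;</bold> &#x0003D; <bold>LP</bold>:</p>

```python
def path_laplacian(n):
    """Graph Laplacian of Eq. (13) for a path of n points with uniform
    umbrella weights w_ij = 1 between consecutive neighbors."""
    L = [[0.0] * n for _ in range(n)]
    for i in range(n):
        nbrs = [j for j in (i - 1, i + 1) if 0 <= j < n]
        L[i][i] = 1.0                        # diagonal case: i = j
        for j in nbrs:
            L[i][j] = -1.0 / len(nbrs)      # -w_ij / sum_j w_ij for neighbors
    return L

def laplacian_coords(L, P):
    """Delta = L P, Eq. (12): each delta_i is the offset of p_i from the
    weighted average of its neighbors, i.e., the local shape detail."""
    return [[sum(L[i][k] * P[k][d] for k in range(len(P)))
             for d in range(len(P[0]))] for i in range(len(L))]
```

For an interior point this gives δ<sub><italic>i</italic></sub> = <bold>p</bold><sub><italic>i</italic></sub> &#x02212; (<bold>p</bold><sub><italic>i</italic>&#x02212;1</sub> + <bold>p</bold><sub><italic>i</italic>+1</sub>)/2, so a straight, evenly spaced segment has vanishing interior Laplacian coordinates.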
<p>LTE solves <bold>&#x00394;</bold> &#x0003D; <italic><bold>LP</bold></italic> using least squares by specifying additional constraints <bold>C</bold> (similar to the landmarks in non-rigid registration) as</p>
<disp-formula id="E16"><label>(14)</label><mml:math id="M47"><mml:msub><mml:mstyle mathvariant='bold-italic' mathsize='normal'><mml:mi>P</mml:mi></mml:mstyle><mml:mi>d</mml:mi></mml:msub><mml:mo>=</mml:mo><mml:msup><mml:mrow><mml:mo>[</mml:mo> <mml:mrow><mml:mtable><mml:mtr><mml:mtd><mml:mstyle mathvariant='bold-italic' mathsize='normal'><mml:mi>L</mml:mi></mml:mstyle></mml:mtd></mml:mtr><mml:mtr><mml:mtd><mml:mstyle mathvariant='bold-italic' mathsize='normal'><mml:mi>P</mml:mi></mml:mstyle></mml:mtd></mml:mtr></mml:mtable></mml:mrow> <mml:mo>]</mml:mo></mml:mrow><mml:mo>&#x02020;</mml:mo></mml:msup><mml:mrow><mml:mo>[</mml:mo> <mml:mrow><mml:mtable><mml:mtr><mml:mtd><mml:mstyle mathvariant='bold-italic' mathsize='normal'><mml:mi>&#x00394;</mml:mi></mml:mstyle></mml:mtd></mml:mtr><mml:mtr><mml:mtd><mml:mstyle mathvariant='bold-italic' mathsize='normal'><mml:mi>C</mml:mi></mml:mstyle></mml:mtd></mml:mtr></mml:mtable></mml:mrow> <mml:mo>]</mml:mo></mml:mrow><mml:mo>,</mml:mo></mml:math></disp-formula>
<p>where <bold><italic>P</italic></bold><sub><italic>d</italic></sub> is the deformed graph and &#x02020; denotes the pseudo-inverse. To handle non-linear deformations, LTE calculates elements of the homogeneous transformation that maps the source landmarks <bold>p</bold><sub><italic>L</italic><sub><italic>S</italic></sub>, <italic>i</italic></sub> to the target landmarks <bold>p</bold><sub><italic>L</italic><sub><italic>T</italic></sub>, <italic>i</italic></sub> through Singular Value Decomposition by</p>
<disp-formula id="E17"><label>(15)</label><mml:math id="M48"><mml:mtable class="eqnarray" columnalign="right center left"><mml:mtr><mml:mtd><mml:mstyle displaystyle="true"><mml:munder class="msub"><mml:mrow><mml:mo class="qopname">min</mml:mo></mml:mrow><mml:mrow><mml:mstyle mathvariant='bold'><mml:mtext>z</mml:mtext></mml:mstyle></mml:mrow></mml:munder></mml:mstyle><mml:mrow><mml:mo>{</mml:mo><mml:mrow><mml:mstyle displaystyle="true"><mml:munderover accentunder="false" accent="false"><mml:mrow><mml:mo>&#x02211;</mml:mo></mml:mrow><mml:mrow><mml:mi>n</mml:mi><mml:mo>=</mml:mo><mml:mn>1</mml:mn></mml:mrow><mml:mrow><mml:mi>k</mml:mi></mml:mrow></mml:munderover></mml:mstyle><mml:mo stretchy='false'>|</mml:mo><mml:mo stretchy='false'>|</mml:mo><mml:msub><mml:mrow><mml:mstyle mathvariant='bold'><mml:mtext>p</mml:mtext></mml:mstyle></mml:mrow><mml:mrow><mml:mi>d</mml:mi><mml:mo>,</mml:mo><mml:mi>i</mml:mi></mml:mrow></mml:msub><mml:mo>-</mml:mo><mml:mstyle mathvariant='bold'><mml:mtext>z</mml:mtext></mml:mstyle><mml:mstyle mathsize="1.19em"><mml:mrow></mml:mrow></mml:mstyle><mml:msub><mml:mrow><mml:mstyle mathvariant='bold'><mml:mtext>p</mml:mtext></mml:mstyle></mml:mrow><mml:mrow><mml:mi>s</mml:mi><mml:mo>,</mml:mo><mml:mi>i</mml:mi></mml:mrow></mml:msub><mml:mstyle mathsize="1.19em"><mml:mrow></mml:mrow></mml:mstyle><mml:mo stretchy='false'>|</mml:mo><mml:msup><mml:mrow><mml:mo stretchy='false'>|</mml:mo></mml:mrow><mml:mrow><mml:mn>2</mml:mn></mml:mrow></mml:msup></mml:mrow><mml:mo>}</mml:mo></mml:mrow><mml:mo>,</mml:mo></mml:mtd></mml:mtr></mml:mtable></mml:math></disp-formula>
<p>where <bold>z</bold> is the homogeneous transformation including a scalar scaling factor, a rotation matrix, and a translation vector. The generalization procedure for GCs using LTE is detailed in Algorithm <xref ref-type="table" rid="T5">4</xref>. Given the source and target landmark sets, we first calculate the Laplacian (line 4 in Algorithm <xref ref-type="table" rid="T5">4</xref>) and then find the mapping <bold>z</bold> by applying least squares and then Singular Value Decomposition to deal with non-linear deformations. We then use <bold>z</bold> to map the directrix <bold>m</bold> &#x021A6; <bold>m</bold>&#x02032; and the cross-sectional function <inline-formula><mml:math id="M49"><mml:mrow><mml:mi mathvariant="-tex-caligraphic">P</mml:mi></mml:mrow><mml:mo>&#x021A6;</mml:mo><mml:msup><mml:mrow><mml:mrow><mml:mi mathvariant="-tex-caligraphic">P</mml:mi></mml:mrow></mml:mrow><mml:mrow><mml:mi>&#x02032;</mml:mi></mml:mrow></mml:msup></mml:math></inline-formula> (lines 6 and 7 in Algorithm <xref ref-type="table" rid="T5">4</xref>). The new generalized cylinder <inline-formula><mml:math id="M50"><mml:msubsup><mml:mrow><mml:mrow><mml:mi mathvariant="-tex-caligraphic">G</mml:mi></mml:mrow></mml:mrow><mml:mrow><mml:mi>u</mml:mi><mml:mo>,</mml:mo><mml:mi>v</mml:mi></mml:mrow><mml:mrow><mml:mi>&#x02032;</mml:mi></mml:mrow></mml:msubsup></mml:math></inline-formula> is then constructed using the mapped parameters (line 8 in Algorithm <xref ref-type="table" rid="T5">4</xref>). The reproduction methods from section 4.2 can be employed to reproduce the skill in <inline-formula><mml:math id="M51"><mml:msubsup><mml:mrow><mml:mrow><mml:mi mathvariant="-tex-caligraphic">G</mml:mi></mml:mrow></mml:mrow><mml:mrow><mml:mi>u</mml:mi><mml:mo>,</mml:mo><mml:mi>v</mml:mi></mml:mrow><mml:mrow><mml:mi>&#x02032;</mml:mi></mml:mrow></mml:msubsup></mml:math></inline-formula>.</p>
<table-wrap position="float" id="T5">
<caption><p><bold>Algorithm 4</bold> Generalization of GC using LTE</p></caption>
<table frame="hsides" rules="groups">
<tbody>
<tr><td align="left" valign="top"><monospace> 1:</monospace></td>
<td align="left" valign="top"><monospace>&#x000A0;&#x000A0;procedure <sc>GeneralizeGC-LTE</sc> </monospace></td></tr>
<tr><td align="left" valign="top"><monospace> 2:</monospace></td>
<td align="left" valign="top"><monospace> &#x000A0; Input:<inline-formula><mml:math id="M176"><mml:msub><mml:mrow><mml:mrow><mml:mi mathvariant="-tex-caligraphic">G</mml:mi></mml:mrow></mml:mrow><mml:mrow><mml:mi>u</mml:mi><mml:mo>,</mml:mo><mml:mi>v</mml:mi></mml:mrow></mml:msub></mml:math></inline-formula>, frames <inline-formula><mml:math id="M177"><mml:msub><mml:mrow><mml:mrow><mml:mi mathvariant="-tex-caligraphic">F</mml:mi></mml:mrow></mml:mrow><mml:mrow><mml:mi>u</mml:mi></mml:mrow></mml:msub></mml:math></inline-formula>, source &#x00026; target landmarks <inline-formula><mml:math id="M178"><mml:mrow><mml:mi mathvariant="-tex-caligraphic">L</mml:mi></mml:mrow></mml:math></inline-formula>, <inline-formula><mml:math id="M179"><mml:msup><mml:mrow><mml:mrow><mml:mi mathvariant="-tex-caligraphic">L</mml:mi></mml:mrow></mml:mrow><mml:mrow><mml:mi>&#x02032;</mml:mi></mml:mrow></mml:msup></mml:math></inline-formula> </monospace></td></tr>
<tr><td align="left" valign="top"><monospace> 3:</monospace></td>
<td align="left" valign="top"><monospace> &#x000A0; Output:target <inline-formula><mml:math id="M180"><mml:msubsup><mml:mrow><mml:mrow><mml:mi mathvariant="-tex-caligraphic">G</mml:mi></mml:mrow></mml:mrow><mml:mrow><mml:mi>u</mml:mi><mml:mo>,</mml:mo><mml:mi>v</mml:mi></mml:mrow><mml:mrow><mml:mi>&#x02032;</mml:mi></mml:mrow></mml:msubsup></mml:math></inline-formula>, target frames <inline-formula><mml:math id="M181"><mml:msubsup><mml:mrow><mml:mrow><mml:mi mathvariant="-tex-caligraphic">F</mml:mi></mml:mrow></mml:mrow><mml:mrow><mml:mi>u</mml:mi></mml:mrow><mml:mrow><mml:mi>&#x02032;</mml:mi></mml:mrow></mml:msubsup></mml:math></inline-formula> </monospace></td></tr>
<tr><td align="left" valign="top"><monospace> 4:</monospace></td>
<td align="left" valign="top"><monospace> &#x000A0; [<inline-formula><mml:math id="M182"><mml:mrow><mml:mstyle mathvariant="bold"><mml:mtext>L</mml:mtext></mml:mstyle><mml:mo>,</mml:mo><mml:mi>&#x00394;</mml:mi></mml:mrow><mml:mo>]</mml:mo><mml:mo>&#x02190;</mml:mo><mml:mstyle class="text"><mml:mtext class="textit" mathvariant="italic">calcLaplacian</mml:mtext></mml:mstyle><mml:mrow><mml:mo stretchy="false">(</mml:mo><mml:mrow><mml:mstyle mathvariant="bold"><mml:mtext>m</mml:mtext></mml:mstyle><mml:mrow><mml:mo stretchy="false">(</mml:mo><mml:mrow><mml:mi>u</mml:mi></mml:mrow><mml:mo stretchy="false">)</mml:mo></mml:mrow><mml:mo>,</mml:mo><mml:mrow><mml:mi mathvariant="-tex-caligraphic">L</mml:mi></mml:mrow><mml:mo>,</mml:mo><mml:msup><mml:mrow><mml:mrow><mml:mi mathvariant="-tex-caligraphic">L</mml:mi></mml:mrow></mml:mrow><mml:mrow><mml:mi>&#x02032;</mml:mi></mml:mrow></mml:msup></mml:mrow><mml:mo stretchy="false">)</mml:mo></mml:mrow></mml:math></inline-formula> <italic>using</italic> (12), (13) </monospace></td></tr>
<tr><td align="left" valign="top"><monospace> 5:</monospace></td>
<td align="left" valign="top"><monospace> &#x000A0; <inline-formula><mml:math id="M183"><mml:mstyle mathvariant="bold"><mml:mtext>z</mml:mtext></mml:mstyle><mml:mo>&#x02190;</mml:mo><mml:mstyle class="text"><mml:mtext class="textit" mathvariant="italic">SVD</mml:mtext></mml:mstyle><mml:mrow><mml:mo stretchy="false">(</mml:mo><mml:mrow><mml:msub><mml:mrow><mml:mstyle mathvariant="bold"><mml:mtext>p</mml:mtext></mml:mstyle></mml:mrow><mml:mrow><mml:mi>d</mml:mi></mml:mrow></mml:msub><mml:mo>,</mml:mo><mml:msub><mml:mrow><mml:mstyle mathvariant="bold"><mml:mtext>p</mml:mtext></mml:mstyle></mml:mrow><mml:mrow><mml:mi>s</mml:mi></mml:mrow></mml:msub><mml:mo>,</mml:mo><mml:mrow><mml:mi mathvariant="-tex-caligraphic">L</mml:mi></mml:mrow><mml:mo>,</mml:mo><mml:msup><mml:mrow><mml:mrow><mml:mi mathvariant="-tex-caligraphic">L</mml:mi></mml:mrow></mml:mrow><mml:mrow><mml:mi>&#x02032;</mml:mi></mml:mrow></mml:msup></mml:mrow><mml:mo stretchy="false">)</mml:mo></mml:mrow></mml:math></inline-formula> <italic>using</italic> (15) </monospace></td></tr>
<tr><td align="left" valign="top"><monospace> 6:</monospace></td>
<td align="left" valign="top"><monospace> &#x000A0; <bold>m</bold>&#x02032;(<italic>u</italic>)&#x02190;<bold>z</bold><bold>m</bold>(<italic>u</italic>) </monospace></td></tr>
<tr><td align="left" valign="top"><monospace> 7:</monospace></td>
<td align="left" valign="top"><monospace> &#x000A0; <inline-formula><mml:math id="M184"><mml:msup><mml:mrow><mml:mrow><mml:mi mathvariant="-tex-caligraphic">P</mml:mi></mml:mrow></mml:mrow><mml:mrow><mml:mi>&#x02032;</mml:mi></mml:mrow></mml:msup><mml:mrow><mml:mo stretchy="false">(</mml:mo><mml:mrow><mml:mi>u</mml:mi><mml:mo>,</mml:mo><mml:mi>v</mml:mi></mml:mrow><mml:mo stretchy="false">)</mml:mo></mml:mrow><mml:mo>&#x02190;</mml:mo><mml:mstyle mathvariant="bold"><mml:mtext>z</mml:mtext></mml:mstyle><mml:mstyle mathsize="1.19em"><mml:mrow></mml:mrow></mml:mstyle><mml:mrow><mml:mi mathvariant="-tex-caligraphic">P</mml:mi></mml:mrow><mml:mrow><mml:mo stretchy="false">(</mml:mo><mml:mrow><mml:mi>u</mml:mi><mml:mo>,</mml:mo><mml:mi>v</mml:mi></mml:mrow><mml:mo stretchy="false">)</mml:mo></mml:mrow><mml:mstyle mathsize="1.19em"><mml:mrow></mml:mrow></mml:mstyle></mml:math></inline-formula> </monospace></td></tr>
<tr><td align="left" valign="top"><monospace> 8:</monospace></td>
<td align="left" valign="top"><monospace> &#x000A0; <inline-formula><mml:math id="M185"><mml:msubsup><mml:mrow><mml:mi mathvariant="-tex-caligraphic">G</mml:mi></mml:mrow><mml:mrow><mml:mi>u</mml:mi><mml:mo>,</mml:mo><mml:mi>v</mml:mi></mml:mrow><mml:mrow><mml:mi>&#x02032;</mml:mi></mml:mrow></mml:msubsup><mml:mo>,</mml:mo><mml:mrow><mml:msup><mml:mrow><mml:mi mathvariant="-tex-caligraphic">F</mml:mi></mml:mrow><mml:mrow><mml:mo>&#x02032;</mml:mo></mml:mrow></mml:msup></mml:mrow><mml:mo>&#x02190;</mml:mo><mml:mstyle class="text"><mml:mtext class="textit" mathvariant="italic">makeGeneralizedCylinder</mml:mtext></mml:mstyle><mml:mrow><mml:mo stretchy="false">(</mml:mo><mml:mrow><mml:msup><mml:mrow><mml:mstyle mathvariant="bold"><mml:mtext>m</mml:mtext></mml:mstyle></mml:mrow><mml:mrow><mml:mi>&#x02032;</mml:mi></mml:mrow></mml:msup><mml:mrow><mml:mo stretchy="false">(</mml:mo><mml:mrow><mml:mi>u</mml:mi></mml:mrow><mml:mo stretchy="false">)</mml:mo></mml:mrow><mml:mo>,</mml:mo><mml:msup><mml:mrow><mml:mrow><mml:mi mathvariant="-tex-caligraphic">P</mml:mi></mml:mrow></mml:mrow><mml:mrow><mml:mi>&#x02032;</mml:mi></mml:mrow></mml:msup><mml:mrow><mml:mo stretchy="false">(</mml:mo><mml:mrow><mml:mi>u</mml:mi><mml:mo>,</mml:mo><mml:mi>v</mml:mi></mml:mrow><mml:mo stretchy="false">)</mml:mo></mml:mrow></mml:mrow><mml:mo stretchy="false">)</mml:mo></mml:mrow></mml:math></inline-formula> </monospace></td></tr>
</tbody>
</table>
</table-wrap>
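<p>The core of Algorithm <xref ref-type="table" rid="T5">4</xref>, the least-squares solve of (14), can be sketched as follows. For brevity this Python/NumPy sketch realizes the constraint block <bold>C</bold> as weighted selection rows and omits the SVD similarity-transform step of (15), so it is an illustrative approximation rather than the full LTE implementation:</p>

```python
import numpy as np

def lte_deform(P, landmark_idx, targets, weight=1e3):
    """Deform path P (n x d) in the spirit of Eq. (14): P_d = [L; C]^+ [Delta; C_t].
    Landmark rows softly pin the selected points to their new targets; the
    remaining points move so that the Laplacian coordinates Delta, i.e., the
    local shape of the path, change as little as possible."""
    P = np.asarray(P, float)
    n = len(P)
    L = np.zeros((n, n))                        # Eq. (13), uniform umbrella weights
    for i in range(n):
        nbrs = [j for j in (i - 1, i + 1) if 0 <= j < n]
        L[i, i] = 1.0
        for j in nbrs:
            L[i, j] = -1.0 / len(nbrs)
    delta = L @ P                               # Eq. (12)
    C = np.zeros((len(landmark_idx), n))
    for r, i in enumerate(landmark_idx):
        C[r, i] = weight                        # soft positional constraint rows
    A = np.vstack([L, C])
    b = np.vstack([delta, weight * np.asarray(targets, float)])
    return np.linalg.pinv(A) @ b                # dagger = Moore-Penrose pseudo-inverse
```

Pinning the endpoints of a straight five-point path to a lifted goal, for example, bends the whole path toward the new target while keeping its local shape nearly intact.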
<p>Similar to Algorithm <xref ref-type="table" rid="T4">3</xref>, we evaluated Algorithm <xref ref-type="table" rid="T5">4</xref> for the pick-and-place task over the same four final situations. The locations of the box and the ball were used as landmarks. The results of employing Algorithm <xref ref-type="table" rid="T5">4</xref> are depicted in Figure <xref ref-type="fig" rid="F10">10C</xref>. It can be seen that the skill generalizes successfully to the four desired novel locations of the box during reproduction, and that the deformed GCs are very similar to those obtained using Algorithm <xref ref-type="table" rid="T4">3</xref>. For a discussion of the robustness of the LTE approach compared with TPS, see section 5.2.</p>
</sec>
</sec>
<sec id="s5">
<title>5. Experimental Results</title>
<p>We conducted eight experiments on two robotic platforms to demonstrate the encoding of the GC model, as well as its reproduction and generalization capabilities, on multiple trajectory-based skills<xref ref-type="fn" rid="fn0008"><sup>8</sup></xref>. For each experiment, we gathered a set of demonstrations through kinesthetic teaching using either a 6-DOF Kinova Jaco2 robot or a 7-DOF Sawyer robotic arm (Figures <xref ref-type="fig" rid="F1">1</xref>, <xref ref-type="fig" rid="F3">3</xref>). The data was recorded at 100 Hz.</p>
<sec>
<title>5.1. Learning and Reproduction</title>
<p>In this section, we present examples of six trajectory-based skills encoded using the generalized cylinder model. In the first experiment, we performed a reaching skill toward an object (green sphere) from above (Figure <xref ref-type="fig" rid="F5">5</xref>). We present circular, elliptical and closed-spline cross-sections to showcase how GCs with different cross-section types encode the demonstration space. Ten reproductions of the skill from various initial poses produced by the fixed-ratio rule are depicted in Figure <xref ref-type="fig" rid="F9">9A</xref>.</p>
<p>The demonstrations recorded for the second experiment (Figure <xref ref-type="fig" rid="F8">8A</xref>) show an example of a movement that can start and end in a wide region of the task space but is constrained to pass through a narrow area in the middle. Such a movement resembles threading a needle or picking up an object mid-motion. The obtained GC extracts and preserves the important characteristics of the demonstrated skill, i.e., the precision and shape of the trajectory throughout the movement. Figure <xref ref-type="fig" rid="F9">9B</xref> shows 10 successful reproductions of the skill from various initial poses using the fixed-ratio rule. LfD approaches such as SEDS (Khansari-Zadeh and Billard, <xref ref-type="bibr" rid="B22">2011</xref>) that require a single end point fail to model this skill successfully.</p>
<fig id="F8" position="float">
<label>Figure 8</label>
<caption><p>Five real-world experiments performed using TLGC. The skills are encoded by extracting the directrix (blue) and generalized cylinder (gray) from demonstrations (red).</p></caption>
<graphic xlink:href="frobt-05-00132-g0008.tif"/>
</fig>
<fig id="F9" position="float">
<label>Figure 9</label>
<caption><p>Ten reproductions of the skill for each experiment. The reproduced trajectories (magenta) are generated using the fixed-ratio rule from arbitrary initial poses.</p></caption>
<graphic xlink:href="frobt-05-00132-g0009.tif"/>
</fig>
<fig id="F10" position="float">
<label>Figure 10</label>
<caption><p>Generalization of the learned pick-and-place skill over four novel final states. <bold>(A,B)</bold> Using TPS through Algorithm <xref ref-type="table" rid="T4">3</xref>, and <bold>(C,D)</bold> using LTE through Algorithm <xref ref-type="table" rid="T5">4</xref>. New locations of the box in <bold>(A,C)</bold> are close to the original location of the box, while the new box locations are farther in <bold>(B,D)</bold>.</p></caption>
<graphic xlink:href="frobt-05-00132-g0010.tif"/>
</fig>
<p>The third experiment (Figure <xref ref-type="fig" rid="F8">8B</xref>) shows a reaching/placing skill similar to the first experiment but with a curved trajectory. The robot learns to exploit a wider demonstration space while reaching for the object, maintaining precise trajectories near it. Figure <xref ref-type="fig" rid="F9">9C</xref> illustrates ten reproductions of the skill from various initial poses produced by the fixed-ratio rule<xref ref-type="fn" rid="fn0009"><sup>9</sup></xref>.</p>
<p>The fourth experiment (Figure <xref ref-type="fig" rid="F8">8C</xref>) shows a circular movement around an obstacle, which is unknown to the robot. Since the given demonstrations avoid the obstacle, and the encoded GC guarantees that all the reproductions of the task remain inside the cylinder, the reproduced path is guaranteed to be collision-free. Figure <xref ref-type="fig" rid="F9">9D</xref> illustrates ten reproductions of the skill from various initial poses produced by the fixed-ratio rule. It can be seen that all the reproductions stay inside the boundaries while exploiting the demonstration space represented by GC. Figure <xref ref-type="fig" rid="F1">1</xref> (middle) also shows a snapshot captured during the reproduction of the skill.</p>
<p>The fifth task represents a pick-and-place movement in which the robot picks up an object and places it in a box (Figure <xref ref-type="fig" rid="F8">8D</xref>). The encoded GC shows that the initial and final poses of the movement are the main constraints of the task while in the middle of the trajectory, the end-effector can pass through a wider space while preserving the shape of the movement. Figure <xref ref-type="fig" rid="F1">1</xref> (left) shows a snapshot during the reproduction of this skill. Figure <xref ref-type="fig" rid="F9">9E</xref> depicts ten reproductions of the skill generated by the fixed-ratio rule.</p>
<p>The sixth experiment illustrates a pressing skill with multiple goals (Figure <xref ref-type="fig" rid="F1">1</xref>, right). The robot starts from a wide demonstration space, reaches the first peg, presses it down, retracts from it, reaches for the second peg, and presses it down (Figure <xref ref-type="fig" rid="F8">8E</xref>). Unlike many existing LfD approaches, TLGC can handle this skill even though it includes more than one goal, and it does not require skill segmentation. In addition, to show that the proposed approach is robot-agnostic, we conducted this experiment on a 7-DOF Sawyer robot.</p>
</sec>
<sec>
<title>5.2. Generalization</title>
<p>In this section, we demonstrate the generalization capability of the proposed approach using data from the fifth experiment. When a change in the location of the objects is detected during reproduction, generalization is performed to satisfy the new conditions. After encoding the skill in the fifth experiment (Figure <xref ref-type="fig" rid="F8">8D</xref>), we relocated the box four times and each time used Algorithm <xref ref-type="table" rid="T4">3</xref> to adapt the encoded model to the new situation. The results can be seen in Figure <xref ref-type="fig" rid="F10">10A</xref>. We then repeated the experiment employing Algorithm <xref ref-type="table" rid="T5">4</xref>; Figure <xref ref-type="fig" rid="F10">10C</xref> illustrates the results. In both generalization experiments, the overall shape of the generalized cylinders is preserved while expanding or contracting appropriately for the different final poses. It can also be seen that the two experiments resulted in very similar GCs.</p>
<p>As mentioned before, one of the drawbacks of the non-rigid registration technique is that it leads to non-linear deformations as the distance between the source and target landmark locations increases. Figure <xref ref-type="fig" rid="F10">10B</xref> shows an example of such instability caused by increasing the distance of the target landmark (i.e., the box) from the source landmark (i.e., the initial location of the box). Although the magenta GC in Figure <xref ref-type="fig" rid="F10">10B</xref> satisfies the initial and final states of the task in the new environment, it does not preserve the shape of the skill. On the other hand, when employing Algorithm <xref ref-type="table" rid="T5">4</xref> for the same source and target landmark locations, the results depicted in Figure <xref ref-type="fig" rid="F10">10D</xref> indicate that LTE is more robust to the increase in the distance between the source and target landmarks. Unlike TPS, the magenta GC generalized using LTE not only satisfies the initial and final states of the skill but also preserves the shape of the skill as closely as possible. As a direct consequence of the ratio rule for reproduction, this enables reproduction of the skill in unforeseen situations while preserving its important features.</p>
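The shape-preserving behavior of LTE can be illustrated with a minimal sketch. The following is not the paper's Algorithm 4, only a small Laplacian-trajectory-editing example under stated assumptions: a uniform discrete Laplacian, soft positional constraints enforced by a large weight `w`, and the hypothetical function name `lte_generalize`.

```python
import numpy as np

def lte_generalize(traj, constraints, w=1000.0):
    """Minimal Laplacian trajectory editing (LTE) sketch.

    traj        : list of n 3D points
    constraints : {index: new 3D point} positional constraints (landmarks)
    w           : soft-constraint weight (assumption; large = near-hard)

    Preserves the delta coordinates (local shape) of the trajectory in a
    least-squares sense while meeting the positional constraints.
    """
    P = np.asarray(traj, float)
    n = len(P)
    L = np.zeros((n, n))
    for i in range(1, n - 1):                  # uniform discrete Laplacian
        L[i, i - 1], L[i, i], L[i, i + 1] = -0.5, 1.0, -0.5
    delta = L @ P                              # shape descriptors to preserve
    rows, rhs = [L], [delta]
    for idx, pt in constraints.items():        # weighted positional rows
        row = np.zeros((1, n))
        row[0, idx] = w
        rows.append(row)
        rhs.append(w * np.asarray(pt, float)[None, :])
    A, b = np.vstack(rows), np.vstack(rhs)
    return np.linalg.lstsq(A, b, rcond=None)[0]
```

Moving only the final landmark bends the whole trajectory smoothly toward the new goal instead of distorting it locally, which mirrors the robustness observed for LTE in Figure 10D.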
</sec>
<sec>
<title>5.3. Comparison to DMPs and GMM/GMR</title>
<p>In this section, we compare the presented approach to two widely-used LfD techniques, Dynamic Movement Primitives (DMPs) (Ijspeert et al., <xref ref-type="bibr" rid="B21">2013</xref>), and Gaussian Mixture Model/Gaussian Mixture Regression (GMM/GMR) (Calinon et al., <xref ref-type="bibr" rid="B11">2007</xref>). Since the original DMP representation (Ijspeert et al., <xref ref-type="bibr" rid="B21">2013</xref>) is limited to learning from a single demonstration, we compare our approach to a variant of DMPs that constructs a representation from a set of demonstrations (Calinon et al., <xref ref-type="bibr" rid="B12">2010</xref>). We performed two experiments comparing the behavior of the above approaches to TLGC.</p>
<sec>
<title>5.3.1. Comparison I</title>
<p>Figure <xref ref-type="fig" rid="F11">11A</xref> shows two demonstrations of a skill performed by a teacher. The demonstrations are simple direct trajectories (i.e., movement of the robot&#x00027;s end-effector from left to right). Unlike many common scenarios, in this case the size of the demonstration space does not decrease at the end of the skill. When providing such demonstrations, the user is showing not only the shape of the movement but also its boundaries. The goal of this experiment is to show that existing approaches have not been designed to deal with the demonstration space, a fact that serves as one of the main motivations for proposing TLGC.</p>
<fig id="F11" position="float">
<label>Figure 11</label>
<caption><p>Results of the comparison between GMM, DMPs, and TLGC in section 5.3.1 <bold>(A)</bold> demonstrations <bold>(B&#x02013;D)</bold> GMM with two, four, and eight components respectively, <bold>(E)</bold> generalized cylinder and ten reproductions <bold>(F&#x02013;H)</bold> DMPs with two, four, and eight attractors and 10 reproductions, respectively.</p></caption>
<graphic xlink:href="frobt-05-00132-g0011.tif"/>
</fig>
<p>We first employed TLGC with a circular cross-section and generated 10 reproductions from various initial poses (Figure <xref ref-type="fig" rid="F11">11E</xref>). The encoded GC represents the demonstration space, and the reproductions maintain the important characteristics of the movement. TLGC requires no parameter tuning and can reproduce multiple nonidentical yet valid solutions in the demonstration space. For GMM/GMR we tuned the number of Gaussian components to 2, 4, and 8, in Figures <xref ref-type="fig" rid="F11">11B&#x02013;D</xref>, respectively, and for DMPs, we tuned the number of attractors to 2, 4, and 8, in Figures <xref ref-type="fig" rid="F11">11F&#x02013;H</xref>, respectively. The results show that both DMPs and GMM/GMR are capable of learning the skill. However, unlike TLGC, GMM/GMR reproduces a single solution for this task. The trajectory reproduced by GMM/GMR imitates the demonstrations; however, it oscillates in the middle under the influence of the different Gaussian components placed across the demonstration space. As shown in Figures <xref ref-type="fig" rid="F11">11C,D</xref>, increasing the number of components to four and then to eight fails to improve the shape of the reproduced trajectory: although the amplitude of the oscillation decreases, its frequency increases. For DMPs, increasing the number of attractors helps with imitating the given demonstrations. However, starting from different initial points, the reproduced trajectories converge to an average solution (similar to the directrix in TLGC). This experiment highlights the role and the importance of the demonstration space in the behavior of the learning approach.</p>
</sec>
<sec>
<title>5.3.2. Comparison II</title>
<p>Figure <xref ref-type="fig" rid="F12">12</xref> shows a comparison of results on the data from experiment four. The demonstrations and the object are shown in Figure <xref ref-type="fig" rid="F12">12A</xref>. For TLGC, we encoded the demonstrations using a generalized cylinder with a closed-spline cross-section and generated five reproductions from various initial poses (Figure <xref ref-type="fig" rid="F12">12B</xref>). As mentioned before, TLGC requires no parameter tuning beyond specifying the cross-section type, and by extracting the characteristics of the movement it learns to avoid the obstacle. For GMM/GMR we tuned the number of Gaussian components to 5 and 10, in Figures <xref ref-type="fig" rid="F12">12C,D</xref>, respectively, and for DMPs, we tuned the number of attractors to 5 and 10, in Figures <xref ref-type="fig" rid="F12">12E,F</xref>, respectively. The results show that both DMPs and GMM/GMR can learn the skill. In Figure <xref ref-type="fig" rid="F12">12C</xref>, and to a lesser degree in Figure <xref ref-type="fig" rid="F12">12D</xref>, GMM/GMR produces a more angular trajectory than seen in the demonstrations. In Figure <xref ref-type="fig" rid="F12">12E</xref> we see that the DMP reproductions deviate from the demonstrations and collide with the object. In all four examples, we also observe that the five reproductions starting from different initial locations converge to a single path. In contrast, the reproductions by TLGC produce a more natural set of motions that are not identical and exploit the demonstration space while preserving the shape of the skill. Note that TLGC can reproduce analogous trajectories by using the convergent reproduction strategy if that behavior is desirable.</p>
<fig id="F12" position="float">
<label>Figure 12</label>
<caption><p>Results of the comparison between GMM, DMPs, and TLGC in section 5.3.2 <bold>(A)</bold> demonstrations <bold>(B)</bold> GC and five reproductions, <bold>(C,D)</bold> GMM with five and ten components, respectively, <bold>(E,F)</bold> DMPs with five and ten attractors, respectively and five reproductions.</p></caption>
<graphic xlink:href="frobt-05-00132-g0012.tif"/>
</fig>
</sec>
</sec>
</sec>
<sec id="s6">
<title>6. Skill Refinement</title>
<p>So far, we have shown that TLGC can be used to extract, reproduce, and generalize skills from <italic>reliable</italic> human demonstrations. In practice, however, due to morphological differences between the robot and the human, user-provided demonstrations of a task are usually <italic>sub-optimal</italic>. Multiple solutions have been proposed to address this problem. Argall et al. (<xref ref-type="bibr" rid="B4">2009a</xref>) showed that a skill can be corrected by having the teacher assign a weight to each demonstration based on its quality. However, assigning weights to demonstrations is not trivial. Another solution is to start from the sub-optimal model and explore for better solutions. For instance, an LfD approach can be combined with Reinforcement Learning to refine the sub-optimal behavior of the model (Kormushev et al., <xref ref-type="bibr" rid="B23">2010</xref>). This family of solutions usually suffers from two drawbacks: (a) a manually engineered reward function is required, and (b) finding an improved solution entails extensive trial and error. An alternative approach is to refine the skill through physical human-robot interaction (Argall et al., <xref ref-type="bibr" rid="B6">2010</xref>). In this work, we differentiate between two types of refinement. <italic>Incremental refinement</italic> occurs during the learning process: the user applies modifications while the robot is replaying a demonstration (or a reproduction), and the updated data is used to retrain the model. Once a model is learned, <italic>constraint-based refinement</italic> can be used to refine the model further by applying new constraints. In this section, we show that both approaches can be applied to TLGC. Note that we have selected simple tasks for analysis and illustrative purposes.</p>
<sec>
<title>6.1. Incremental Refinement</title>
<p>In its first form, skill refinement can be performed during the learning process incrementally. After encoding the skill using TLGC, the user identifies a <italic>target trajectory</italic> (either a demonstration or a reproduction) that needs to be modified. We execute the target trajectory with the robot in compliant control mode, allowing the joints and the end-effector position to be adjusted while the robot is moving. Therefore, while the robot is replaying the target trajectory, the teacher can reshape the movement through kinesthetic correction. The obtained trajectory either replaces the initial demonstration or is added to the set as a new demonstration. Given the new set, the algorithm updates the model and reproduces new trajectories that inherit the applied corrections.</p>
<p>To evaluate this method, we initially demonstrated three simple trajectories, encoded the skill with a closed-spline cross-section (Figure <xref ref-type="fig" rid="F14">14A</xref>), and reproduced the skill using the fixed-ratio rule (Figure <xref ref-type="fig" rid="F14">14B</xref>). Now assume we would like the robot end-effector to dip downwards in the middle of the first (top) demonstration. While the robot is replaying the target demonstration, the teacher reshapes the demonstration through kinesthetic correction. Figure <xref ref-type="fig" rid="F14">14C</xref> illustrates the original and refined demonstrations. Figure <xref ref-type="fig" rid="F14">14D</xref> shows the updated GC after replacing the target with the refined demonstration, as well as a reproduction of the skill from a given initial point that reflects the performed refinements<sup>9</sup>. This experiment shows that TLGC can be used to refine a learned skill incrementally. Although many approaches could benefit from a similar process (Argall et al., <xref ref-type="bibr" rid="B6">2010</xref>), our representation is visually perceivable and has the potential to enable even non-experts to observe and interpret the effects of the refinement on the model.</p>
</sec>
<sec>
<title>6.2. Constraint-Based Refinement</title>
<p>In this section, we show that skill refinement can be performed after the model has been encoded by applying new constraints. We consider the skill from the previous experiment (Figure <xref ref-type="fig" rid="F15">15A</xref>). Assume that, during a reproduction, the user observes and kinesthetically modifies the reproduced trajectory. When a correction is imposed, we compare the original and the modified trajectories, calculate point-to-point translation vectors <bold>v</bold><sub><italic>i</italic></sub>, and form a refinement matrix <inline-formula><mml:math id="M52"><mml:mover accent="true"><mml:mrow><mml:mi>V</mml:mi></mml:mrow><mml:mo>^</mml:mo></mml:mover></mml:math></inline-formula> by concatenating the vectors. Figure <xref ref-type="fig" rid="F13">13</xref> depicts the formation of the refinement matrix. The refinement matrix acts as a geometric constraint on the GC that affects future reproductions. The green trajectory in Figure <xref ref-type="fig" rid="F15">15C</xref> shows how the reproduction in Figure <xref ref-type="fig" rid="F15">15B</xref> is refined by the teacher through kinesthetic correction; the teacher has applied downward forces (&#x02212;<italic>x</italic><sub>3</sub> direction) to keep the end-effector at a certain level. 
We calculated the refinement matrix <inline-formula><mml:math id="M53"><mml:mover accent="true"><mml:mrow><mml:mi>V</mml:mi></mml:mrow><mml:mo>^</mml:mo></mml:mover><mml:mo>=</mml:mo><mml:mrow><mml:mo>[</mml:mo><mml:mrow><mml:msub><mml:mrow><mml:mstyle class="text"><mml:mtext mathvariant="bold">v</mml:mtext></mml:mstyle></mml:mrow><mml:mrow><mml:mn>1</mml:mn></mml:mrow></mml:msub><mml:mo>,</mml:mo><mml:mo>&#x02026;</mml:mo><mml:mo>,</mml:mo><mml:msub><mml:mrow><mml:mstyle class="text"><mml:mtext mathvariant="bold">v</mml:mtext></mml:mstyle></mml:mrow><mml:mrow><mml:mi>n</mml:mi></mml:mrow></mml:msub></mml:mrow><mml:mo>]</mml:mo></mml:mrow><mml:mo>&#x02208;</mml:mo><mml:msup><mml:mrow><mml:mi>&#x0211D;</mml:mi></mml:mrow><mml:mrow><mml:mn>3</mml:mn><mml:mo>&#x000D7;</mml:mo><mml:mi>n</mml:mi></mml:mrow></mml:msup></mml:math></inline-formula> and applied it as a constraint to our fixed-ratio rule. A reproduction remains unaffected if it is generated below the constraining plane; this case can be seen as the lower reproduction in Figure <xref ref-type="fig" rid="F15">15D</xref>. On the other hand, if a reproduction intersects the constraining plane, the refinement matrix is applied to it. The upper reproduction in Figure <xref ref-type="fig" rid="F15">15D</xref> shows the effect of the constraint, while the dashed line shows the reproduction without the constraint applied<sup>9</sup>. This experiment indicates that, using constraint-based refinement, the user can apply new constraints to the model without modifying it. One of the advantages of this approach is that the imposed constraints can later be removed or combined with other constraints without updating the encoded model. To our knowledge, there is no other LfD approach with similar capabilities.</p>
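The formation and application of the refinement matrix can be sketched as follows. This is an illustrative reconstruction: the function names are hypothetical, and the constraint test is simplified to a caller-supplied per-point predicate (e.g., a test against the constraining plane) rather than the intersection check on the whole reproduction.

```python
def refinement_matrix(original, modified):
    """V-hat: column i is the translation vector v_i = modified_i - original_i."""
    return [[m[k] - o[k] for k in range(3)]
            for o, m in zip(original, modified)]

def apply_refinement(repro, vhat, violates):
    """Apply v_i only at points where the reproduction violates the
    constraint; points already satisfying it are left unchanged.

    violates : predicate on a 3D point (assumption: stands in for the
               constraining-plane test of the paper).
    """
    return [[p[k] + v[k] for k in range(3)] if violates(p) else list(p)
            for p, v in zip(repro, vhat)]
```

Because the matrix is stored separately from the encoded GC, it can be dropped or composed with other refinement matrices without retraining the model, which is the property highlighted above.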
<fig id="F13" position="float">
<label>Figure 13</label>
<caption><p>Formation of the refinement matrix from the original and modified reproductions.</p></caption>
<graphic xlink:href="frobt-05-00132-g0013.tif"/>
</fig>
<fig id="F14" position="float">
<label>Figure 14</label>
<caption><p>Incremental refinement of a skill by correcting a demonstration. <bold>(A)</bold> Demonstrations (red), directrix (blue) and the obtained GC, <bold>(B)</bold> reproduction from a random pose (magenta), <bold>(C)</bold> first demonstration was refined (red) by user, <bold>(D)</bold> updated GC, directrix, and a new reproduction.</p></caption>
<graphic xlink:href="frobt-05-00132-g0014.tif"/>
</fig>
<p>Note that the refined reproduction might not be continuous at the point where the constraint first applies (see the top reproduction in Figure <xref ref-type="fig" rid="F15">15D</xref>). However, our experiments show that the low-level controller of the robot can handle this during execution<sup>9</sup>. Another solution is to smooth the trajectory before execution.</p>
<fig id="F15" position="float">
<label>Figure 15</label>
<caption><p>Constraint-based refinement of a skill by correcting a reproduction. <bold>(A)</bold> Demonstrations (red), directrix (blue), and the obtained GC, <bold>(B)</bold> reproduction from a random pose (magenta), <bold>(C)</bold> refined reproduction (green), <bold>(D)</bold> two new reproductions; upper one affected by the refinement, while lower one is not.</p></caption>
<graphic xlink:href="frobt-05-00132-g0015.tif"/>
</fig>
</sec>
<sec>
<title>6.3. Comparison to GMM-wEM</title>
<p>As mentioned before, the approach proposed by Argall et al. (<xref ref-type="bibr" rid="B6">2010</xref>) enables a human to refine the trajectories during the learning process using tactile feedback. The refined trajectories are later used as new demonstrations to reproduce the skill through incremental learning. Their approach, GMM-wEM, combines GMM/GMR with a modified version of the Expectation-Maximization algorithm that uses a forgetting factor to assign weight values to the corrected points in the dataset. In this section, we compare TLGC to GMM-wEM on two refinement experiments.</p>
<sec>
<title>6.3.1. Comparison I</title>
<p>In the first comparison, we repeated the third experiment from section 5.1, refined one of the four demonstrations, replaced it in the set, and retrained the model. We performed this experiment using both TLGC and GMM-wEM as presented in Argall et al. (<xref ref-type="bibr" rid="B6">2010</xref>). Figures <xref ref-type="fig" rid="F16">16A,B</xref> show that the model encoded using TLGC adapts to the new set and updates the demonstration space. It can be seen that the directrix also moves toward the new demonstration. Figures <xref ref-type="fig" rid="F16">16C,D</xref> show the results for GMM-wEM, where the encoded Gaussian model with three components also adapts to the new set of demonstrations. However, the reproduction, both before and after the refinement, oscillates as it is pulled toward the different Gaussian components. Although the reproduction achieves the goal of the task, it is dissimilar to the demonstrations.</p>
<fig id="F16" position="float">
<label>Figure 16</label>
<caption><p>Comparing incremental skill refinement on the reaching skill in section 5.1. <bold>(A,B)</bold> results using TLGC, <bold>(C,D)</bold> results using GMM-wEM.</p></caption>
<graphic xlink:href="frobt-05-00132-g0016.tif"/>
</fig>
</sec>
<sec>
<title>6.3.2. Comparison II</title>
<p>In the second comparison, we repeated the refinement experiment in section 6.1. Since the refined demonstration replaces the original trajectory in the set, for GMM-wEM we assign weight values of 0 and 1 to the original and refined demonstrations, respectively. As depicted in Figures <xref ref-type="fig" rid="F17">17A,B</xref>, given the refined trajectory, the model encoded using TLGC adapts to the new set and can reproduce new trajectories accordingly. However, as illustrated in Figures <xref ref-type="fig" rid="F17">17C,D</xref>, because of the wide demonstration space, the model encoded using GMM-wEM cannot represent the skill properly, and the reproduction of the skill oscillates as it is pulled toward the different Gaussian components. The trajectories reproduced using the updated GC exploit the whole demonstration space while maintaining the main and refined characteristics of the skill. GMM-wEM, on the other hand, fails to represent the demonstration space.</p>
<fig id="F17" position="float">
<label>Figure 17</label>
<caption><p>Comparing incremental skill refinement on the first experiment in section 6.1. <bold>(A,B)</bold> results using TLGC, <bold>(C,D)</bold> results using GMM-wEM.</p></caption>
<graphic xlink:href="frobt-05-00132-g0017.tif"/>
</fig>
</sec>
</sec>
</sec>
<sec id="s7">
<title>7. Obstacle Avoidance</title>
<p>In this section, we discuss a few strategies for dealing with stationary obstacles using TLGC. Handling dynamic obstacles is outside the scope of this article. A stationary obstacle can either be known and present during both the demonstration and reproduction phases, or appear only during the reproduction phase. We refer to the first case as <italic>known</italic> obstacles and the second as <italic>unknown</italic> obstacles. We discuss each scenario separately in the following sections.</p>
<sec>
<title>7.1. Avoiding Known Obstacles</title>
<p>For a known stationary obstacle that is present during both the demonstration and the reproduction, we can safely assume that it was avoided by the teacher during the demonstration phase. This means that neither the set of demonstrations nor the formed demonstration space intersects with the obstacle. As we have shown in experiment four (Figure <xref ref-type="fig" rid="F8">8C</xref>), the constructed GC with convex cross-sections avoids the obstacle. Therefore, the trajectories reproduced from this GC will also avoid the obstacle as long as it remains stationary. While this feature is inherent to our representation, the process becomes more complicated when the obstacle is not present during the demonstration phase. Note that a GC with circular cross-sections that encodes the skill might intersect with the obstacle, since it can bound an unintended volume as the demonstration space (Figure <xref ref-type="fig" rid="F4">4</xref>).</p>
</sec>
<sec>
<title>7.2. Avoiding Unknown Obstacles</title>
<p>It should be noted that the term <italic>unknown</italic> obstacle refers to the scenario where a stationary obstacle is not present during the demonstration but can be detected during the reproduction. We propose two methods for avoiding unknown obstacles that intersect with the constructed generalized cylinder.</p>
<sec>
<title>7.2.1. Method I</title>
<p>Consider the GC constructed for the reaching task in the first experiment. Now we assume an obstacle is detected during the reproduction. In the first step, we estimate a bounding sphere for the detected obstacle. Among several existing algorithms, we use Fischer&#x00027;s algorithm (Fischer et al., <xref ref-type="bibr" rid="B16">2003</xref>), which is fast and efficient. Geometrically, an obstacle can either be inside the GC or partially intersect with it. Both cases are illustrated in Figures <xref ref-type="fig" rid="F18">18A,C</xref>. We intentionally select an identical initial pose for which, in both cases, reproducing the skill using the fixed-ratio rule (with ratio &#x003B7;<sub>0</sub>) causes a collision with the obstacle (magenta trajectories in Figures <xref ref-type="fig" rid="F18">18A,C</xref>).</p>
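For illustration, a bounding sphere can be estimated cheaply from the obstacle's point samples. The sketch below uses Ritter's two-pass approximation as a lightweight stand-in for the exact algorithm of Fischer et al. cited above; it is not that algorithm, and the function name is hypothetical.

```python
import math

def ritter_bounding_sphere(points):
    """Approximate bounding sphere of a 3D point set (Ritter-style).

    Stand-in for an exact minimum-enclosing-sphere algorithm: pick two
    far-apart points to seed the sphere, then grow it to cover outliers.
    The result may be slightly larger than the true minimum sphere.
    """
    p = points[0]
    q = max(points, key=lambda x: math.dist(p, x))   # far from p
    s = max(points, key=lambda x: math.dist(q, x))   # far from q
    centre = [(q[k] + s[k]) / 2 for k in range(3)]
    radius = math.dist(q, s) / 2
    for x in points:                                  # grow to cover stragglers
        d = math.dist(centre, x)
        if d > radius:
            radius = (radius + d) / 2
            centre = [centre[k] + (d - radius) / d * (x[k] - centre[k])
                      for k in range(3)]
    return centre, radius
```

The slight over-estimate is harmless here, since the method below already allows inflating the sphere to keep a safety margin.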
<fig id="F18" position="float">
<label>Figure 18</label>
<caption><p><bold>(A,C)</bold> Reproductions using fixed-ratio strategy (magenta) collide with obstacles. Adapted reproductions using adaptive-ratio strategy (green) avoid obstacles. <bold>(B,D)</bold> Planar views for calculating final ratio. <bold>(E)</bold> Pick-and-place in the presence of an obstacle. GC was deformed using Algorithm <xref ref-type="table" rid="T5">4</xref> to avoid collision.</p></caption>
<graphic xlink:href="frobt-05-00132-g0018.tif"/>
</fig>
<p>Our goal is to generate a trajectory that adapts to the new condition and avoids the obstacle while preserving the main characteristics of the skill as much as possible. Given the size and location of the obstacle (i.e., the bounding sphere), our method calculates a new ratio &#x003B7;<sub><italic>f</italic></sub> and a decay constant &#x003B3; accordingly and employs the adaptive-ratio rule (section 4.2.1) to generate a collision-free reproduction of the skill.</p>
<p>We use the diagrams depicted in Figures <xref ref-type="fig" rid="F18">18B,D</xref> to explain our method for estimating the final ratio &#x003B7;<sub><italic>f</italic></sub> in both cases. Assume the sphere representing the obstacle is centered at point <italic>c</italic><sub><italic>o</italic></sub> with radius <italic>r</italic><sub><italic>o</italic></sub>. We first find <italic>c</italic><sub><italic>i</italic></sub> the closest point on the directrix to the center of the sphere <italic>c</italic><sub><italic>o</italic></sub>. We use the corresponding cross-section of the GC centered at <italic>c</italic><sub><italic>i</italic></sub> for our calculation. In the next step, we find <italic>p</italic><sub><italic>i</italic></sub> the closest point on the reproduced trajectory with the fixed-ratio rule to <italic>c</italic><sub><italic>o</italic></sub>. The cosine of &#x003B8; which is the angle between <inline-formula><mml:math id="M54"><mml:mover accent="true"><mml:mrow><mml:msub><mml:mrow><mml:mi>c</mml:mi></mml:mrow><mml:mrow><mml:mi>i</mml:mi></mml:mrow></mml:msub><mml:msub><mml:mrow><mml:mi>c</mml:mi></mml:mrow><mml:mrow><mml:mi>o</mml:mi></mml:mrow></mml:msub></mml:mrow><mml:mo>&#x00304;</mml:mo></mml:mover></mml:math></inline-formula> and <inline-formula><mml:math id="M55"><mml:mover accent="true"><mml:mrow><mml:msub><mml:mrow><mml:mi>c</mml:mi></mml:mrow><mml:mrow><mml:mi>i</mml:mi></mml:mrow></mml:msub><mml:msub><mml:mrow><mml:mi>p</mml:mi></mml:mrow><mml:mrow><mml:mi>i</mml:mi></mml:mrow></mml:msub></mml:mrow><mml:mo>&#x00304;</mml:mo></mml:mover></mml:math></inline-formula> can be calculated as <inline-formula><mml:math id="M56"><mml:mo class="qopname">cos</mml:mo><mml:mi>&#x003B8;</mml:mi><mml:mo>=</mml:mo><mml:mrow><mml:mo stretchy="false">(</mml:mo><mml:mrow><mml:mover 
accent="true"><mml:mrow><mml:msub><mml:mrow><mml:mi>c</mml:mi></mml:mrow><mml:mrow><mml:mi>i</mml:mi></mml:mrow></mml:msub><mml:msub><mml:mrow><mml:mi>c</mml:mi></mml:mrow><mml:mrow><mml:mi>o</mml:mi></mml:mrow></mml:msub></mml:mrow><mml:mo class="qopname">&#x00304;</mml:mo></mml:mover></mml:mrow><mml:mo stretchy="false">)</mml:mo></mml:mrow><mml:mo>.</mml:mo><mml:mrow><mml:mo stretchy="false">(</mml:mo><mml:mrow><mml:mover accent="true"><mml:mrow><mml:msub><mml:mrow><mml:mi>c</mml:mi></mml:mrow><mml:mrow><mml:mi>i</mml:mi></mml:mrow></mml:msub><mml:msub><mml:mrow><mml:mi>p</mml:mi></mml:mrow><mml:mrow><mml:mi>i</mml:mi></mml:mrow></mml:msub></mml:mrow><mml:mo class="qopname">&#x00304;</mml:mo></mml:mover></mml:mrow><mml:mo stretchy="false">)</mml:mo></mml:mrow><mml:mo>/</mml:mo><mml:mrow><mml:mo stretchy="false">(</mml:mo><mml:mrow><mml:mo>|</mml:mo><mml:mo>|</mml:mo><mml:mover accent="true"><mml:mrow><mml:msub><mml:mrow><mml:mi>c</mml:mi></mml:mrow><mml:mrow><mml:mi>i</mml:mi></mml:mrow></mml:msub><mml:msub><mml:mrow><mml:mi>c</mml:mi></mml:mrow><mml:mrow><mml:mi>o</mml:mi></mml:mrow></mml:msub></mml:mrow><mml:mo class="qopname">&#x00304;</mml:mo></mml:mover><mml:mo>|</mml:mo><mml:mo>|</mml:mo><mml:mo>|</mml:mo><mml:mo>|</mml:mo><mml:mover accent="true"><mml:mrow><mml:msub><mml:mrow><mml:mi>c</mml:mi></mml:mrow><mml:mrow><mml:mi>i</mml:mi></mml:mrow></mml:msub><mml:msub><mml:mrow><mml:mi>p</mml:mi></mml:mrow><mml:mrow><mml:mi>i</mml:mi></mml:mrow></mml:msub></mml:mrow><mml:mo class="qopname">&#x00304;</mml:mo></mml:mover><mml:mo>|</mml:mo><mml:mo>|</mml:mo></mml:mrow><mml:mo stretchy="false">)</mml:mo></mml:mrow></mml:math></inline-formula>. 
Since <inline-formula><mml:math id="M57"><mml:mo>|</mml:mo><mml:mo>|</mml:mo><mml:mover accent="true"><mml:mrow><mml:msub><mml:mrow><mml:mi>c</mml:mi></mml:mrow><mml:mrow><mml:mi>o</mml:mi></mml:mrow></mml:msub><mml:msub><mml:mrow><mml:mi>q</mml:mi></mml:mrow><mml:mrow><mml:mi>i</mml:mi></mml:mrow></mml:msub></mml:mrow><mml:mo>&#x00304;</mml:mo></mml:mover><mml:mo>|</mml:mo><mml:mo>|</mml:mo><mml:mo>=</mml:mo><mml:msub><mml:mrow><mml:mi>r</mml:mi></mml:mrow><mml:mrow><mml:mi>o</mml:mi></mml:mrow></mml:msub></mml:math></inline-formula>, we can find the points where <inline-formula><mml:math id="M58"><mml:mover accent="true"><mml:mrow><mml:msub><mml:mrow><mml:mi>c</mml:mi></mml:mrow><mml:mrow><mml:mi>i</mml:mi></mml:mrow></mml:msub><mml:msub><mml:mrow><mml:mi>p</mml:mi></mml:mrow><mml:mrow><mml:mi>i</mml:mi></mml:mrow></mml:msub></mml:mrow><mml:mo>&#x00304;</mml:mo></mml:mover></mml:math></inline-formula> intersects with the obstacle by solving the quadratic equation given by <inline-formula><mml:math id="M59"><mml:msup><mml:mrow><mml:mi>x</mml:mi></mml:mrow><mml:mrow><mml:mn>2</mml:mn></mml:mrow></mml:msup><mml:mo>-</mml:mo><mml:mn>2</mml:mn><mml:mi>x</mml:mi><mml:mrow><mml:mo stretchy="false">(</mml:mo><mml:mrow><mml:mover accent="true"><mml:mrow><mml:msub><mml:mrow><mml:mi>c</mml:mi></mml:mrow><mml:mrow><mml:mi>i</mml:mi></mml:mrow></mml:msub><mml:msub><mml:mrow><mml:mi>c</mml:mi></mml:mrow><mml:mrow><mml:mi>o</mml:mi></mml:mrow></mml:msub></mml:mrow><mml:mo>&#x00304;</mml:mo></mml:mover></mml:mrow><mml:mo stretchy="false">)</mml:mo></mml:mrow><mml:mo class="qopname">cos</mml:mo><mml:mi>&#x003B8;</mml:mi><mml:mo>&#x0002B;</mml:mo><mml:mrow><mml:mo stretchy="false">(</mml:mo><mml:mrow><mml:msup><mml:mrow><mml:mrow><mml:mo stretchy="false">(</mml:mo><mml:mrow><mml:mover 
accent="true"><mml:mrow><mml:msub><mml:mrow><mml:mi>c</mml:mi></mml:mrow><mml:mrow><mml:mi>i</mml:mi></mml:mrow></mml:msub><mml:msub><mml:mrow><mml:mi>c</mml:mi></mml:mrow><mml:mrow><mml:mi>o</mml:mi></mml:mrow></mml:msub></mml:mrow><mml:mo class="qopname">&#x00304;</mml:mo></mml:mover></mml:mrow><mml:mo stretchy="false">)</mml:mo></mml:mrow></mml:mrow><mml:mrow><mml:mn>2</mml:mn></mml:mrow></mml:msup><mml:mo>-</mml:mo><mml:msup><mml:mrow><mml:mrow><mml:mo stretchy="false">(</mml:mo><mml:mrow><mml:msub><mml:mrow><mml:mi>r</mml:mi></mml:mrow><mml:mrow><mml:mi>o</mml:mi></mml:mrow></mml:msub></mml:mrow><mml:mo stretchy="false">)</mml:mo></mml:mrow></mml:mrow><mml:mrow><mml:mn>2</mml:mn></mml:mrow></mml:msup></mml:mrow><mml:mo stretchy="false">)</mml:mo></mml:mrow><mml:mo>=</mml:mo><mml:mn>0</mml:mn></mml:math></inline-formula> for &#x025B3;(<italic>c</italic><sub><italic>o</italic></sub><italic>q</italic><sub><italic>i</italic></sub><italic>c</italic><sub><italic>i</italic></sub>). From the set of solutions <inline-formula><mml:math id="M60"><mml:mi>x</mml:mi><mml:mo>=</mml:mo><mml:mrow><mml:mo>{</mml:mo><mml:mrow><mml:msub><mml:mrow><mml:mi>q</mml:mi></mml:mrow><mml:mrow><mml:mi>i</mml:mi></mml:mrow></mml:msub><mml:mo>,</mml:mo><mml:msubsup><mml:mrow><mml:mi>q</mml:mi></mml:mrow><mml:mrow><mml:mi>i</mml:mi></mml:mrow><mml:mrow><mml:mi>&#x02032;</mml:mi></mml:mrow></mml:msubsup></mml:mrow><mml:mo>}</mml:mo></mml:mrow></mml:math></inline-formula>, we consider ones that are inside the generalized cylinder (the solutions that satisfy <inline-formula><mml:math id="M61"><mml:mover accent="true"><mml:mrow><mml:mi>x</mml:mi><mml:msub><mml:mrow><mml:mi>c</mml:mi></mml:mrow><mml:mrow><mml:mi>i</mml:mi></mml:mrow></mml:msub></mml:mrow><mml:mo>&#x00304;</mml:mo></mml:mover><mml:mo>&#x02264;</mml:mo><mml:msub><mml:mrow><mml:mi>r</mml:mi></mml:mrow><mml:mrow><mml:mi>i</mml:mi></mml:mrow></mml:msub></mml:math></inline-formula>). 
Both solutions in Figure <xref ref-type="fig" rid="F18">18B</xref> are valid while in Figure <xref ref-type="fig" rid="F18">18D</xref>, <italic>q</italic><sub><italic>i</italic></sub> is outside the GC and hence invalid. For each valid solution, we can calculate a final ratio that forces the reproduction to pass through that point. For instance, in Figure <xref ref-type="fig" rid="F18">18B</xref>, the final ratio for passing through <italic>q</italic><sub><italic>i</italic></sub> can be calculated as <inline-formula><mml:math id="M62"><mml:msub><mml:mrow><mml:mi>&#x003B7;</mml:mi></mml:mrow><mml:mrow><mml:mi>f</mml:mi></mml:mrow></mml:msub><mml:mo>=</mml:mo><mml:mo>|</mml:mo><mml:mo>|</mml:mo><mml:mover accent="true"><mml:mrow><mml:msub><mml:mrow><mml:mi>c</mml:mi></mml:mrow><mml:mrow><mml:mi>i</mml:mi></mml:mrow></mml:msub><mml:msub><mml:mrow><mml:mi>q</mml:mi></mml:mrow><mml:mrow><mml:mi>i</mml:mi></mml:mrow></mml:msub></mml:mrow><mml:mo>&#x00304;</mml:mo></mml:mover><mml:mo>|</mml:mo><mml:mo>|</mml:mo><mml:mo>/</mml:mo><mml:msub><mml:mrow><mml:mi>r</mml:mi></mml:mrow><mml:mrow><mml:mi>i</mml:mi></mml:mrow></mml:msub></mml:math></inline-formula>.</p>
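The geometry above reduces to solving one quadratic per detected obstacle. The sketch below is an illustrative reconstruction of that calculation for the circular-cross-section case; the function name and argument conventions are assumptions, and the ray direction is taken from c<sub>i</sub> toward p<sub>i</sub> as in Figures 18B,D.

```python
import math

def final_ratios(ci, co, pi_, ro, ri):
    """Candidate final ratios eta_f for avoiding a spherical obstacle.

    ci  : cross-section centre on the directrix (closest to the obstacle)
    co  : centre of the obstacle's bounding sphere, ro its radius
    pi_ : closest point on the fixed-ratio reproduction to co
    ri  : radius of the cross-section at ci

    Solves x^2 - 2 x |ci co| cos(theta) + (|ci co|^2 - ro^2) = 0 for the
    distances x along the ray ci->pi_ where it meets the sphere, and keeps
    only solutions inside the GC (x <= ri); eta_f = |ci q| / ri.
    """
    v = [co[k] - ci[k] for k in range(3)]      # ci -> co
    w = [pi_[k] - ci[k] for k in range(3)]     # ci -> pi
    d_co = math.sqrt(sum(a * a for a in v))
    d_pi = math.sqrt(sum(a * a for a in w))
    cos_t = sum(a * b for a, b in zip(v, w)) / (d_co * d_pi)
    disc = (d_co * cos_t) ** 2 - (d_co ** 2 - ro ** 2)
    if disc < 0:
        return []                               # ray misses the sphere
    roots = [d_co * cos_t - math.sqrt(disc), d_co * cos_t + math.sqrt(disc)]
    return [x / ri for x in roots if 0 <= x <= ri]
```

Each returned ratio corresponds to a reproduction that grazes the sphere at one of the intersection points q<sub>i</sub>.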
<p>Now we estimate the decay constant &#x003B3; that defines how fast the final ratio &#x003B7;<sub><italic>f</italic></sub> is reached. This constant can be estimated by finding <italic>u</italic><sub><italic>c</italic></sub>, the minimum distance from the initial pose <italic>p</italic><sub>0</sub> to the sphere along the directrix. The ratio at arc-length <italic>u</italic><sub><italic>c</italic></sub> should reach and stay within a certain percentage<xref ref-type="fn" rid="fn0010"><sup>10</sup></xref> of &#x003B7;<sub><italic>f</italic></sub>. We call this the critical ratio, denoted by &#x003B7;<sub><italic>c</italic></sub>. Finally, we can calculate the critical decay constant &#x003B3;<sub><italic>c</italic></sub> by substituting &#x003B7;<sub><italic>c</italic></sub> and <italic>u</italic><sub><italic>c</italic></sub> in (10) and solving for &#x003B3;, which gives &#x003B3;<sub><italic>c</italic></sub> &#x0003D; (1/<italic>u</italic><sub><italic>c</italic></sub>)ln((&#x003B7;<sub><italic>o</italic></sub>&#x02212;&#x003B7;<sub><italic>f</italic></sub>)/(&#x003B7;<sub><italic>c</italic></sub>&#x02212;&#x003B7;<sub><italic>f</italic></sub>)).</p>
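As a small numerical check of the expression above, the sketch below computes &#x003B3;<sub>c</sub> and evaluates the adaptive ratio. The exponential schedule &#x003B7;(u) &#x0003D; &#x003B7;<sub>f</sub> &#x0002B; (&#x003B7;<sub>0</sub>&#x02212;&#x003B7;<sub>f</sub>)e<sup>&#x02212;&#x003B3;u</sup> is an assumption inferred from the solved form of &#x003B3;<sub>c</sub>, since Equation (10) itself appears earlier in the paper; function names are hypothetical.

```python
import math

def critical_decay(eta0, eta_f, eta_c, u_c):
    """gamma_c = (1/u_c) * ln((eta0 - eta_f) / (eta_c - eta_f))."""
    return (1.0 / u_c) * math.log((eta0 - eta_f) / (eta_c - eta_f))

def adaptive_ratio(u, eta0, eta_f, gamma):
    """Assumed adaptive-ratio schedule consistent with gamma_c above:
    eta(u) = eta_f + (eta0 - eta_f) * exp(-gamma * u)."""
    return eta_f + (eta0 - eta_f) * math.exp(-gamma * u)
```

By construction, with &#x003B3; &#x0003D; &#x003B3;<sub>c</sub> the ratio equals exactly &#x003B7;<sub>c</sub> at arc-length <italic>u</italic><sub><italic>c</italic></sub>, i.e., the reproduction has settled near &#x003B7;<sub><italic>f</italic></sub> before it reaches the obstacle.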
<p>By employing the adaptive-ratio strategy with the calculated &#x003B7;<sub><italic>f</italic></sub> and &#x003B3;<sub><italic>c</italic></sub>, we see in Figures <xref ref-type="fig" rid="F18">18A,C</xref> that the adapted reproductions avoid the obstacle. Note that the trajectory reproduced using our method is tangent to the sphere at the intersection point <italic>q</italic><sub><italic>i</italic></sub>. By enlarging the bounding sphere (e.g., increasing its radius by a certain percentage), we can force the reproduction to avoid the obstacle from a safe distance. The proposed method is computationally efficient and requires no parameter tuning: as soon as an obstacle is detected, the method automatically generates an adapted reproduction of the skill. It can also easily be extended to GCs with closed-spline cross-sections. One limitation of this approach, however, is the assumption of spherical obstacles, which in some cases (e.g., cylindrical or cubical obstacles) can be inefficient and produce unnecessary deformation.</p>
</sec>
<sec>
<title>7.2.2. Method II</title>
<p>Alternatively, a collision can be avoided by deforming the GC using our generalization methods from section 4.3. In practice, this deformation strategy behaves similarly to path planners when dealing with obstacles. However, since the shape of the movement matters in skill learning, the dissimilarity between the original and the deformed GCs should be minimized. To achieve this, as in the previous method, we first estimate a bounding sphere for the detected obstacle. Nierhoff et al. (<xref ref-type="bibr" rid="B29">2016</xref>) have shown that by treating an obstacle as a positional constraint in (14), the trajectory can adapt to avoid the obstacle. Using this feature, and by introducing the bounding sphere as a positional constraint in Algorithm <xref ref-type="table" rid="T5">4</xref>, we estimate a transformation function that deforms the GC to avoid the obstacle while satisfying the landmarks and preserving the shape of the movement as much as possible. We have evaluated this modification to our generalization algorithm on the pick-and-place task of experiment five. The deformed GC shown in Figure <xref ref-type="fig" rid="F18">18E</xref> avoids the obstacle and can be used to generate collision-free trajectories. Since representing non-spherical obstacles as positional constraints is nontrivial, this method may also deform the GC more than necessary when dealing with non-spherical obstacles.</p>
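<p>The positional-constraint deformation can be illustrated with a minimal Laplacian trajectory-editing sketch in the spirit of Nierhoff et al. (<xref ref-type="bibr" rid="B29">2016</xref>); this is not the paper's Algorithm 4, and the soft-constraint weight and function names are our assumptions.</p>

```python
import numpy as np

def laplacian_edit(P, constraints, weight=1e3):
    """Deform an n x d trajectory P so that soft positional constraints
    {index: target_point} are met while the discrete Laplacian of the
    path (its local shape) is preserved in a least-squares sense."""
    n, _ = P.shape
    # Discrete Laplacian: delta_i = p_i - 0.5 (p_{i-1} + p_{i+1});
    # the first and last rows reduce to identity, softly pinning endpoints.
    L = np.eye(n)
    for i in range(1, n - 1):
        L[i, i - 1] = L[i, i + 1] = -0.5
    delta = L @ P
    # Stack shape-preservation rows and heavily weighted constraint rows,
    # then solve the joint least-squares system for the deformed path.
    rows, rhs = [L], [delta]
    for idx, target in constraints.items():
        e = np.zeros((1, n))
        e[0, idx] = 1.0
        rows.append(weight * e)
        rhs.append(weight * np.atleast_2d(target))
    A = np.vstack(rows)
    b = np.vstack(rhs)
    Q, *_ = np.linalg.lstsq(A, b, rcond=None)
    return Q
```

Introducing the obstacle's bounding sphere as such a constraint (e.g., a waypoint pushed to the sphere's surface) bends the path away from the obstacle while keeping the rest of the trajectory close to its original shape.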
</sec>
</sec>
</sec>
<sec sec-type="conclusions" id="s8">
<title>8. Conclusions</title>
<p>We have presented a novel LfD approach for learning and reproducing trajectory-based skills using a geometric representation that maintains the crucial characteristics and implicit boundaries of the skill and generalizes it over the initial and final states of the movement. The proposed approach, TLGC, represents and exploits the demonstration space to reproduce a variety of successful movements. TLGC requires minimal parameter tuning, which not only simplifies use of the algorithm and makes its results consistent, but also makes the approach more convenient for non-expert users. We have shown that TLGC enables users to interactively refine a learned skill through both incremental and constraint-based refinement strategies. We have also introduced three obstacle avoidance strategies for TLGC and compared it to two existing LfD approaches.</p>
</sec>
<sec id="s9">
<title>Author Contributions</title>
<p>SA and SC contributed significantly to the development of the presented approach, the execution of the experiments, the analysis of the results, and the preparation of the manuscript. Both authors approved the submitted version of the manuscript.</p>
<sec>
<title>Conflict of Interest Statement</title>
<p>The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.</p>
</sec>
</sec>
</body>
<back>
<ref-list>
<title>References</title>
<ref id="B1">
<citation citation-type="book"><person-group person-group-type="author"><name><surname>Abbena</surname> <given-names>E.</given-names></name> <name><surname>Salamon</surname> <given-names>S.</given-names></name> <name><surname>Gray</surname> <given-names>A.</given-names></name></person-group> (<year>2006</year>). <source>Modern Differential Geometry of Curves and Surfaces With Mathematica</source>. <publisher-name>CRC Press</publisher-name>.</citation></ref>
<ref id="B2">
<citation citation-type="book"><person-group person-group-type="author"><name><surname>Ahmadzadeh</surname> <given-names>S. R.</given-names></name> <name><surname>Asif</surname> <given-names>R. M.</given-names></name> <name><surname>Chernova</surname> <given-names>S.</given-names></name></person-group> (<year>2017</year>). <article-title>Generalized cylinders for learning, reproduction, generalization, and refinement of robot skills</article-title> in <source>Robotics: Science and Systems (RSS)</source> (<publisher-loc>Cambridge, MA</publisher-loc>), <fpage>1</fpage>&#x02013;<lpage>10</lpage>.</citation></ref>
<ref id="B3">
<citation citation-type="book"><person-group person-group-type="author"><name><surname>Ahmadzadeh</surname> <given-names>S. R.</given-names></name> <name><surname>Kaushik</surname> <given-names>R.</given-names></name> <name><surname>Chernova</surname> <given-names>S.</given-names></name></person-group> (<year>2016</year>). <article-title>Trajectory learning from demonstration with canal surfaces: a parameter-free approach</article-title> in <source>16th IEEE-RAS International Conference on Humanoid Robots (Humanoids)</source> (<publisher-loc>Cancun</publisher-loc>), <fpage>544</fpage>&#x02013;<lpage>549</lpage>. <pub-id pub-id-type="doi">10.1109/HUMANOIDS.2016.7803328</pub-id></citation></ref>
<ref id="B4">
<citation citation-type="book"><person-group person-group-type="author"><name><surname>Argall</surname> <given-names>B. D.</given-names></name> <name><surname>Browning</surname> <given-names>B.</given-names></name> <name><surname>Veloso</surname> <given-names>M.</given-names></name></person-group> (<year>2009a</year>). <article-title>Automatic weight learning for multiple data sources when learning from demonstration</article-title> in <source>International Conference on Robotics and Automation (ICRA)</source> (<publisher-loc>Kobe</publisher-loc>), <fpage>226</fpage>&#x02013;<lpage>231</lpage>. <pub-id pub-id-type="doi">10.1109/ROBOT.2009.5152668</pub-id></citation></ref>
<ref id="B5">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Argall</surname> <given-names>B. D.</given-names></name> <name><surname>Chernova</surname> <given-names>S.</given-names></name> <name><surname>Veloso</surname> <given-names>M.</given-names></name> <name><surname>Browning</surname> <given-names>B.</given-names></name></person-group> (<year>2009b</year>). <article-title>A survey of robot learning from demonstration</article-title>. <source>Robot. Auton. Syst.</source> <volume>57</volume>, <fpage>469</fpage>&#x02013;<lpage>483</lpage>. <pub-id pub-id-type="doi">10.1016/j.robot.2008.10.024</pub-id></citation></ref>
<ref id="B6">
<citation citation-type="book"><person-group person-group-type="author"><name><surname>Argall</surname> <given-names>B. D.</given-names></name> <name><surname>Sauser</surname> <given-names>E. L.</given-names></name> <name><surname>Billard</surname> <given-names>A. G.</given-names></name></person-group> (<year>2010</year>). <article-title>Tactile guidance for policy refinement and reuse</article-title> in <source>9th International Conference on Development and Learning (ICDL)</source> (<publisher-loc>Ann Arbor, MI</publisher-loc>), <fpage>7</fpage>&#x02013;<lpage>12</lpage>. <pub-id pub-id-type="doi">10.1109/DEVLRN.2010.5578872</pub-id></citation></ref>
<ref id="B7">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Belongie</surname> <given-names>S.</given-names></name> <name><surname>Malik</surname> <given-names>J.</given-names></name> <name><surname>Puzicha</surname> <given-names>J.</given-names></name></person-group> (<year>2002</year>). <article-title>Shape matching and object recognition using shape contexts</article-title>. <source>IEEE Trans. Pattern Analys. Mach. Intell.</source> <volume>24</volume>, <fpage>509</fpage>&#x02013;<lpage>522</lpage>.</citation></ref>
<ref id="B8">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Bishop</surname> <given-names>R. L.</given-names></name></person-group> (<year>1975</year>). <article-title>There is more than one way to frame a curve</article-title>. <source> Am. Math. Month.</source> <volume>82</volume>, <fpage>246</fpage>&#x02013;<lpage>251</lpage>.</citation></ref>
<ref id="B9">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Bookstein</surname> <given-names>F. L.</given-names></name></person-group> (<year>1989</year>). <article-title>Principal warps: thin-plate splines and the decomposition of deformations</article-title>. <source>IEEE Trans. Pattern Analys. Mach. Intell.</source> <volume>11</volume>, <fpage>567</fpage>&#x02013;<lpage>585</lpage>.</citation></ref>
<ref id="B10">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Calinon</surname> <given-names>S.</given-names></name></person-group> (<year>2016</year>). <article-title>A tutorial on task-parameterized movement learning and retrieval</article-title>. <source>Intell. Serv. Robot.</source> <volume>9</volume>, <fpage>1</fpage>&#x02013;<lpage>29</lpage>. <pub-id pub-id-type="doi">10.1007/s11370-015-0187-9</pub-id></citation></ref>
<ref id="B11">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Calinon</surname> <given-names>S.</given-names></name> <name><surname>Guenter</surname> <given-names>F.</given-names></name> <name><surname>Billard</surname> <given-names>A.</given-names></name></person-group> (<year>2007</year>). <article-title>On learning, representing, and generalizing a task in a humanoid robot</article-title>. <source>IEEE Trans. Cybern.</source> <volume>37</volume>, <fpage>286</fpage>&#x02013;<lpage>298</lpage>. <pub-id pub-id-type="doi">10.1109/TSMCB.2006.886952</pub-id><pub-id pub-id-type="pmid">17416157</pub-id></citation></ref>
<ref id="B12">
<citation citation-type="book"><person-group person-group-type="author"><name><surname>Calinon</surname> <given-names>S.</given-names></name> <name><surname>Sardellitti</surname> <given-names>I.</given-names></name> <name><surname>Caldwell</surname> <given-names>D. G.</given-names></name></person-group> (<year>2010</year>). <article-title>Learning-based control strategy for safe human-robot interaction exploiting task and robot redundancies</article-title> in <source>2010 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS)</source> (<publisher-loc>Taipei</publisher-loc>), <fpage>249</fpage>&#x02013;<lpage>254</lpage>. <pub-id pub-id-type="doi">10.1109/IROS.2010.5648931</pub-id></citation></ref>
<ref id="B13">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Carroll</surname> <given-names>D.</given-names></name> <name><surname>K&#x000F6;se</surname> <given-names>E.</given-names></name> <name><surname>Sterling</surname> <given-names>I.</given-names></name></person-group> (<year>2013</year>). <article-title>Improving Frenet&#x00027;s frame using Bishop&#x00027;s frame</article-title>. <source>arXiv preprint arXiv:1311.5857</source>.</citation></ref>
<ref id="B14">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Corrales</surname> <given-names>J.</given-names></name> <name><surname>Candelas</surname> <given-names>F.</given-names></name> <name><surname>Torres</surname> <given-names>F.</given-names></name></person-group> (<year>2011</year>). <article-title>Safe human&#x02013;robot interaction based on dynamic sphere-swept line bounding volumes</article-title>. <source>Robot. Comput. Integr. Manuf.</source> <volume>27</volume>, <fpage>177</fpage>&#x02013;<lpage>185</lpage>. <pub-id pub-id-type="doi">10.1016/j.rcim.2010.07.005</pub-id></citation></ref>
<ref id="B15">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Dong</surname> <given-names>S.</given-names></name> <name><surname>Williams</surname> <given-names>B.</given-names></name></person-group> (<year>2012</year>). <article-title>Learning and recognition of hybrid manipulation motions in variable environments using probabilistic flow tubes</article-title>. <source>Int. J. Soc. Robotics</source> <volume>4</volume>, <fpage>357</fpage>&#x02013;<lpage>368</lpage>. <pub-id pub-id-type="doi">10.1007/s12369-012-0155-x</pub-id></citation></ref>
<ref id="B16">
<citation citation-type="book"><person-group person-group-type="author"><name><surname>Fischer</surname> <given-names>K.</given-names></name> <name><surname>G&#x000E4;rtner</surname> <given-names>B.</given-names></name> <name><surname>Kutz</surname> <given-names>M.</given-names></name></person-group> (<year>2003</year>). <article-title>Fast smallest-enclosing-ball computation in high dimensions</article-title> in <source>European Symposium on Algorithms</source> (<publisher-loc>Budapest</publisher-loc>: <publisher-name>Springer</publisher-name>), <fpage>630</fpage>&#x02013;<lpage>641</lpage>. <pub-id pub-id-type="doi">10.1007/978-3-540-39658-1_57</pub-id></citation></ref>
<ref id="B17">
<citation citation-type="book"><person-group person-group-type="author"><name><surname>Grimes</surname> <given-names>D. B.</given-names></name> <name><surname>Chalodhorn</surname> <given-names>R.</given-names></name> <name><surname>Rao</surname> <given-names>R. P.</given-names></name></person-group> (<year>2006</year>). <article-title>Dynamic imitation in a humanoid robot through nonparametric probabilistic inference</article-title> in <source>Robotics: Science and Systems (RSS)</source> (<publisher-loc>Cambridge, MA</publisher-loc>), <fpage>199</fpage>&#x02013;<lpage>206</lpage>.</citation></ref>
<ref id="B18">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Hartmann</surname> <given-names>E.</given-names></name></person-group> (<year>2003</year>). <source>Geometry and Algorithms for Computer Aided Design.</source> Darmstadt University of Technology.</citation></ref>
<ref id="B19">
<citation citation-type="book"><person-group person-group-type="author"><name><surname>Hersch</surname> <given-names>M.</given-names></name> <name><surname>Guenter</surname> <given-names>F.</given-names></name> <name><surname>Calinon</surname> <given-names>S.</given-names></name> <name><surname>Billard</surname> <given-names>A. G.</given-names></name></person-group> (<year>2006</year>). <article-title>Learning dynamical system modulation for constrained reaching tasks</article-title> in <source>6th IEEE-RAS International Conference on Humanoid Robots (Humanoids)</source> (<publisher-loc>Genoa</publisher-loc>), <fpage>444</fpage>&#x02013;<lpage>449</lpage>. <pub-id pub-id-type="doi">10.1109/ICHR.2006.321310</pub-id></citation></ref>
<ref id="B20">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Hilbert</surname> <given-names>D.</given-names></name> <name><surname>Cohn-Vossen</surname> <given-names>S.</given-names></name></person-group> (<year>1952</year>). <source>Geometry and the Imagination, Vol. 87</source>. American Mathematical Society.</citation></ref>
<ref id="B21">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Ijspeert</surname> <given-names>A. J.</given-names></name> <name><surname>Nakanishi</surname> <given-names>J.</given-names></name> <name><surname>Hoffmann</surname> <given-names>H.</given-names></name> <name><surname>Pastor</surname> <given-names>P.</given-names></name> <name><surname>Schaal</surname> <given-names>S.</given-names></name></person-group> (<year>2013</year>). <article-title>Dynamical movement primitives: learning attractor models for motor behaviors</article-title>. <source>Neural Comput.</source> <volume>25</volume>, <fpage>328</fpage>&#x02013;<lpage>373</lpage>. <pub-id pub-id-type="doi">10.1162/NECO_a_00393</pub-id><pub-id pub-id-type="pmid">23148415</pub-id></citation></ref>
<ref id="B22">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Khansari-Zadeh</surname> <given-names>S. M.</given-names></name> <name><surname>Billard</surname> <given-names>A.</given-names></name></person-group> (<year>2011</year>). <article-title>Learning stable nonlinear dynamical systems with Gaussian mixture models</article-title>. <source>IEEE Trans. Robot.</source> <volume>27</volume>, <fpage>943</fpage>&#x02013;<lpage>957</lpage>. <pub-id pub-id-type="doi">10.1109/TRO.2011.2159412</pub-id></citation></ref>
<ref id="B23">
<citation citation-type="book"><person-group person-group-type="author"><name><surname>Kormushev</surname> <given-names>P.</given-names></name> <name><surname>Calinon</surname> <given-names>S.</given-names></name> <name><surname>Caldwell</surname> <given-names>D. G.</given-names></name></person-group> (<year>2010</year>). <article-title>Robot motor skill coordination with EM-based reinforcement learning</article-title> in <source>IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS)</source> (<publisher-loc>Taipei</publisher-loc>), <fpage>3232</fpage>&#x02013;<lpage>3237</lpage>. <pub-id pub-id-type="doi">10.1109/IROS.2010.5649089</pub-id></citation></ref>
<ref id="B24">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Lee</surname> <given-names>D.</given-names></name> <name><surname>Ott</surname> <given-names>C.</given-names></name></person-group> (<year>2011</year>). <article-title>Incremental kinesthetic teaching of motion primitives using the motion refinement tube</article-title>. <source>Auton. Robots</source> <volume>31</volume>, <fpage>115</fpage>&#x02013;<lpage>131</lpage>. <pub-id pub-id-type="doi">10.1007/s10514-011-9234-3</pub-id></citation></ref>
<ref id="B25">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Maintz</surname> <given-names>J. B.</given-names></name> <name><surname>Viergever</surname> <given-names>M. A.</given-names></name></person-group> (<year>1998</year>). <article-title>A survey of medical image registration</article-title>. <source>Med. Image Anal.</source> <volume>2</volume>, <fpage>1</fpage>&#x02013;<lpage>36</lpage>. <pub-id pub-id-type="pmid">10638851</pub-id></citation></ref>
<ref id="B26">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Majumdar</surname> <given-names>A.</given-names></name> <name><surname>Tedrake</surname> <given-names>R.</given-names></name></person-group> (<year>2016</year>). <article-title>Funnel libraries for real-time robust feedback motion planning</article-title>. <source>arXiv preprint arXiv:1601.04037</source>.</citation></ref>
<ref id="B27">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Mart&#x000ED;nez-Salvador</surname> <given-names>B.</given-names></name> <name><surname>P&#x000E9;rez-Francisco</surname> <given-names>M.</given-names></name> <name><surname>Del Pobil</surname> <given-names>A. P.</given-names></name></person-group> (<year>2003</year>). <article-title>Collision detection between robot arms and people</article-title>. <source>J. Intell. Robot. Syst.</source> <volume>38</volume>, <fpage>105</fpage>&#x02013;<lpage>119</lpage>. <pub-id pub-id-type="doi">10.1023/A:1026252228930</pub-id></citation></ref>
<ref id="B28">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Myers</surname> <given-names>C.</given-names></name> <name><surname>Rabiner</surname> <given-names>L.</given-names></name> <name><surname>Rosenberg</surname> <given-names>A.</given-names></name></person-group> (<year>1980</year>). <article-title>Performance tradeoffs in dynamic time warping algorithms for isolated word recognition</article-title>. <source>IEEE Trans. Acoust. Speech Signal Process.</source> <volume>28</volume>, <fpage>623</fpage>&#x02013;<lpage>635</lpage>.</citation></ref>
<ref id="B29">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Nierhoff</surname> <given-names>T.</given-names></name> <name><surname>Hirche</surname> <given-names>S.</given-names></name> <name><surname>Nakamura</surname> <given-names>Y.</given-names></name></person-group> (<year>2016</year>). <article-title>Spatial adaption of robot trajectories based on laplacian trajectory editing</article-title>. <source>Auton. Robots</source> <volume>40</volume>, <fpage>159</fpage>&#x02013;<lpage>173</lpage>. <pub-id pub-id-type="doi">10.1007/s10514-015-9442-3</pub-id></citation></ref>
<ref id="B30">
<citation citation-type="book"><person-group person-group-type="author"><name><surname>Paraschos</surname> <given-names>A.</given-names></name> <name><surname>Daniel</surname> <given-names>C.</given-names></name> <name><surname>Peters</surname> <given-names>J. R.</given-names></name> <name><surname>Neumann</surname> <given-names>G.</given-names></name></person-group> (<year>2013</year>). <article-title>Probabilistic movement primitives</article-title> in <source>Advances in Neural Information Processing Systems</source> (<publisher-loc>Lake Tahoe, NV</publisher-loc>), <fpage>2616</fpage>&#x02013;<lpage>2624</lpage>.</citation></ref>
<ref id="B31">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Pauly</surname> <given-names>M.</given-names></name> <name><surname>Mitra</surname> <given-names>N. J.</given-names></name> <name><surname>Giesen</surname> <given-names>J.</given-names></name> <name><surname>Gross</surname> <given-names>M. H.</given-names></name> <name><surname>Guibas</surname> <given-names>L. J.</given-names></name></person-group> (<year>2005</year>). <article-title>Example-based 3D scan completion</article-title> in <source>Symposium on Geometry Processing</source>, EPFL-CONF-149337 (<publisher-loc>Vienna</publisher-loc>), <fpage>23</fpage>&#x02013;<lpage>32</lpage>.</citation></ref>
<ref id="B32">
<citation citation-type="book"><person-group person-group-type="author"><name><surname>Quinlan</surname> <given-names>S.</given-names></name> <name><surname>Khatib</surname> <given-names>O.</given-names></name></person-group> (<year>1993</year>). <article-title>Elastic bands: connecting path planning and control</article-title> in <source>IEEE International Conference on Robotics and Automation (ICRA)</source> (<publisher-loc>Atlanta, GA</publisher-loc>), <fpage>802</fpage>&#x02013;<lpage>807</lpage>. <pub-id pub-id-type="doi">10.1109/ROBOT.1993.291936</pub-id></citation></ref>
<ref id="B33">
<citation citation-type="book"><person-group person-group-type="author"><name><surname>Rana</surname> <given-names>M. A.</given-names></name> <name><surname>Mukadam</surname> <given-names>M.</given-names></name> <name><surname>Ahmadzadeh</surname> <given-names>S. R.</given-names></name> <name><surname>Chernova</surname> <given-names>S.</given-names></name> <name><surname>Boots</surname> <given-names>B.</given-names></name></person-group> (<year>2017</year>). <article-title>Towards robust skill generalization: unifying learning from demonstration and motion planning</article-title> in <source>Proceedings of 1st Annual Conference on Robot Learning</source> (<publisher-loc>Mountain view, CA</publisher-loc>), <fpage>109</fpage>&#x02013;<lpage>118</lpage>.</citation></ref>
<ref id="B34">
<citation citation-type="book"><person-group person-group-type="author"><name><surname>Reiner</surname> <given-names>B.</given-names></name> <name><surname>Ertel</surname> <given-names>W.</given-names></name> <name><surname>Posenauer</surname> <given-names>H.</given-names></name> <name><surname>Schneider</surname> <given-names>M.</given-names></name></person-group> (<year>2014</year>). <article-title>LAT: a simple learning from demonstration method</article-title> in <source>IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS)</source> (<publisher-loc>Chicago, IL</publisher-loc>), <fpage>4436</fpage>&#x02013;<lpage>4441</lpage>.</citation></ref>
<ref id="B35">
<citation citation-type="book"><person-group person-group-type="author"><name><surname>Schneider</surname> <given-names>M.</given-names></name> <name><surname>Ertel</surname> <given-names>W.</given-names></name></person-group> (<year>2010</year>). <article-title>Robot learning by demonstration with local Gaussian process regression</article-title> in <source>IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS)</source> (<publisher-loc>Taipei</publisher-loc>), <fpage>255</fpage>&#x02013;<lpage>260</lpage>.</citation></ref>
<ref id="B36">
<citation citation-type="book"><person-group person-group-type="author"><name><surname>Schulman</surname> <given-names>J.</given-names></name> <name><surname>Ho</surname> <given-names>J.</given-names></name> <name><surname>Lee</surname> <given-names>C.</given-names></name> <name><surname>Abbeel</surname> <given-names>P.</given-names></name></person-group> (<year>2016</year>). <article-title>Learning from demonstrations through the use of non-rigid registration</article-title> in <source>Robotics Research</source> (<publisher-loc>Springer</publisher-loc>), <fpage>339</fpage>&#x02013;<lpage>354</lpage>. <pub-id pub-id-type="doi">10.1007/978-3-319-28872-7_20</pub-id></citation></ref>
<ref id="B37">
<citation citation-type="book"><person-group person-group-type="author"><name><surname>Shanmugavel</surname> <given-names>M.</given-names></name> <name><surname>Tsourdos</surname> <given-names>A.</given-names></name> <name><surname>Zbikowski</surname> <given-names>R.</given-names></name> <name><surname>White</surname> <given-names>B. A.</given-names></name></person-group> (<year>2007</year>). <article-title>3D path planning for multiple UAVs using Pythagorean hodograph curves</article-title> in <source>AIAA Guidance, Navigation, and Control Conference</source> (<publisher-loc>Hilton Head, SC</publisher-loc>), <fpage>20</fpage>&#x02013;<lpage>23</lpage>.</citation></ref>
<ref id="B38">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Vijayakumar</surname> <given-names>S.</given-names></name> <name><surname>D&#x00027;Souza</surname> <given-names>A.</given-names></name> <name><surname>Schaal</surname> <given-names>S.</given-names></name></person-group> (<year>2005</year>). <article-title>Incremental online learning in high dimensions</article-title>. <source>Neural Comput.</source> <volume>17</volume>, <fpage>2602</fpage>&#x02013;<lpage>2634</lpage>. <pub-id pub-id-type="doi">10.1162/089976605774320557</pub-id><pub-id pub-id-type="pmid">16212764</pub-id></citation></ref>
<ref id="B39">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Yanco</surname> <given-names>H. A.</given-names></name> <name><surname>Norton</surname> <given-names>A.</given-names></name> <name><surname>Ober</surname> <given-names>W.</given-names></name> <name><surname>Shane</surname> <given-names>D.</given-names></name> <name><surname>Skinner</surname> <given-names>A.</given-names></name> <name><surname>Vice</surname> <given-names>J.</given-names></name></person-group> (<year>2015</year>). <article-title>Analysis of human-robot interaction at the DARPA robotics challenge trials</article-title>. <source>J. Field Robot.</source> <volume>32</volume>, <fpage>420</fpage>&#x02013;<lpage>444</lpage>. <pub-id pub-id-type="doi">10.1002/rob.21568</pub-id></citation></ref>
</ref-list>
<fn-group>
<fn id="fn0001"><p><sup>1</sup>This work focuses on LfD approaches that encode trajectory-based skills (i.e., movements) and we do not discuss goal-based LfD approaches.</p></fn>
<fn id="fn0002"><p><sup>2</sup>A plane simple closed-curve, also known as the Jordan curve, is a continuous loop that divides the plane into an interior region and an exterior region.</p></fn>
<fn id="fn0003"><p><sup>3</sup>A regular curve is a differentiable curve whose derivatives never vanish.</p></fn>
<fn id="fn0004"><p><sup>4</sup>A pencil is a family of geometric objects sharing a common property (e.g., spheres).</p></fn>
<fn id="fn0005"><p><sup>5</sup>An implicit surface is a surface in Euclidean space that can be represented as a function <italic>F</italic> defined by equation <italic>F</italic>(<italic>x</italic><sub>1</sub>(<italic>u</italic>), <italic>x</italic><sub>2</sub>(<italic>u</italic>), <italic>x</italic><sub>3</sub>(<italic>u</italic>)) &#x0003D; 0.</p></fn>
<fn id="fn0006"><p><sup>6</sup>An envelope is a curve/surface tangent to a family of curves/surfaces (2D or 3D).</p></fn>
<fn id="fn0007"><p><sup>7</sup><ext-link ext-link-type="uri" xlink:href="https://github.com/rezaahmadzadeh/TLGC">https://github.com/rezaahmadzadeh/TLGC</ext-link></p></fn>
<fn id="fn0008"><p><sup>8</sup>In all of the figures, the demonstrations, directrix, and reproductions are plotted in red, blue, and magenta, respectively.</p></fn>
<fn id="fn0009"><p><sup>9</sup>The accompanying video shows the execution of the task: <ext-link ext-link-type="uri" xlink:href="https://youtu.be/KqUgT72G8Pw">https://youtu.be/KqUgT72G8Pw</ext-link></p></fn>
<fn id="fn0010"><p><sup>10</sup>We empirically found that 2&#x02013;5% is a good range.</p></fn>
</fn-group>
<fn-group>
<fn fn-type="financial-disclosure"><p><bold>Funding.</bold> This work is supported in part by the Office of Naval Research award N000141410795.</p>
</fn></fn-group>
</back>
</article>