ORIGINAL RESEARCH article

Front. Robot. AI, 15 July 2025

Sec. Humanoid Robotics

Volume 12 - 2025 | https://doi.org/10.3389/frobt.2025.1574110

Versatile kinematics-based constraint identification applied to robot task reproduction

  • 1Department of Biomechanical Engineering, University of Twente, Enschede, Netherlands
  • 2Department of Robotics and Mechatronics, University of Twente, Enschede, Netherlands

Identifying kinematic constraints between a robot and its environment can improve autonomous task execution, for example, in Learning from Demonstration. Constraint identification methods in the literature often require specific prior constraint models, geometry or noise estimates, or force measurements. Because such specific prior information or measurements are not always available, we propose a versatile kinematics-only method. We identify constraints using constraint reference frames, which are attached to a robot or ground body and may have zero-velocity constraints along their axes. Given measured kinematics, constraint frames are identified by minimizing a norm on the Cartesian components of the velocities expressed in that frame. Thereby, a minimal representation of the velocities is found, representing the zero-velocity constraints we aim to find. In simulation experiments, we identified the geometry (position and orientation) of twelve different constraints, including articulated contacts, polyhedral contacts, and contour-following contacts. Identification errors were found to increase linearly with sensor noise. In robot experiments, we identified constraint frames in various tasks and used them for task reproduction. Reproduction performance was similar when using our constraint identification method compared to methods from the literature. Our method can be applied to a large variety of robots in environments without prior constraint information, such as in everyday robot settings.

1 Introduction

Autonomous robotic manipulation has the potential to improve human lives by alleviating physical effort. Robots may offer advantages over human labor regarding consistency, endurance, strength, accuracy, and/or precision in fields such as healthcare, logistics, exploration, and the manufacturing industry. However, autonomous robots are typically designed for one specific task in one specific environment, hence they lack versatility. Tasks in different environments therefore often require different robots, which may be expensive and impractical.

It may therefore be beneficial to develop robots that are versatile in their task execution, allowing a single robot to deal with a variety of tasks and environments. Manipulation tasks may include pick-and-place tasks (e.g., order picking), contact tasks (e.g., wiping, polishing, contour following, and opening/closing compartments) and tool-use tasks (e.g., hammering and screwing). Common environments may include moveable objects such as tools, as well as (immovable) physical constraints.

Models of the physical environments may assist in versatile task execution. For example, door opening is a common everyday task involving similar articulation mechanisms across most doors. If the interaction mechanisms (e.g., hinges and slides) can be modeled, a robot may apply the same control to all environments that have the same mechanisms, thereby improving robot versatility. However, manually creating such models may be cumbersome due to many possible variations in the environment, such as object positions, orientations, shapes, and dynamics. There is therefore a need for automatic modeling of physical environments (Kroemer et al., 2020).

Automatically modeling physical environments may occur in a Learning from Demonstration (LfD) context (Kroemer et al., 2020). In LfD, robots learn to perform tasks from human demonstrations, for example, by manually guiding a robot, rather than by explicit programming. From the demonstration data, the physical interactions throughout the demonstration may be modeled, which in turn may be used in autonomous task reproduction.

Physical environments typically contain kinematic constraints that restrict movement. In everyday settings numerous constraints may be encountered, including articulated/mechanism contacts, such as prismatic (drawers) and revolute joints (doors), polyhedral contacts such as pin-plane contacts (pen drawing) and plane-plane contact (box sliding), and contour-following contacts (dusting).

Constraint awareness can benefit task execution in several ways. First, tasks can often be simplified when expressed in the constraints (Bruyninckx and De Schutter, 1996). Second, choosing a suitable control method, e.g., position, velocity, force, or impedance control can improve both task performance and stability by keeping undesirable interaction forces low (Conkey and Hermans, 2019; De Schutter et al., 1999). Third, planning to avoid constraints can simplify some tasks because there are fewer state transitions to consider, such as in reaching tasks (Oriolo and Vendittelli, 2009). Alternatively, planning to introduce constraints can simplify some tasks by reducing the free space of the robot, for example, during object alignment (Suomalainen et al., 2021). Fourth, differentiating between constrained states in a task provides meaningful, tractable building blocks, which can improve task planning (Ureche et al., 2015; Jain and Niekum, 2018; Holladay et al., 2021) and facilitate learning (Simonič et al., 2024). Fifth, some constraints are common in many robot tasks and can therefore be a basis for generalization between tasks (Li and Brock, 2022; Li et al., 2023).

Kinematic constraints consist of (i) a constraint class, e.g., a type of joint or contact, and (ii) a constraint geometry, e.g., the orientation and position of a rotation axis or surface normal (De Schutter et al., 1999). Identifying constraints from data therefore requires (i) classifying the constraint class and (ii) identifying the constraint geometry, both of which are the topic of this work.

In this work, we make three assumptions about robot manipulation to limit our scope. First, we assume that robots are rigidly attached to a constrained mechanism or object in contact with the environment, and thus leave grasping and environmental compliance out of our scope. Second, we assume that there is a single maintained contact between a robot and the environment, such as a contact point, line, plane, or other continuous contact area. Third, we assume that all unconstrained degrees of freedom are excited, to prevent ambiguity between axes that are truly constrained and axes that are free but never moved.

Bruyninckx and De Schutter defined kinematic constraint classes based on contact (Bruyninckx and De Schutter, 1993a; Bruyninckx et al., 1993b). They proposed two identification methods: the first is based on the absence of mechanical power in the direction of the constraints, requiring position and force measurements; the second on whether velocities or forces separately fit candidate constraint models. They used both methods in a Kalman filter with a candidate constraint model of a specific class to estimate constraint geometry, such as contact points, axes of rotation, and polyhedral contact normals (De Schutter et al., 1999).

Several authors extended the methods of Bruyninckx and De Schutter, mainly toward multi-contact polyhedral contacts (Meeussen et al., 2007; Cabras et al., 2010; Lefebvre et al., 2005). They identify, classify, and segment data containing arbitrary contacts between two uncertain polyhedral or curved objects using pose and wrench measurements. Whereas previous methods required approximate geometric models of the polyhedra, Slaets et al. identify arbitrary unknown polyhedra at runtime (Slaets et al., 2007). Although polyhedra are useful to approximate many tasks, these methods are fundamentally limited in modeling rotations.

Alternative methods omit noise models, and fit constraint models of a specific class directly to data, identifying the constraint geometry in the process. Such models are specified manually, and when several candidate models are proposed, the best-fitting model is chosen. For example, several authors identify one of three candidate models (fixed, prismatic, and revolute joints) using kinematic measurements (Sturm et al., 2010; Niekum et al., 2015; Hausman et al., 2015). Subramani et al. identify six candidate models using kinematic measurements (Subramani et al., 2018). They later expand their method with force information and identify eight candidate models (Subramani et al., 2020).

Some methods do not require specific constraint models but use more versatile representations to capture multiple constraints. Sturm et al. fit Gaussian processes in configuration space, but the identified Gaussian process kernel parameters are not straightforward to interpret (Sturm et al., 2011). Mousavi Mohammadi et al. identify “task frames” instead of identifying constraints explicitly (Mousavi Mohammadi et al., 2024). Such frames conveniently describe contact tasks and thereby often align with constraint geometry, but lack constraint classification (Bruyninckx and De Schutter, 1996). They use velocities and/or forces in a multi-step decision process to identify frame properties that result in low or constant velocities and forces with minimal uncertainty. Van der Walt et al. identify constraints from kinematic data by fitting points, lines, planes, and their higher-dimensional equivalents in six-dimensional linear and angular velocity space, after which they select the best-fitting model (Van der Walt et al., 2025).

Because kinematic constraints occur often in robot manipulation tasks, we believe more versatile constraint identification is an important step to advance the applicability of robots in the real world. However, the literature on constraint identification shows two common limitations. First, methods in the literature often require specific prior constraint models and estimates of geometry and noise. In everyday robot settings where many constraints can be encountered, it may be challenging to exhaustively define such models and estimates. Second, various methods require force measurements, which may not be available. This work overcomes both limitations by identifying constraint frames from kinematic data. First, the method can identify a wide variety of constraints without specific prior constraint models, or estimates of geometry or noise. The method is used to identify articulated/mechanism contacts, polyhedral contacts, and contour following contacts. Second, the method only requires kinematic measurements. Therefore, our method can be applied to various robots in everyday settings, without prior information about the environment or task.

The method offers two more useful features. First, the identified constraint frames can be used directly for task reproduction since the constraints are expressed in a task-relevant reference frame. Second, the number of identified parameters is fixed, in contrast to methods that use an increasing number of parameters for each added candidate model.

This paper is organized as follows: Section 2 discusses preliminaries on rigid body kinematics. Section 3 proposes to specify constraints using constraint frames. Section 4 proposes an optimization problem to identify constraint frames from kinematic data. Section 5 evaluates the method on experimental data from simulation and real robot task demonstrations. Section 6 reproduces robot tasks using the identified constraint frames. Section 7 discusses the results and draws conclusions.

2 Preliminaries on kinematics

This section contains preliminaries on rigid body kinematics (Lynch and Park, 2017). Motion between two rigid bodies can be represented by the transformation between two reference frames, one frame rigidly attached to each body. Such reference frames have a position and orientation in space, which can change with body motion. The transformation between two reference frames in three-dimensional space can be parameterized by the distance between their positions, $p \in \mathbb{R}^3$, and a rotation matrix between their orientations, $R \in SO(3)$. These components can be combined in a homogeneous transformation matrix

$$T = \begin{bmatrix} R & p \\ 0_{1\times3} & 1 \end{bmatrix} \in SE(3),$$

which represents the pose (orientation R and position p) of one frame relative to the other.
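As a concrete illustration (a minimal sketch of ours, not the authors' code), such a matrix can be assembled with NumPy:

    import numpy as np

    def pose_matrix(R: np.ndarray, p: np.ndarray) -> np.ndarray:
        """Assemble a homogeneous transformation T in SE(3) from R in SO(3) and p in R^3."""
        T = np.eye(4)
        T[:3, :3] = R  # relative orientation
        T[:3, 3] = p   # relative position
        return T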

The velocity between two rigid bodies can be represented by a twist $V = [\omega^T, v^T]^T \in \mathbb{R}^6$, which contains an angular velocity $\omega \in \mathbb{R}^3$ and a linear velocity $v \in \mathbb{R}^3$. The twist must be expressed in a reference frame, denoted by a superscript, e.g., $V^b = [{\omega^b}^T, {v^b}^T]^T$ for reference frame $b$. If $T$ describes the pose of a body frame $b$ with respect to a ground frame $g$, the twist $V^b$ of the body with respect to the ground expressed in frame $b$ is related to $T$ by:

$$T^{-1}\dot{T} = [V^b]_\times = \begin{bmatrix} [\omega^b]_\times & v^b \\ 0_{1\times3} & 0 \end{bmatrix} \in se(3), \tag{1}$$

where $[\omega]_\times$ is the skew-symmetric matrix representation of $\omega = [\omega_1, \omega_2, \omega_3]^T$:

$$[\omega]_\times = \begin{bmatrix} 0 & -\omega_3 & \omega_2 \\ \omega_3 & 0 & -\omega_1 \\ -\omega_2 & \omega_1 & 0 \end{bmatrix} \in so(3).$$

Expressing the twist $V^b$ in a frame other than $b$ requires the pose matrix of $b$ with respect to the new frame. For example, if $R$ and $p$ again represent the pose of body frame $b$ with respect to the ground frame $g$, the transformation

$$V^g = \begin{bmatrix} R & 0_{3\times3} \\ [p]_\times R & R \end{bmatrix} V^b \tag{2}$$

expresses the twist $V^b$ in frame $g$.
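To make Equations 1 and 2 concrete, the following minimal Python/NumPy sketch (our illustration, not the authors' implementation) recovers a body twist from two consecutive poses and re-expresses it in the ground frame:

    import numpy as np
    from scipy.linalg import logm

    def skew(w):
        """Skew-symmetric matrix [w]x of a 3-vector w."""
        return np.array([[0.0, -w[2], w[1]],
                         [w[2], 0.0, -w[0]],
                         [-w[1], w[0], 0.0]])

    def body_twist(T, T_next, dt):
        """Body twist (w, v) from two consecutive poses, assuming a constant
        twist over the sample time dt (discrete form of Equation 1)."""
        V_mat = np.real(logm(np.linalg.inv(T) @ T_next)) / dt  # element of se(3)
        w = np.array([V_mat[2, 1], V_mat[0, 2], V_mat[1, 0]])  # entries of [w]x
        v = V_mat[:3, 3]
        return w, v

    def twist_to_ground(R, p, w_b, v_b):
        """Re-express a body twist in the ground frame (Equation 2)."""
        return R @ w_b, skew(p) @ (R @ w_b) + R @ v_b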

3 Constraint frames

This section introduces a method to specify constraints that result in constrained kinematics. This method is used in Section 4 for the inverse problem: identifying constraints from kinematic data.

3.1 Degrees of freedom

Kinematic constraints limit a robot’s motion. We consider Pfaffian constraints on the end effector twist $V$ of a robot with respect to the ground. The form of such constraints is

$$A^T V = 0_h, \tag{3}$$

where $A^T \in \mathbb{R}^{h \times 6}$ defines $h \leq 6$ constraint equations (Lynch and Park, 2017).

We restrict $A^T$ to be coordinate transformations of the form of Equation 2 that transform the velocities ω and/or v to one or more axes of some frame ξ that is to be determined. For example, the i-th constraint equation $A_i^T$ may restrict the j-th axis of ω when expressed in ξ, such that $A_i^T V = \omega_j^\xi = 0$. Constraints can similarly be applied to v.

We also allow the constraints on ω and v to be described from different frames. We denote the frame for ω as ξ = ϕ and the frame for v as ξ = ψ. Therefore, we consider constraints of the form:

$$\omega_n^\phi = 0, \qquad v_m^\psi = 0. \tag{4}$$

Here, axis n of ω is constrained in ϕ, and axis m of v is constrained in ψ. For either velocity, the number of constraints can be none, one, two, or three. We thus consider bilateral zero-velocity constraints along one or more axes of ω expressed in ϕ and of v expressed in ψ. Although most contact constraints are unilateral, they can be modeled as bilateral if contact is kept, which we assume in this work.

To specify whether the axes of $\omega^\phi$ are constrained according to Equation 4, we use an associated binary degree of freedom vector $d^\phi \in \{0, 1\}^3$. Here, 0 indicates that the associated axis is constrained and 1 indicates that it is unconstrained. The analogous situation applies to v, with $d^\psi \in \{0, 1\}^3$.

By summing the degree of freedom vector, $\sum_{n=1}^{3} d_n^\phi \in \{0, 1, 2, 3\}$, we obtain the physical degrees of freedom of ω in frame ϕ. When adding units, $d^\phi\ \mathrm{rad\,s^{-1}} = \operatorname{span}(\omega^\phi)$ gives a basis for ω, which represents all possible ω: either zero, lying on a line, on a plane, or in a volume.

Constraints can thus be defined by specifying $d^\phi$ and $d^\psi$. For example, a revolute joint can be specified by $d^\phi = [1, 0, 0]^T$ if the first axis of ϕ aligns with the rotation axis, and $d^\psi = [0, 0, 0]^T$ if the position of ψ is on the rotation axis.
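As a small hypothetical illustration, such a specification is just a pair of binary vectors:

    import numpy as np

    # Revolute joint (cf. Figure 2A): one free rotation about the first axis of phi,
    # and zero linear velocity at a frame psi positioned on the rotation axis.
    d_phi = np.array([1, 0, 0])  # per axis: 0 = constrained, 1 = unconstrained
    d_psi = np.array([0, 0, 0])
    dof = int(d_phi.sum() + d_psi.sum())  # total degrees of freedom: 1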

3.2 Frame type and geometry

So far, the frame ϕ in which to express constraints on ω, and the frame ψ in which to express constraints on v, have not been specified. Here, we require the frames to be attached to the ground body G or the robot body B, similar to Mousavi Mohammadi et al. (2024).

The v frame ψ must have its orientation constant in G or B, and its position constant in G or B. This results in four frame types, denoted by $\psi \in \{GG, GB, BG, BB\}$. For example, BG indicates that the frame’s orientation is constant in robot body B and its position is constant in ground body G. Figure 1 visualizes these frame types during motion.

Figure 1. Constraint frame identification following a two-dimensional task demonstration, e.g., by a human guiding the robot. The behavior of the constraint frame types $\psi \in \{GG, GB, BG, BB\}$ is shown at two instants (solid and dashed) during a demonstration of robot body B with respect to ground body G. Frames GG and BB are rigidly attached to their respective bodies. Frames GB and BG have constant orientation in one body, but constant position in the other. Only the transformations between ground frame g and body frame b are measured (black), and the method identifies the unknown (gray) constraint between the tool and the ground. In this example, the optimal frame is GB at the tip of the tool, because the linear velocity along one of its axes is always zero, which is not the case for velocities expressed in the other frames. In practice, the method performs a continuous optimization in three-dimensional space to find the frame position and orientation, and also identifies the angular velocity constraint frame.

Because the frames’ orientations and positions are constant in either one of the bodies, they can be parameterized by a constant rotation matrix $R^\psi$ and position vector $p^\psi$. For example, $R^{BG}$ denotes the constant rotation matrix between BG and a frame b on body B (Figure 1).

For the ω frame ϕ, only two types of constraint frames have to be considered, because ω is independent of the position of the frame it is expressed in, that is, $\omega^{GG} = \omega^{GB}$ and $\omega^{BG} = \omega^{BB}$. Therefore, the dependence on frame position can be omitted. The two frame types are $\phi \in \{G, B\}$, which are parameterized solely by a constant rotation matrix $R^\phi$. Because frames ϕ do not have a position, they are shown at convenient positions in figures.

3.3 Applied constraint frames

In summary, degrees of freedom $d^\phi$ and $d^\psi$ specify whether axes are constrained; types $\phi \in \{G, B\}$ and $\psi \in \{GG, GB, BG, BB\}$ specify what the frames are attached to; and $R^\phi$, $R^\psi$, and $p^\psi$ specify the geometry of the constraints. The combination of degrees of freedom and frame types can be considered the class of a constraint. Table 1 summarizes these properties and the constant parameters that specify them. Figure 2 and Table 2 show twelve (not exhaustive) example constraints that can be specified using this method.

Table 1. Parameters of the proposed method.

Figure 2. Visualization of twelve (not exhaustive) constraints that can be captured by the method. Each constraint (A–L) is described in Table 2. The left-hand figures are planar, where the rotation axis (y-axis) is normal to the plane. Colors (Table 2) indicate the linear velocity constraint frame type ψ, where GG is yellow, GB is green, BG is red, and BB is blue. For conciseness, angular velocity constraint frames ϕ overlap exactly with the linear velocity constraint frames ψ, such that $R^\phi = R^\psi$. If a frame at the second time instant (dashed) is not shown, it is the same as the first. The linear velocity degrees of freedom $d^\psi$ are marked along the frame axes by an arrowhead (unconstrained) or no arrowhead (constrained). The unconstrained angular velocity degrees of freedom $d^\phi$ are marked by a rotational vector.

The presented method does not uniquely define constraints, for three reasons. First, the ordering of the axes is undefined. For example, $d = [1, 0, 0]^T$ and $R = [r_1, r_2, r_3]$ define the same constraint as $d = [0, 1, 0]^T$ and $R = [r_2, r_1, r_3]$. Second, the positive or negative directions of the axes are irrelevant because we consider bilateral constraints. Therefore, the basis vectors in $R = [\pm r_1, \pm r_2, \pm r_3]$ can have either sign. Third, (parts of) R are always free to choose, as shown in Table 2.

Table 2. Parameters of the example constraints of Figure 2.

In addition, two types of redundancies can occur, depending on the constraint. First, (parts of) the frame position $p^\psi$ may be free to choose. For example, the frame ψ of a cylinder joint (Figure 2C) may lie anywhere on its rotation axis. Second, multiple frame types may lead to the same constraints. For example, any of the four frame types of ψ can be used to specify a revolute joint (Figure 2A).

For simplicity, we consider angular velocity constraints where either type $\phi \in \{G, B\}$ leads to the same constraint, i.e., when $\sum_{n=1}^{3} d_n^\phi \neq 2$.

4 Constraint frame identification

Section 3 introduced a method to specify constraints that result in constrained kinematics. This section considers the inverse problem: identifying constraints from constrained kinematic data. Constraint identification is first defined as an optimization problem for angular velocity ω, followed by a second optimization problem for linear velocity v.

Section 3 specified constraints on ω through a constraint frame ϕ with a degree of freedom vector $d^\phi$. The inverse problem, given measured ω, is to find some ϕ that results in some $d^\phi$. To resolve this ambiguity, we aim to find the ϕ in which $\omega^\phi$ has the minimum number of degrees of freedom and thus the maximum number of constraints. Thus, we define our constraint frame identification as:

$$\min_\phi \sum_{n=1}^{3} d_n^\phi \in \{0, \ldots, 3\}. \tag{5}$$

Thereby, a minimal representation of ω is found by expressing it as $\omega^\phi$.

Measuring ω at samples $k \in \{1, 2, \ldots, K\}$ yields a $3 \times K$ matrix $\Omega$ with rows $\omega_n$, columns $\omega[k]$, and entries $\omega_n[k]$. If the constraint condition of Equation 4 holds for an axis n, then $\omega_n^\phi[k] = 0$ should hold for all samples k. To assess the velocity magnitude in measurement data we use the scaled 2-norm, or root mean square (RMS), over all samples of an axis n:

$$\lambda_n^\phi = \left\| \omega_n^\phi \right\|_2,$$

as defined by the scaled p-norm with p = 2:

$$\|x\|_p = \left( \frac{1}{K} \sum_{k=1}^{K} \left| x[k] \right|^p \right)^{1/p},$$

for a signal x with samples $x[k]$ (Bullen, 2003).

With noiseless measurements, the constraint condition (Equation 4) can then be tested by $d_n^\phi = 0$ if $\lambda_n^\phi = 0$ and $d_n^\phi = 1$ if $\lambda_n^\phi > 0$. When noise is present, however, noise is added to the zero-velocity constraint and the constraint condition (Equation 4) becomes

$$\omega_n^\phi[k] \sim \mathcal{N}(0, \sigma^2),$$

assuming normally distributed noise $\mathcal{N}$ with variance $\sigma^2$. Therefore, the norm $\lambda_n^\phi$ is not bounded from below by 0, but by the noise standard deviation σ, which is the RMS of a normally distributed discrete signal with zero mean. Because $\lambda_n^\phi \geq \sigma > 0$, evaluating $d_n^\phi$ will always lead to $d_n^\phi = 1$, and an optimization of Equation 5 has no gradient.

Instead of minimizing over $d_n^\phi$, we minimize $\lambda_n^\phi$ directly as a continuous measure of a signal’s degree of freedom. We aim to find a frame ϕ in which the RMS $\lambda_n^\phi$ of each axis in $\lambda^\phi \in \mathbb{R}^3$ is minimized. The values in $\lambda^\phi$ are bounded from below by σ if constraints are present, but are larger otherwise. Because we are interested in finding those constraints, we prefer finding small individual values in $\lambda^\phi$ rather than larger values.

Therefore, we use a p-norm over the three axes of $\lambda^\phi$:

$$\min_\phi \left\| \lambda^\phi \right\|_p,$$

but with $-\infty \leq p < 1$, for which the minimization is more sensitive to small individual values in $\lambda^\phi$, in contrast to $p > 1$. Some values of p yield special simplified expressions (Bullen, 2003). For example, p = 1 yields the mean of $\lambda^\phi$, which is equally sensitive to all values in $\lambda^\phi$ regardless of size. The limit $p = -\infty$ yields the minimum of $\lambda^\phi$, which is only sensitive to the smallest value in $\lambda^\phi$. Here we use p = 0, which yields the following simplified expression:

$$\min_\phi \left\| \lambda^\phi \right\|_{p=0} = \min_\phi \left( \prod_{n=1}^{3} \lambda_n^\phi \right)^{1/3},$$

also known as the geometric mean (Bullen, 2003). This results in a quasinorm, because not all conditions for a norm are satisfied. By substituting the 2-norms (RMS) and the 0-quasinorm (geometric mean) into one entry-wise (2,0)-quasinorm, we obtain

$$\min_\phi \left\| \Omega^\phi \right\|_{2,0} = \min_\phi \left( \prod_{n=1}^{3} \left( \frac{1}{K} \sum_{k=1}^{K} \omega_n^\phi[k]^2 \right)^{\frac{1}{2}} \right)^{\frac{1}{3}}. \tag{6}$$
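As an implementation sketch of Equation 6 (ours, under the definitions above):

    import numpy as np

    def quasinorm_20(X: np.ndarray) -> float:
        """Entry-wise (2,0)-quasinorm of a 3xK velocity matrix (Equation 6): the
        geometric mean over the three axes of the per-axis RMS values."""
        rms = np.sqrt(np.mean(X ** 2, axis=1))     # scaled 2-norm (RMS) per axis
        return float(np.prod(rms) ** (1.0 / 3.0))  # 0-quasinorm: geometric mean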

Section 3 proposed two options for $\phi \in \{G, B\}$, both parameterized by a constant $R^\phi$, and thus two optimization problems. The measurement matrix Ω can be expressed in frame ϕ using the coordinate transformation of Equation 2, yielding

$$\min_{R^\phi} \left\| R^\phi \Omega \right\|_{2,0}, \tag{7}$$

where the first optimization ($\phi = B$) uses $\Omega = \Omega^b$ and the second optimization ($\phi = G$) uses $\Omega = \Omega^g$.
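Using this quasinorm, the objective of Equation 7 for one frame type can be sketched as follows, with the rotation-vector parameterization described in Section 5.4:

    import numpy as np
    from scipy.spatial.transform import Rotation

    def objective_omega(rotvec: np.ndarray, Omega: np.ndarray) -> float:
        """Equation 7: express the 3xK angular velocities in a candidate frame phi,
        parameterized by a rotation vector, and evaluate the (2,0)-quasinorm."""
        R_phi = Rotation.from_rotvec(rotvec).as_matrix()
        return quasinorm_20(R_phi @ Omega)  # quasinorm_20 as sketched above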

This method can also be applied to the linear velocity v, with a measured $3 \times K$ matrix $V$:

$$\min_\psi \left\| V^\psi \right\|_{2,0}.$$

Section 3 proposed four options for $\psi \in \{GG, GB, BG, BB\}$, all parameterized by constant $R^\psi$ and $p^\psi$, and thus four optimization problems. The measured V can be expressed in frame ψ using the coordinate transformation of Equation 2, yielding

$$\min_{R^\psi, p^\psi} \left\| R^\psi \bar{R} \bar{V} + [p^\psi]_\times \bar{\Omega} \right\|_{2,0}, \tag{8}$$

where $\bar{\Omega}$, $\bar{V}$, and $\bar{R}$ are given in Table 3, depending on which of the four frame types they are expressed in.
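A corresponding sketch of the Equation 8 objective, assuming the frame-type-dependent quantities $\bar{\Omega}$, $\bar{V}$, and $\bar{R}$ of Table 3 are given, and reusing skew() and quasinorm_20() from the earlier sketches:

    import numpy as np
    from scipy.spatial.transform import Rotation

    def objective_v(params: np.ndarray, Omega_bar, V_bar, R_bar) -> float:
        """Equation 8: params stacks a rotation vector for R_psi and a position p_psi."""
        R_psi = Rotation.from_rotvec(params[:3]).as_matrix()
        p_psi = params[3:]
        return quasinorm_20(R_psi @ R_bar @ V_bar + skew(p_psi) @ Omega_bar)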

Table 3. Kinematic variables to transform (Equation 8).

Because the ground-truth constraint frame types ϕ and ψ will not be known beforehand, both optimizations of Equation 7 and all four of Equation 8 must be done. Then, ϕ and ψ can be classified by choosing the type with the lowest minimized quasinorm. Furthermore, the binary degrees of freedom vectors d can be classified by thresholding λ.
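As a hypothetical sketch of this classification step, with results holding the minimized quasinorm and per-axis RMS values for each candidate frame type, and threshold chosen as in Section 5.6:

    # Pick the frame type with the lowest minimized quasinorm, then threshold the
    # per-axis RMS values to obtain the binary degree of freedom vector d.
    best_type = min(results, key=lambda t: results[t]["norm"])  # e.g., "GB"
    lam = results[best_type]["lambda"]                          # shape (3,)
    d = (lam > threshold).astype(int)                           # 0 = constrained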

The inputs to the method are twists $V[k]$, containing $\omega[k]$ in $\Omega$ and $v[k]$ in $V$, and poses $T[k]$, containing $R[k]$ and $p[k]$. The outputs are one set of the constraint frame parameters of Table 1: degrees of freedom, type, and geometry.

5 Evaluation

To evaluate the identification method, we simulated kinematic data for all twelve constraints of Figure 2 and gathered experimental robot data for five constraints. For the simulation experiments we used prior knowledge of the constraint frame types ϕ and ψ, to test how identification accuracy is affected by noise added to the poses $T[k]$, and consequently the twists $V[k]$. For the robot experiments we used no prior knowledge of the frame type. We classified the constraint frame types and identified the constraint geometry, with real-life noise.

5.1 Simulation experiments

For the simulation experiments, we first generated constrained end effector poses $T[k]$ using Python 3.11 and NumPy 1.26. The velocities were $\omega^\phi[k] = d^\phi \odot 0.5\,[1 + \sin(2k\Delta t),\ 1,\ \cos(3k\Delta t)]^T\ \mathrm{rad\,s^{-1}}$ and $v^\psi[k] = d^\psi \odot 0.1\,[1,\ \cos(2k\Delta t),\ \sin(3k\Delta t)]^T\ \mathrm{m\,s^{-1}}$, where $\odot$ is element-wise multiplication and Δt is the sample time. Therefore, if $d^\phi$ and $d^\psi$ specify that an axis is constrained, the velocity along that axis will be zero. Different ϕ and ψ then result in all constraints of Figure 2. The end effector twist $V^b[k]$ was calculated using the transformation of Equation 2.

Poses $T[k]$ were obtained using the discrete-time version of Equation 1 for a constant $V^b[k]$ between samples (Lynch and Park, 2017):

$$T[k+1] = T[k]\,\exp\!\left( [V^b[k]]_\times \Delta t \right). \tag{9}$$

Discrete time ran for 5 s with sample time Δt = 0.1 s; an example is shown in Figure 3A. To mimic real-world experiments, we introduced noise to $T[k]$. Normally distributed noise with standard deviation $\sigma_p$ was added to the position $p[k]$ in $T[k]$.
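A minimal sketch of this integration step (ours, reusing skew() from the Section 2 sketch):

    import numpy as np
    from scipy.linalg import expm

    def integrate_pose(T: np.ndarray, w_b: np.ndarray, v_b: np.ndarray, dt: float) -> np.ndarray:
        """One step of Equation 9, with the body twist held constant over dt."""
        V_mat = np.zeros((4, 4))   # element of se(3)
        V_mat[:3, :3] = skew(w_b)
        V_mat[:3, 3] = v_b
        return T @ expm(V_mat * dt)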

Figure 3. (A) Constraint frame identification of a plane-plane constraint (Figure 2D) in simulation experiments. (B) Constraint frame identification of a revolute joint (Figure 2A) in robot experiments. (C) Reproduction of motion in a prismatic joint (Figure 2B) by applying a force f along the identified free axis of the constraint frame containing error $\Delta R^\psi$; the reaction forces $\bar{f}$ are due to the real-world constraint. The setup in (B,C) can be configured to constrain specific axes. Measured end effector poses are shown as black dots with red-green-blue axes. Errors in the identified constraint frame geometry are shown enlarged in red with respect to the purple ground-truth geometry: perpendicular axis in (A), rotation axis in (B), and translation axis in (C).

Normally distributed noise with standard deviation $\sigma_R$ was added to the rotation vector representation $\theta[k] \in \mathbb{R}^3$ of $R[k]$ in $T[k]$, related by $[\theta[k]]_\times = \ln R[k]$. We assumed that the noise on the position ($\sigma_p$) and rotation ($\sigma_R$) components was the result of one noise source with standard deviation σ, such that $\sigma_p = \sigma$ and $\sigma_R = 3\sigma\ \mathrm{m}^{-1}$. Therefore, varying σ affects both the position and rotation components in the noisy poses $T[k]$. The three-times larger effect of σ on $\sigma_R$ was estimated by measuring noise on the robot pose at standstill.

5.2 Robot experiments

To test the method on real-world data, a Franka Research 3 (Franka Robotics, Munich, Germany) was used to collect constrained end effector poses $T[k]$; an example is shown in Figure 3B. The experimenter physically moved the end effector through space in low-impedance guiding mode, which is part of the robot’s default control options, while the end effector was bolted to a constrained mechanical setup. While interacting with a user in guiding mode, the robot complied with the Franka safety considerations. These consist of limits on all motor positions, velocities, accelerations, jerks, torques, and torque derivatives. The robot stops upon violation of these limits.

Poses $T[k]$ of five constraints (Figures 2A, B, D, I, K) were collected over ten trials per constraint for 5 s at Δt = 0.05 s. The experimenter attempted to track the poses of the simulations.

5.3 Error metrics for evaluation

Section 3.3 noted that the geometric parameters $R^\phi$, $R^\psi$, and $p^\psi$ may contain redundancies depending on the constraint. The error metrics therefore account for these redundancies.

Section 3.3 and Table 2 noted that $R^\phi$ and $R^\psi$ each contain either zero or one unique basis vector. If there are no unique basis vectors, the error is irrelevant. If there is one unique basis vector l, the error metric is the angle $\Delta R = \operatorname{acos}(r_l \cdot \hat{r}_l) \in \mathbb{R}$ between the ground-truth basis vector $r_l$ and the estimated basis vector $\hat{r}_l$. The angle ΔR was wrapped at ±90°, as the positive and negative axis directions are irrelevant. For the error metric of $p^\psi$, the Euclidean distance from the ground truth, $\Delta p^\psi = \| p^\psi - \hat{p}^\psi \|_2$, was used. Only the unique components of $p^\psi - \hat{p}^\psi$, as shown in Table 2, were used.
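For illustration, the wrapped axis-angle error can be computed as follows (a sketch under the definitions above):

    import numpy as np

    def axis_angle_error(r_true: np.ndarray, r_est: np.ndarray) -> float:
        """Angle in degrees between ground-truth and estimated basis vectors; the
        absolute value wraps the angle at +/-90 deg, since axis signs are irrelevant."""
        c = abs(np.clip(np.dot(r_true, r_est), -1.0, 1.0))
        return float(np.degrees(np.arccos(c)))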

The ground-truth constraint frame geometries were $R^\phi = R^\psi = I_{3\times3}$, with $p^\psi = [0.1, 0.3, 0.1]^T$ m for the simulation experiments and $p^\psi = [0.0, 0.0, 0.244]^T$ m for the robot experiments.

5.4 Implementation

Given experimental poses $T[k]$, noisy twists $V^b[k]$ were computed from Equation 9 by solving for $V^b[k]$. The optimizations of Equation 7 and Equation 8 were implemented in Python using NumPy and the SciPy 1.10 dual annealing global optimization algorithm with 100 maximum iterations and default settings otherwise (Xiang et al., 2013). Our code is available online¹.

The rotation matrices $R^\phi$ and $R^\psi$ were parameterized by their rotation vector representations and bounded to ±180°. The bounds on $p^\psi$ were ±1 m. Initial guesses were randomized. For the simulation data, optimizations were done with added noise $\sigma \in [10^{-6}, 10^{-4}]$ m, spaced exponentially at 10 values. For the robot data, optimizations were done without added noise. One optimization of Equation 7 or Equation 8 for a single frame type took approximately 2 s on an Intel Core i5-6600 CPU with 8 GB RAM.
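A minimal sketch of the Equation 7 optimization with SciPy's dual annealing, assuming Omega_b holds the measured 3×K body angular velocities and objective_omega is defined as in the Section 4 sketch:

    import numpy as np
    from scipy.optimize import dual_annealing
    from scipy.spatial.transform import Rotation

    bounds = [(-np.pi, np.pi)] * 3  # rotation vector bounded to +/-180 deg
    result = dual_annealing(objective_omega, bounds=bounds, args=(Omega_b,), maxiter=100)
    R_phi_hat = Rotation.from_rotvec(result.x).as_matrix()  # identified R_phi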

5.5 Simulation experiment results

The geometric errors ($\Delta R^\phi$, $\Delta R^\psi$, $\Delta p^\psi$) relate to the noise σ with approximately unit slope (Figure 4) for all constraints except I, J, K, and L, whose errors ($\Delta R^\psi$, $\Delta p^\psi$) are constant with increasing noise. Constraints with fewer degrees of freedom are generally identified more accurately.

Figure 4. Sensitivities of the geometric identification errors ($\Delta R^\phi$, $\Delta R^\psi$, $\Delta p^\psi$) to noise (σ) in simulation experiments for all constraints of Figure 2.

5.6 Robot experiment results

Identifying constraint frames in demonstrations resulted in several candidate frame types (left, Table 4), each having associated degrees of freedom (middle, Table 4) and geometry (right, Table 4).

Table 4. Identified constraint frames of robot demonstrations.

Section 4 noted that the optimal frame type may be classified as the one with the lowest norm of Equation 7 or Equation 8. For constraints where any frame type is valid, the norms are expected to be similar between frame types. This holds for all ϕ and ψ in A, but not for ψ in B and D, which have the lowest norm in GB. For constraints where only one frame type is valid (GB in I and K), the norms of the ground-truth types are approximately half those of the other types. Therefore, frame types were classified correctly by choosing the frame with the lowest minimized norm.

After classifying the optimal frame type, the associated degrees of freedom and geometry can be evaluated (middle, right, Table 4). If any threshold between 60 and 260 mrad s⁻¹ is applied to all $\lambda^\phi$, and any between 6 and 68 mm s⁻¹ to all $\lambda^\psi$, the degrees of freedom of all five constraints (Table 2) are classified correctly. Suitable thresholds may be chosen at the midpoints of these ranges: 160 mrad s⁻¹ for $\lambda^\phi$ and 37 mm s⁻¹ for $\lambda^\psi$. These midpoint thresholds are at least 5 standard deviations away from the means of $\lambda^\phi$ and $\lambda^\psi$. The errors of the identified geometry are within $\Delta R^\phi < 1°$, $\Delta R^\psi < 2.5°$, and $\Delta p^\psi < 8$ mm.

6 Application to robot task reproduction

This section illustrates how robot tasks can be reproduced using control expressed in the identified constraint frames of Section 4, and how their identification errors ($\Delta R^\phi$, $\Delta R^\psi$, $\Delta p^\psi$) may influence task performance. The Franka robot reproduced three straightforward endstop-to-endstop tasks involving constraints A, B, and I, enforced by the mechanical setup from Section 5.2. An experiment is visualized in Figure 3C. We collected five trials for each task at a 0.05 s sample time and clipped each trial to 5 s.

6.1 Reproduction control

To reproduce the tasks, we used simple control based on the identified constraint frames from the experiments of Table 4, thereby illustrating simple LfD. We sent desired motor torques $\tau \in \mathbb{R}^7$ to the Franka torque controller based on constant desired wrenches (moments $m \in \mathbb{R}^3$ and forces $f \in \mathbb{R}^3$) expressed in the identified constraint frames. The torques and wrenches are related by $\tau = J^T [f^T, m^T]^T$, where $J \in \mathbb{R}^{6\times7}$ is the robot’s pose-dependent Jacobian (Lynch and Park, 2017).
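As a sketch of this mapping (ours, with the Jacobian assumed given by the robot model):

    import numpy as np

    def wrench_to_torques(J: np.ndarray, f: np.ndarray, m: np.ndarray) -> np.ndarray:
        """Joint torques from a desired end effector wrench: tau = J^T [f; m]."""
        return J.T @ np.concatenate([f, m])  # J: 6x7 pose-dependent Jacobian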

Desired motor torques were sent to the robot using the Robot Operating System (ROS, version Noetic Ninjemys) from Ubuntu 20.04 with the franka_ros package. During operation, the user and bystanders were outside the workspace of the robot, and the user monitored the task reproduction with the robot’s emergency stop in hand.

Desired wrenches were set to constant values (3.5 N m and 7 N) along the free axes of the identified constraint frames, and to zero (0 N m and 0 N) along the constrained axes. Therefore, the control for constrained task A applied a pure moment using the identified $R^\phi$. The control for constrained task B applied a pure force using the identified $R^\psi$. The control for constrained task I applied a moment and a force using the identified $R^\phi$, $R^\psi$, and $p^\psi$, but for this experiment we set $\Delta R^\phi = \Delta R^\psi = 0$ to isolate the effect of $\Delta p^\psi$.

6.2 Effects of identification accuracy

During constrained task reproduction there will be reaction wrenches along the constrained axes, which we consider undesirable in these experiments. Such reaction wrenches can be caused by, among other factors, the control applying wrenches based on imperfect constraint identification. Therefore, we compared the reaction wrenches for three cases: constraint frames used in the controller containing no errors ($\Delta R^\phi = \Delta R^\psi = \Delta p^\psi = 0$), identification errors from methods in the literature that use prior knowledge (De Schutter et al., 1999; Niekum et al., 2015), and identification errors from our method without prior knowledge of the constraint (Table 4).

From the measured interaction wrenches ($m[k]$ and $f[k]$) we determined the reaction wrenches (moments $\bar{m}[k]$ and forces $\bar{f}[k]$) along the ground-truth constrained axes (Figure 2), and determined their peak magnitudes $\bar{m}^*$ and $\bar{f}^*$, with $\bar{m}^* = \max_k \| \bar{m}[k] \|_2 \in \mathbb{R}$. Thereby, we obtain a metric of how constraint identification accuracy influences undesirable reaction wrenches in task reproduction. The results are shown in Figure 5. Larger identification errors (i.e., from our method) were expected to cause larger reaction wrenches, but this is not necessarily the case. In all reproductions, the physical endstops of the setup were reached.

Figure 5. The peak magnitude reaction wrenches ($\bar{f}^*$, $\bar{m}^*$) and their means and standard deviations over five reproductions of the tasks containing constraints (A–C). Colors represent the source of the errors in the constraint frame used in the control.

7 Discussion and conclusion

This work identifies constraints from kinematic data using constraint frames, consisting of a frame type that determines what body the frame is attached to, geometry that determines the orientation and position, and the degrees of freedom of velocities in that frame. First, frame geometries are identified by minimizing a norm on velocities in all frame types. Second, the optimal frame type is classified as the one with the lowest norm. Third, the degrees of freedom can be classified by thresholding the velocities in that frame.

7.1 Advantages

The method does not require force measurements, and can therefore be applied to any system that measures the positions and orientations of constrained objects or manipulators. Examples include other (mobile) robotic arms with different kinematic chains, humanoid robots, and mobile robots constrained by their environments. Besides robotics, the method may also be applied to motion tracking systems, for example, to monitor human movement or estimate human joints (Ancillao et al., 2022). While impedance control was used to collect pose data and reproduce tasks, the constraint identification itself does not depend on force measurements. Because force measurements can improve identification accuracy, incorporating them is a topic of future work (De Schutter et al., 1999; Subramani et al., 2020). Moreover, correct kinematics-based constraint identification requires that all true degrees of freedom are excited. Otherwise, there is no discernible difference between, e.g., a prismatic joint constraint and an unconstrained but straight-line translation, which have the same apparent degrees of freedom.

The method requires no specific prior constraint models, nor estimates of geometry. Hence, there is no need to manually estimate or define constraints, which may be challenging to do exhaustively for all constraints that can be encountered in everyday settings. The method can identify a wide variety of constraints, of which twelve common examples were shown. However, many more can be identified by enumerating over all possibilities, including constraints where the angular and linear velocity frames are not orthogonal.

The method identifies a fixed number of intuitive parameters (Table 1). Methods that fit specific constraint models must identify and choose from multiple candidate models, each with their own parameters. Therefore, our method may be more efficient when many different constraints must be considered, such as in everyday settings.

Our identified constraint frames can be directly interpreted as “task frames” from the literature, since both conveniently describe contact tasks (Bruyninckx and De Schutter, 1996; Mousavi Mohammadi et al., 2024). Although recent work on task frame identification by Mousavi Mohammadi et al. (2024) shares the above advantages, in this work we additionally classify constraints by discerning the degrees of freedom associated with our frames. Although we reproduced simple tasks, such frames can also be used to reproduce more complicated contact tasks with hybrid position and force control applied to the task frame axes (Conkey and Hermans, 2019; Ureche et al., 2015). Alternatively, impedance controller stiffness may be varied based on the identified constraints.

7.2 Limitations and future work

In this work, we applied our method to a window of kinematic data which we assume contains a single constraint. In future work, we will investigate how our method can be applied to data containing sequential constraints. Furthermore, we only considered single-contact constraints which can be modeled using a single constraint frame. Multiple-contact constraints, such as the two ends of a stick contacting separate planes, cannot be fully modeled using a single instance of the current method, since only one of the two contact points will be identified. However, a second instance of our method may identify a second contact point, if the second solution is (forced to be) distinct. Therefore, multiple contacts may be identified with multiple instances of our method, which is the topic of future work.

We defined constraint identification as a minimization problem and applied a general global-local optimization method. Improvements in accuracy and time efficiency may be achieved in several ways: by more efficient optimizer implementations, by choosing a different optimization method suited for our specific problem, or by tuning the optimization parameters. Furthermore, prior information may be useful for initialization. For example, a vision system may be used to identify a tool tip as prior information on constraint geometry.

Identifying constraint geometry does not require any parameter choice, but classifying the degrees of freedom requires two thresholds on RMS velocities, which can be chosen heuristically or empirically. Heuristically, thresholds may be chosen based on prior information, such as expected noise and expected velocities, which may differ between applications and measurement systems. If such prior information is available, classification may perform as expected without the need for threshold tuning. Empirically, thresholds may be chosen following demonstrations with known ground-truth constraints (Section 5). If such demonstrations are available and representative of future tasks, no explicit prior information about noise and velocities are needed, which may be more convenient depending on the application.

Geometric identification errors were found to scale linearly with noise in simulation experiments, and the exact scaling varies with the underlying constraint. Errors in robot experiments were larger than those in simulation experiments, which may be due to unmodeled factors such as structural compliance. Classification of the ground-truth constraint frame type was successful in all robot experiments. An analysis of the factors that influence identification and classification performance, such as the constraint, the optimizer, the relative direction and magnitude of motion, and the sample size, is outside the scope of this work.

Constraint identification methods in the literature report geometric identification accuracies using different metrics. In our method, we report the geometric parameter errors between points, lines, and planes. Methods that use similar metrics report similar errors: within 1–7.5 mm and 0.5–5 deg for point-on-plane contacts, prismatic, and revolute joints (De Schutter et al., 1999; Sturm et al., 2011; Mousavi Mohammadi et al., 2024). Other methods report the fitness between observations and their candidate constraint models (Sturm et al., 2011; Subramani et al., 2018; Subramani et al., 2020; van der Walt et al., 2025). Of such methods, the most accurate report sub-millimeter mean fit errors (Subramani et al., 2018). While such sub-millimeter mean model fit errors are not directly comparable to geometric parameter errors, they may correspond to sub-millimeter geometric errors.

While our method may be more versatile, it may yield larger geometric identification errors than methods that fit specific constraint models to data. Regardless, such larger errors (e.g., 5 mm instead of 0.5 mm) did not lead to substantially larger reaction wrenches in simple reproduction experiments with the default Franka controller (Section 6). Furthermore, similar errors have been shown to lead to acceptable performance in more complicated tasks in related work (Mousavi Mohammadi et al., 2024). The obtained accuracy may therefore be sufficient for such tasks, and other factors may have a greater effect on task performance, such as the task definition, environment, robot, controller, and definition of success. For example, control methods that are designed for compliant contact, such as impedance control, may be more robust to misidentified constraint geometry than non-compliant control. We aim to apply such compliant control methods to reproduce constrained tasks through LfD in future work.

7.3 Conclusion

This work identified constraint frames in robot tasks without prior knowledge of the constraints or tasks and without force measurements. Automatically modeling such robot-environment interactions, for example, in the context of Learning from Demonstration, may support versatile autonomous robot applications.

Data availability statement

The raw data supporting the conclusions of this article will be made available by the authors, without undue reservation.

Author contributions

AO: Conceptualization, Data curation, Formal Analysis, Investigation, Methodology, Project administration, Software, Supervision, Validation, Visualization, Writing – original draft, Writing – review and editing. DD: Supervision, Writing – review and editing. HvdK: Funding acquisition, Resources, Supervision, Writing – review and editing. MV: Conceptualization, Methodology, Project administration, Resources, Supervision, Writing – review and editing.

Funding

The author(s) declare that financial support was received for the research and/or publication of this article. This research was funded by the Dutch Ministry of Education, Culture, and Science through the Sectorplan Bèta en Techniek.

Acknowledgments

The authors would like to thank Arvid Q.L. Keemink and Gwenn Englebienne for their insight into sparse optimization. This work has previously appeared as a preprint (Overbeek et al., 2025).

Conflict of interest

The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

Generative AI statement

The author(s) declare that no Generative AI was used in the creation of this manuscript.

Publisher’s note

All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article, or claim that may be made by its manufacturer, is not guaranteed or endorsed by the publisher.

Footnotes

1https://github.com/ET-BE/ReFrameId

References

Ancillao, A., Verduyn, A., Vochten, M., Aertbeliën, E., and De Schutter, J. (2022). A novel procedure for knee flexion angle estimation based on functionally defined coordinate systems and independent of the marker landmarks. Int. J. Environ. Res. Public Health 20, 500. doi:10.3390/ijerph20010500

Bruyninckx, H., and De Schutter, J. (1993a). Kinematic models of rigid body interactions for compliant motion tasks in the presence of uncertainties. Proc. IEEE Int. Conf. Robotics Automation 1, 1007–1012. doi:10.1109/ROBOT.1993.292107

Bruyninckx, H., and De Schutter, J. (1996). Specification of force-controlled actions in the “task frame formalism”-a synthesis. IEEE Trans. robotics automation 12, 581–589. doi:10.1109/70.508440

Bruyninckx, H., De Schutter, J., and Dutre, S. (1993b). The “reciprocity” and “consistency” based approaches to uncertainty identification for compliant motions. Proc. IEEE Int. Conf. Robotics Automation 1, 349–354. doi:10.1109/ROBOT.1993.292006

Bullen, P. S. (2003). Handbook of means and their inequalities. Dordrecht, Netherlands: Springer.

Cabras, S., Castellanos, M. E., and Staffetti, E. (2010). Contact-state classification in human-demonstrated robot compliant motion tasks using the boosting algorithm. IEEE Trans. Syst. Man, Cybern. Part B Cybern. 40, 1372–1386. doi:10.1109/TSMCB.2009.2038492

Conkey, A., and Hermans, T. (2019). Learning task constraints from demonstration for hybrid force/position control. IEEE-RAS Int. Conf. Humanoid Robots 2019-Oct, 162–169. doi:10.1109/Humanoids43949.2019.9035013

de Schutter, J., Bruyninckx, H., Dutré, S., de Geeter, J., Katupitiya, J., Demey, S., et al. (1999). Estimating first-order geometric parameters and monitoring contact transitions during force-controlled compliant motion. Int. J. Robotics Res. 18, 1161–1184. doi:10.1177/02783649922067780

Hausman, K., Niekum, S., Osentoski, S., and Sukhatme, G. S. (2015). “Active articulation model estimation through interactive perception,” in 2015 IEEE international conference on robotics and automation (ICRA). Presented at the 2015 (IEEE International Conference on Robotics and Automation ICRA), 3305–3312. doi:10.1109/ICRA.2015.7139655

Holladay, R., Lozano-Pérez, T., and Rodriguez, A. (2021). Planning for multi-stage forceful manipulation. arXiv, 6556–6562. doi:10.1109/icra48506.2021.9561233

Jain, A., and Niekum, S. (2018). “Efficient hierarchical robot motion planning under uncertainty and hybrid dynamics,” in Proceedings of the 2nd conference on robot learning. Presented at the conference on robot learning (PMLR), 757–766.

Kroemer, O., Niekum, S., and Konidaris, G. (2020). A review of robot learning for manipulation: challenges, representations, and algorithms. arXiv:1907.03146.

Lefebvre, T., Bruyninckx, H., and Schutter, J. D. (2005). Online statistical model recognition and State estimation for autonomous compliant motion. IEEE Trans. Syst. Man, Cybern. Part C Appl. Rev. 35, 16–29. doi:10.1109/TSMCC.2004.840053

Li, X., Baum, M., and Brock, O. (2023). “Augmentation enables one-shot generalization in learning from demonstration for contact-rich manipulation,” in 2023 IEEE/RSJ international conference on intelligent robots and systems (IROS). Presented at the 2023 (IEEE/RSJ International Conference on Intelligent Robots and Systems IROS), 3656–3663. doi:10.1109/IROS55552.2023.10341625

Li, X., and Brock, O. (2022). Learning from demonstration based on environmental constraints. IEEE Robotics Automation Lett. 7, 10938–10945. doi:10.1109/LRA.2022.3196096

Lynch, K. M., and Park, F. C. (2017). Modern robotics: mechanics, planning, and control. Cambridge University Press.

Meeussen, W., Rutgeerts, J., Gadeyne, K., Bruyninckx, H., and De Schutter, J. (2007). Contact-state segmentation using particle filters for programming by human demonstration in compliant-motion tasks. IEEE Trans. Robotics 23, 218–231. doi:10.1109/tro.2007.892227

Mousavi Mohammadi, S. A., Vochten, M., Aertbeliën, E., and De Schutter, J. (2024). Automatic extraction of a task frame from human demonstrations for controlling robotic contact tasks. arXiv. doi:10.48550/arXiv.2404.01900

Niekum, S., Osentoski, S., Atkeson, C. G., and Barto, A. G. (2015). Online Bayesian changepoint detection for articulated motion models. Proc. - IEEE Int. Conf. Robotics Automation 2015-June, 1468–1475. doi:10.1109/ICRA.2015.7139383

Oriolo, G., and Vendittelli, M. (2009). “A control-based approach to task-constrained motion planning,” in 2009 IEEE/RSJ international conference on intelligent robots and systems. Presented at the 2009 IEEE/RSJ international conference on intelligent robots and systems, 297–302. doi:10.1109/IROS.2009.5354287

Overbeek, A. H. G., Dresscher, D., Van der Kooij, H., and Vlutters, M. (2025). Versatile kinematics-based constraint identification applied to robot task reproduction. Preprint. Available online at: https://research.utwente.nl/en/publications/versatile-kinematics-based-constraint-identification-applied-to-r (Accessed May 31, 2025).

Simonič, M., Ude, A., and Nemec, B. (2024). Hierarchical learning of robotic contact policies. Robotics Computer-Integrated Manuf. 86, 102657. doi:10.1016/j.rcim.2023.102657

Slaets, P., Lefebvre, T., Rutgeerts, J., Bruyninckx, H., and De Schutter, J. (2007). Incremental building of a polyhedral feature model for programming by human demonstration of force-controlled tasks. IEEE Trans. Robotics 23, 20–33. doi:10.1109/TRO.2006.886830

Sturm, J., Jain, A., Stachniss, C., Kemp, C. C., and Burgard, W. (2010). “Operating articulated objects based on experience,” in 2010 IEEE/RSJ international conference on intelligent robots and systems. Presented at the 2010 IEEE/RSJ international conference on intelligent robots and systems, Taipei, Taiwan, 2739–2744. doi:10.1109/IROS.2010.5653813

Sturm, J., Stachniss, C., and Burgard, W. (2011). A probabilistic framework for learning kinematic models of articulated objects. J. Artif. Intell. Res. 41, 477–526. doi:10.1613/jair.3229

Subramani, G., Hagenow, M., Gleicher, M., and Zinn, M. (2020). A method for constraint inference using pose and wrench measurements. arXiv. doi:10.48550/arXiv.2010.15916

Subramani, G., Zinn, M., and Gleicher, M. (2018). “Inferring geometric constraints in human demonstrations,” in Proceedings of the 2nd conference on robot learning. Presented at the conference on robot learning (Zürich, Switzerland: PMLR), 223–236.

Suomalainen, M., Abu-Dakka, F. J., and Kyrki, V. (2021). Imitation learning-based framework for learning 6-D linear compliant motions. Auton. Robot. 45, 389–405. doi:10.1007/s10514-021-09971-y

Ureche, A. L. P., Umezawa, K., Nakamura, Y., and Billard, A. (2015). Task parameterization using continuous constraints extracted from human demonstrations. IEEE Trans. Robotics 31, 1458–1471. doi:10.1109/TRO.2015.2495003

Van der Walt, C., Stramigioli, S., and Dresscher, D. (2025). Mechanical constraint identification in model-mediated teleoperation. IEEE Robotics Automation Lett. 10, 5777–5782. doi:10.1109/LRA.2025.3560893

Xiang, Y., Gubian, S., Suomela, B., and Hoeng, J. (2013). Generalized simulated annealing for global optimization: the GenSA package. R J. 5 (1), 13–29. doi:10.32614/RJ-2013-002

Keywords: constraint identification, physical constraints, constraint frames, contact modeling, robot manipulation, learning from demonstration, imitation learning

Citation: Overbeek AHG, Dresscher D, van der Kooij H and Vlutters M (2025) Versatile kinematics-based constraint identification applied to robot task reproduction. Front. Robot. AI 12:1574110. doi: 10.3389/frobt.2025.1574110

Received: 10 February 2025; Accepted: 02 June 2025;
Published: 15 July 2025.

Edited by:

Xiaopeng Zhao, The University of Tennessee, Knoxville, United States

Reviewed by:

Changchun Liu, Nanjing University of Aeronautics and Astronautics, China
Dler Salih Hasan, Salahaddin University, Iraq

Copyright © 2025 Overbeek, Dresscher, van der Kooij and Vlutters. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.

*Correspondence: Alex H. G. Overbeek, a.h.g.overbeek@utwente.nl
