Rethinking Motor Development and Learning
University of Miami, Electrical & Computer Engineering, USA
University of Miami, Psychology, USA
University of California San Diego, Computer Science and Engineering, USA
University of California San Diego, Institute for Neural Computation, USA
Sensorimotor systems generate movements that are compliant and non-repeatable, yet remarkably well-adapted to an unstructured, uncertain, and non-stationary world. By contrast, robotics has for the most part focused on simplifying the control problem by using stiff, highly geared actuators, emphasizing repeatability over compliance, and avoiding unstructured conditions. This approach worked well for industrial applications and has revolutionized the assembly line. However, in order to develop robotic technology that could transform daily life, it is critical to confront robotic problems that approximate the properties of the human body. The difficulty is that the complexity, compliance, and actuator dynamics of the human body render most of the control schemes used in practice inapplicable.
One obstacle to progress is the fact that motor control is typically studied in isolation. Examples of research niches include reaching, eye movements, crawling, grasping, and walking. Interestingly, this partition of the motor control problem does not match the pattern observed during development. Instead, even the earliest attempts to reach and grasp are accompanied by eye, head, face, leg, and body movements. When observing the reaching movements of 4-month-old infants, one gets the impression that they are "reaching" with their legs as much as they are reaching with their arms and hands. Based on these ideas, we have been pursuing a project whose goal is to obtain a better computational understanding of how infants learn to control their bodies. One component of this project focuses on the development of a sophisticated humanoid robot named Diego that approximates the complexity, compliance, and control dynamics of the human body. The other component, which is the focus of this contribution, centers on the use of motion capture technology to understand the development of motor control in the context of physical and social interaction. While the use of motion capture has recently become popular in the motor development literature, a unique aspect of our work is the attempt to simultaneously capture the motion of the entire body (arms, legs, trunk, and head) of the infant while also tracking the mother. Here we describe how our first steps towards capturing such data are already changing our perspective on the development of reaching.
B. Findings - Motion Energy Correlation Analysis
We first asked whether infants typically move one joint at a time or multiple joints at the same time during a reach. A simple correlational tool was developed to perform these analyses. First, the instantaneous motion energy of each marker is estimated by squaring the displacement between adjacent frames. A displacement is counted only when a marker is visible in both frames and the displacement is reasonable (speed < 24 m/s), so that missing markers and jitter are excluded. The values are normalized across the entire session. Second, the temporal correlations between the average motion energies of marker groups are computed over the entire session. If two limbs move at the same time, regardless of the direction of movement, there will be a high correlation between the corresponding markers.
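The two steps above can be sketched in a few lines of NumPy. This is a minimal illustration, not our analysis pipeline: the marker trajectories here are synthetic, and the frame rate and marker grouping are hypothetical placeholders; only the squared-displacement energy, the visibility/speed filter, the session-wide normalization, and the group-average correlation follow the description in the text.

```python
import numpy as np

# Hypothetical data: (n_frames, n_markers, 3) positions in metres, with NaN
# where a marker was occluded. The 100 Hz frame rate is an assumption; the
# 24 m/s speed cutoff is the value described in the text.
rng = np.random.default_rng(0)
fps = 100.0
positions = np.cumsum(rng.normal(0.0, 0.001, (500, 4, 3)), axis=0)
positions[100:120, 2] = np.nan  # simulate a dropped marker

# Step 1: instantaneous motion energy = squared inter-frame displacement,
# counted only when the marker is visible in both frames and the implied
# speed is plausible (this drops missing markers and jitter spikes).
disp = np.linalg.norm(np.diff(positions, axis=0), axis=2)  # (n_frames-1, n_markers)
valid = ~np.isnan(disp) & (disp * fps < 24.0)
energy = np.where(valid, disp ** 2, np.nan)

# Normalize each marker's energy across the entire session.
energy = (energy - np.nanmean(energy, axis=0)) / np.nanstd(energy, axis=0)

# Step 2: average the energy within marker groups (grouping is hypothetical),
# then correlate the group time series over the session.
groups = {"left_arm": [0, 1], "right_arm": [2, 3]}
series = {name: np.nanmean(energy[:, idx], axis=1) for name, idx in groups.items()}

a, b = series["left_arm"], series["right_arm"]
ok = ~np.isnan(a) & ~np.isnan(b)
r = np.corrcoef(a[ok], b[ok])[0, 1]
print(f"inter-limb motion-energy correlation: {r:.2f}")
```

Because the correlation is computed on motion *energy* rather than on displacement vectors, two limbs that move simultaneously but in different directions still register as correlated, which is exactly the property the analysis needs.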
Since markers on the same limb move together as a group, high correlations are observed within the following groups: head, left arm, right arm, left leg (first 2 leg markers), and right leg (last 3 markers). Surprisingly, high correlations were also observed between markers on different limbs (e.g., 0.4 between the arms). To explore the source of this high inter-limb correlation, we plotted the motion energy over time. For simplicity, the markers have been grouped by their corresponding limbs and group averages are reported.
A critical limitation of the literature on motor development is the fact that it is carved into isolated research niches (e.g., reaching, crawling, and walking), while research on shared attention and facial expressions is handled by still other specialties. Each research niche is typically focused on how one part of the body performs a single task (e.g., hands and arms reaching for an object). While this research strategy is reasonable, it may result in a distorted perspective of how infants really learn to control their bodies. Here we presented our first steps to explore this issue. These first steps required the development of technology to jointly capture the motion of the limbs, trunk, head, and facial expressions of infants and their caregivers. We have described the obstacles we faced and some of the solutions we have found so far.
The preliminary data obtained are already presenting a different perspective from the one available in the literature. For example, current work on the development of reaching emphasizes the fact that infants reach with very little movement in the elbow joint when compared to adults [2, 3]. This is interpreted as evidence of a developmental trajectory from fewer degrees of freedom to more degrees of freedom.
Our motion capture data suggest that during early reaching episodes infants not only move their arms and hands but also engage their legs, head, and face. We are also finding that the physical and social contexts of motor development are tightly coupled. The result is that behavioral categories that are natural from an adult perspective may be artificial from an infant's perspective. For example, when a caregiver is present, making facial expressions, vocalizing, or moving the legs may be as effective a means of making contact with an interesting object as reaching with the arms and hands. Thus, it could be argued that moving the legs or making facial expressions should be a legitimate part of the literature on the development of reaching.
This phenomenon is reminiscent of the work of Karl Sims, who used genetic algorithms to optimize simulated creatures for the task of swimming. The striking outcome of these simulations was the emergence of a life-like diversity of behaviors pursuing a common goal (propel the center of mass forward). Every arm, leg, tentacle, and other hard-to-name body part of the simulated creatures ended up moving in such a way that the entire creature made progress towards the goal. Our work is still clearly preliminary and our observations are only anecdotal. Yet the approach and the technologies we are exploring are already contributing a different perspective on motor development. This perspective may give us clues to develop a new generation of robots that learn to control their own bodies.
Infants face a formidable control problem whose solution eludes the most sophisticated contemporary approaches to robotics and artificial intelligence. Yet infants solve this problem seamlessly in the first years of life. Understanding how this is done and reproducing this process in robots may have profound scientific and technological consequences.
Thanks to all of the families who participated in this project. This research is supported by the National Science Foundation under Grants No. IIS-INT2-0808767 and 0808653.
[1] Bernstein, N. A. (1967). The co-ordination and regulation of movements. Oxford: Pergamon Press.
[2] Berthier, N. E., & Keen, R. (2006). Development of reaching in infancy. Experimental Brain Research, 169(4), 507-518.
[3] Lee, H. M., Bhat, A., Scholz, J. P., & Galloway, J. C. (2008). Toy-oriented changes during early arm movements. Infant Behavior and Development, 31(3), 447-469.
[4] Sims, K. (1994). Evolving virtual creatures. Proceedings of the 21st Annual Conference on Computer Graphics and Interactive Techniques, 15-22.
IEEE ICDL-EPIROB 2011, Frankfurt, Germany, 24 Aug - 27 Aug, 2011.
Mr. Juan Artigas, University of Miami, Electrical & Computer Engineering, Coral Gables, USA, firstname.lastname@example.org