
How to Make Robots with Lifelike Behavior

In a truly awesome talk at TEDx Jaffa entitled “Robots with Soul” (see video below), Guy Hoffman from Cornell University explains how robots of very different shapes can act in a way that looks lifelike, almost human.

Shape doesn’t matter, Hoffman says. The secret lies in making the robot move in a way that exhibits emotion. It doesn’t matter how something looks; it’s all in the motion, and in the timing of how the thing moves.
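
To make this idea concrete, here is a minimal Python sketch of how an emotional state could be mapped onto motion and timing parameters. This is not Hoffman’s method: the `Emotion` fields, the parameter names, and the mapping are illustrative assumptions only.

```python
from dataclasses import dataclass

@dataclass
class Emotion:
    valence: float  # -1.0 (negative) .. 1.0 (positive)
    arousal: float  #  0.0 (calm)     .. 1.0 (excited)

def motion_parameters(emotion: Emotion) -> dict:
    """Map an emotional state to motion timing and shape parameters.

    Illustrative assumption: high arousal means faster, larger movements;
    negative valence means a drooping posture and longer pauses.
    """
    return {
        "speed": 0.2 + 0.8 * emotion.arousal,        # fraction of maximum joint speed
        "amplitude": 0.3 + 0.7 * emotion.arousal,    # fraction of maximum range of motion
        "pause_s": 0.8 * (1.0 - emotion.arousal),    # dwell time between gestures, in seconds
        "droop": 0.5 * max(0.0, -emotion.valence),   # downward tilt of the head or body
    }

# A tired, slightly sad robot: slow, small gestures, long pauses, a drooped head.
print(motion_parameters(Emotion(valence=-0.4, arousal=0.2)))
```

The point is that the same gesture, played faster or slower, larger or smaller, with or without a pause before it, reads as a different emotion, even on a faceless machine.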

Moving in a graceful way is just one building block for making robots with lifelike behavior. Hoffman’s approach is also inspired by a trend in cognitive psychology called embodied cognition, which can be summarized as follows: a robot’s body posture feeds back into its “brain” and shapes the way it behaves. Applying these concepts results in playful, reactive, and curious robots that people enjoy interacting with.
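
To illustrate that embodied-cognition loop in the same hypothetical setting, the short sketch below lets the robot’s posture feed back into an internal “mood” variable, which in turn biases the next gesture. The update rule and constants are assumptions made up for this example, not taken from the publications listed below.

```python
def update_mood(mood: float, droop: float, rate: float = 0.1) -> float:
    """Body posture feeds back into internal state: a drooped posture nudges
    the mood downward, an upright one nudges it upward. (Hypothetical rule.)"""
    posture_signal = 1.0 - 2.0 * droop           # upright -> +1, fully drooped -> -1
    mood = (1.0 - rate) * mood + rate * posture_signal
    return max(-1.0, min(1.0, mood))

def choose_droop(mood: float) -> float:
    """Pick the next gesture's droop from the current mood (lower mood, more droop)."""
    return max(0.0, -mood)

mood = -0.5                                      # start slightly "sad"
for step in range(5):                            # posture and mood influence each other
    droop = choose_droop(mood)
    mood = update_mood(mood, droop)
    print(f"step {step}: droop={droop:.2f}, mood={mood:.2f}")
```

The loop is the whole idea in miniature: what the body does at one step changes what the “brain” decides at the next.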

The concepts behind this talk are described in more detail in Hoffman’s publications, and more specifically in the following articles:

  • Embodied Cognition for Autonomous Interactive Robots [PDF]

    Hoffman, G. (2012)

    Topics in Cognitive Science, 4(4), 759–772

    Abstract

    In the past, notions of embodiment have been applied to robotics mainly in the realm of very simple robots, and supporting low-level mechanisms such as dynamics and navigation. In contrast, most human-like, interactive, and socially adept robotic systems turn away from embodiment and use amodal, symbolic, and modular approaches to cognition and interaction. At the same time, recent research in Embodied Cognition (EC) is spanning an increasing number of complex cognitive processes, including language, nonverbal communication, learning, and social behavior.
    This article suggests adopting a modern EC approach for autonomous robots interacting with humans. In particular, we present three core principles from EC that may be applicable to such robots: (a) modal perceptual representation, (b) action/perception and action/cognition integration, and (c) a simulation-based model of top-down perceptual biasing. We describe a computational framework based on these principles, and its implementation on two physical robots. This could provide a new paradigm for embodied human–robot interaction based on recent psychological and neurological findings.

  • Designing Robots with Movement in Mind [PDF]

    Hoffman, G., & Ju, W. (2014)

    Journal of Human-Robot Interaction, 3(1), 89–122

    Abstract

    This paper makes the case for designing interactive robots with their expressive movement in mind. As people are highly sensitive to physical movement and spatiotemporal affordances, well-designed robot motion can communicate, engage, and offer dynamic possibilities beyond the machines’ surface appearance or pragmatic motion paths. We present techniques for movement-centric design, including character animation sketches, video prototyping, interactive movement explorations, Wizard of Oz studies, and skeletal prototypes. To illustrate our design approach, we discuss four case studies: a social head for a robotic musician, a robotic speaker dock listening companion, a desktop telepresence robot, and a service robot performing assistive and communicative tasks. We then relate our approach to the design of non-anthropomorphic robots and robotic objects, a design strategy that could facilitate the feasibility of real-world human-robot interaction.

  • Emotionally Expressive Dynamic Physical Behaviors in Robots [PDF]

    Bretan, M., Hoffman, G., & Weinberg, G. (2015)

    International Journal of Human-Computer Studies, 78

    Abstract

    For social robots to respond to humans in an appropriate manner they need to use apt affect displays, revealing underlying emotional intelligence. We present an artificial emotional intelligence system for robots, with both a generative and a perceptual aspect. On the generative side, we explore the expressive capabilities of an abstract, faceless, creature-like robot, with very few degrees of freedom, lacking both facial expressions and the complex humanoid design found often in emotionally expressive robots. We validate our system in a series of experiments: in one study, we find an advantage in classification for animated vs static affect expressions and advantages in valence and arousal estimation and personal preference ratings for both animated vs static and physical vs on-screen expressions. In a second experiment, we show that our parametrically generated expression variables correlate with the intended user affect perception. On the perceptual side, we present a new corpus of sentiment-tagged social media posts for training the robot to perceive affect in natural language. In a third experiment we estimate how well the corpus generalizes to an independent data set through a cross validation using a perceptron and demonstrate that the predictive model is comparable to other sentiment-tagged corpora and classifiers. Combining the perceptual and generative systems, we show in a fourth experiment that our automatically generated affect responses cause participants to show signs of increased engagement and enjoyment compared with arbitrarily chosen comparable motion parameters.
