Objectives

We are excited to announce the 5th workshop in the series, Can we build Baymax? Part V. Since the first edition in 2015, the workshop has taken place every year at the IEEE-RAS International Conference on Humanoid Robots (Humanoids). Baymax is a humanoid character in the Disney animated feature Big Hero 6: a healthcare robot with an inflatable body, capable of walking, bumping into surrounding objects, learning motions, and physically interacting with people. Building such a robot in the real world, however, is far from easy. Previous workshops have covered mechanisms and structure, sensors, protection, control, soft robot fabrication, fail-safe systems, and learning for humanoid robots. Continuing that discussion, this workshop will bring together researchers looking for paths toward seamless communication between humans and humanoid robots like Baymax. In particular, we will tackle challenges in rendering and interpreting diverse types of interaction from both hardware and software points of view, including but not limited to gestures, touching and hugging, facial expressions, speech, body language, and eye contact.

We invite you to contribute and to participate in this workshop.



Workshop Schedule

08:50 - 09:00 Welcome and Introduction
09:00 - 09:30 Katsu Yamane, Honda Research Institute, USA
Empathetic Physical Interaction

Physical interactions using the body play an important role in human-human interactions by allowing the exchange of subtle information that is difficult to describe in words, e.g. comfort, preference, intention, and emotion. Robots working closely with humans have to understand, utilize, and express such information in order to interact with humans effectively. The goal of empathetic physical interaction is to realize human-centered physical support in which the robot uses its body as a medium for exchanging subtle information with the human partner and adapts its behavior based on that information to improve the human's perception and, ultimately, the overall performance of physical support. This talk will introduce two related ongoing projects at Honda Research Institute USA: perception of pedestrian avoidance behavior of a mobile robot, and modeling of intimate social interactions of a humanoid robot.

09:30 - 10:00 David Hanson, Hanson Robotics, Hong Kong
Art meets Artificial Intelligence, or Why Humanizing Robots Can Be Useful and Cool

Most robots and AI today are designed to be non-humanlike, and certainly for many uses they don't need to be humanoid. However, for key applications such as the arts, intuitive social interfaces for AI agents, certain kinds of therapy, and for scientific investigation of human behavior, humanlike robots can be very useful. This presentation covers the tech, arts, and history of Hanson robots, including transdisciplinary collaboration among artists, robotics & AI engineers, manufacturing, business development, and social sciences. By diversifying the creative landscape of robotics, bringing domains of engineering together with narrative and figurative arts, philosophy, ethics, biosciences, and AI, we hope to achieve a better toolkit to discover what's best in humanity, and to empower intelligent machines to work with people better. Investigating new forms of robots as works of art may also challenge our preconceptions and allow for surprise and wonder. Furthermore, making AI embodied and humanlike may facilitate a path of co-evolution, leading towards machines who may someday grow to become true friends to humanity, as trusted allies rather than mere slaves. Maybe, humanizing our machines will realize a more stable world with growing benefits for all sentient beings.

10:00 - 10:30 Gordon Cheng, Technical University of Munich, Germany
TBD
10:30 - 11:00 Coffee Break (Lobby)
11:00 - 11:30 Serena Ivaldi, INRIA Nancy Grand-Est, France
Social and physical interaction with a humanoid: Lessons learned with the iCub

In this talk I will give an overview of our recent human-humanoid interaction experiments with the iCub humanoid and discuss our findings. We investigated social and physical interaction in a collaborative assembly scenario, examining engagement and its relation to individual factors. We then studied how to use these signals to predict intention in collaborative scenarios, which led us to a multimodal intention prediction framework based on probabilistic movement primitives. Finally, we studied trust in human-humanoid interaction with a shared decision protocol, which revealed people's negative trusting attitude towards the robot. I will conclude with our current perspectives on whole-body collaboration.
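To make the idea concrete, below is a minimal, illustrative sketch of intention recognition with probabilistic movement primitives (ProMPs): one ProMP is fitted per candidate task, and a partially observed motion is classified by its marginal likelihood under each model. The class, function, and parameter names are our own illustration rather than the speaker's implementation, and a single one-dimensional trajectory stands in for the multimodal signals discussed in the talk.

    # Illustrative sketch (assumed names and parameters, not the speaker's code).
    import numpy as np

    def rbf_features(t, n_basis=10, width=0.02):
        """Normalized radial basis features over the movement phase t in [0, 1]."""
        centers = np.linspace(0.0, 1.0, n_basis)
        phi = np.exp(-(t[:, None] - centers[None, :]) ** 2 / (2.0 * width))
        return phi / phi.sum(axis=1, keepdims=True)

    class ProMP:
        def __init__(self, n_basis=10, obs_noise=1e-2):
            self.n_basis, self.obs_noise = n_basis, obs_noise

        def fit(self, demos):
            """demos: list of 1-D trajectories, all resampled to the same length."""
            T = len(demos[0])
            Phi = rbf_features(np.linspace(0.0, 1.0, T), self.n_basis)
            W = np.array([np.linalg.lstsq(Phi, np.asarray(d), rcond=None)[0] for d in demos])
            self.mu_w = W.mean(axis=0)                                  # mean weight vector
            self.Sigma_w = np.cov(W.T) + 1e-6 * np.eye(self.n_basis)   # weight covariance
            return self

        def loglik(self, t_obs, y_obs):
            """Marginal log-likelihood of a partial observation (phases, positions)."""
            Phi = rbf_features(np.asarray(t_obs), self.n_basis)
            mean = Phi @ self.mu_w
            cov = Phi @ self.Sigma_w @ Phi.T + self.obs_noise * np.eye(len(y_obs))
            diff = np.asarray(y_obs) - mean
            _, logdet = np.linalg.slogdet(cov)
            return -0.5 * (diff @ np.linalg.solve(cov, diff) + logdet)

    def predict_intention(promps, t_obs, y_obs):
        """Return the task whose ProMP best explains the first observed samples."""
        return max(promps, key=lambda task: promps[task].loglik(t_obs, y_obs))

The same Gaussian conditioning machinery can also predict how an observed movement is likely to continue, which is what lets a robot start reacting before its partner has finished the motion.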

11:30 - 12:00 Chung Hyuk Park, George Washington University, USA
Empathetic Agent and Interactions with Children with ASD

Socially assistive robots (SARs) and their application to interventions for children with autism spectrum disorder (ASD) have been actively researched and are widely used in special education and clinical settings. Going beyond their use as learning aids and research platforms, this talk will address empathy and emotion regulation (ER) for SARs, important mechanisms to implement in interventions since empathy and ER impairments underlie many of the atypicalities manifested in ASD. We will discuss the design of our empathetic robotic agent, including the design and the algorithmic model that provide dynamic ER capabilities. In addition, we will describe a user study evaluating the ER capabilities of an emotionally expressive empathetic agent as well as its capability to prime higher social engagement in a user.

12:00 - 12:30 Hae Won Park, MIT Media Lab, USA
Socio-Emotive Intelligence for Long-term Human-Robot Interaction

In this talk, I'd like to engage our community in questioning whether robots need socio-emotive intelligence. To answer this question, we first need to think about a new dimension for evaluating the AI algorithms and systems we build: measuring their impact on people's lives in real-world contexts. I will highlight a number of provocative research findings from our recent long-term deployments of social robots in schools, homes, and older adult living communities. We employ an affective reinforcement learning approach to personalize the robot's actions so as to modulate each user's engagement and maximize the benefit of the interaction. The robot observes users' verbal and nonverbal affective cues to understand the user's state and to receive feedback on its actions. Our results show that interaction with a robot companion influences users' beliefs, learning, and how they interact with others. Affective personalization boosts these effects and helps sustain long-term engagement. During our deployment studies, we observed that people treat and interact with artificial agents as social partners and catalysts. We also learned that the effect of the interaction strongly correlates with the social relational bond the user has built with the robot. So, to answer the question "does a robot need socio-emotive intelligence," I argue that we should draw conclusions only from the impact it has on the people living with it: is it helping us flourish in the direction we want to thrive?
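As a rough illustration of such a personalization loop (not the deployed system; the action set, state discretization, and reward below are assumptions made for this example), a simple tabular Q-learning agent can map discretized affective cues to expressive or pedagogical actions and learn from an engagement-based reward:

    # Illustrative sketch only; hypothetical action names and cue encoding.
    import random
    from collections import defaultdict

    ACTIONS = ["encourage", "give_hint", "celebrate", "take_turn"]  # hypothetical robot behaviors

    class AffectivePersonalizer:
        def __init__(self, alpha=0.1, gamma=0.9, epsilon=0.2):
            self.q = defaultdict(float)   # Q-value table indexed by (state, action)
            self.alpha, self.gamma, self.epsilon = alpha, gamma, epsilon

        def state_from_cues(self, valence, attention):
            """Discretize continuous affective cues (e.g. from face and voice analysis)."""
            return (int(valence > 0.0), int(attention > 0.5))

        def select_action(self, state):
            """Epsilon-greedy choice between exploring and the best known action."""
            if random.random() < self.epsilon:
                return random.choice(ACTIONS)
            return max(ACTIONS, key=lambda a: self.q[(state, a)])

        def update(self, state, action, reward, next_state):
            """One-step Q-learning update; reward could be the change in measured engagement."""
            best_next = max(self.q[(next_state, a)] for a in ACTIONS)
            target = reward + self.gamma * best_next
            self.q[(state, action)] += self.alpha * (target - self.q[(state, action)])

Over many interaction turns, actions that re-engage a particular user accumulate higher values for that user's typical states, which is one simple way personalization can emerge from an affective feedback loop.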

12:30 - 14:00 Lunch
14:00 - 14:30 Ludovic Righetti, New York University, USA & Max-Planck Institute, Germany
What can go wrong when robots physically interact with humans? An ethical and technical perspective

The ability to interact physically and safely with humans is one of the most exciting features of Baymax. This presentation will tackle two drastically different, yet inseparable, issues raised by robots that interact with humans: the technical problem of safe physical interaction and the ethical issues inherent in robots making decisions that affect us. The first part of the presentation will cover our research on the control of contact interactions, using both optimal control and reinforcement learning, and highlight some of the current challenges in creating safe physical interaction. In the second part of the talk, we will discuss issues that can arise when robots interact with humans and make decisions that affect how humans receive a service or healthcare support. We will give an overview of concerns associated with bias in machine learning that can lead to discriminatory behavior and explain how this affects robotics. We will then suggest possible research directions for tackling these issues so as to create safe robot companions that can benefit everyone.
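As a toy illustration of one classic building block of safe physical interaction (a compliance law, named plainly here as a substitute for the optimal-control and reinforcement-learning methods discussed in the talk), a joint-space impedance controller renders the arm soft around a desired posture and saturates torques; all gains and limits below are placeholders:

    # Standard textbook impedance law; placeholder gains, not values from the talk.
    import numpy as np

    def impedance_torque(q, dq, q_des, K, D, gravity_comp, tau_max):
        """tau = g(q) + K (q_des - q) - D dq, clipped so contact forces stay bounded."""
        tau = gravity_comp(q) + K @ (q_des - q) - D @ dq
        return np.clip(tau, -tau_max, tau_max)

    # Example with made-up values for a 2-joint arm:
    # K = np.diag([30.0, 20.0]); D = np.diag([3.0, 2.0]); tau_max = 10.0
    # tau = impedance_torque(q, dq, q_des, K, D, lambda q: np.zeros(2), tau_max)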

14:30 - 15:00 Alex Alspach, Toyota Research Institute, USA
Tactile sensing bubbles for interaction
15:00 - 15:30 Joohyung Kim, Disney Research, USA
Robots inspired by animation characters

Animated characters often have interesting and unique motions, configurations and abilities. Animators mostly base these creatures on nature, including humans, and augment them with their imagination. As technology advances, some of these features become implementable in real robots. In this talk, I will present our efforts at Disney Research to make robots that capture these interesting features from animation characters.

15:30 - 16:00 Coffee Break (Lobby)
16:00 - 16:30 Communicate with Speakers
16:30 - 17:30 Open discussion and Closing
18:00 - Welcome Reception and Late Breaking Reports

Speakers


Gordon Cheng

Gordon Cheng holds the Chair of Cognitive Systems, with regular teaching activities and lectures, and is Founder and Director of the Institute for Cognitive Systems in the Faculty of Electrical and Computer Engineering at the Technical University of Munich, Germany. He is also the coordinator of the Center of Competence Neuro-Engineering (CoC Neuro-Engineering) in the Department of Electrical and Computer Engineering.

Formerly, he was Head of the Department of Humanoid Robotics and Computational Neuroscience at the ATR Computational Neuroscience Laboratories, Kyoto, Japan, and Group Leader for the newly initiated JST International Cooperative Research Project (ICORP) Computational Brain. He has also served as a Project Leader/Research Expert for the National Institute of Information and Communications Technology (NICT) of Japan, and is involved, as an adviser and associated partner, in a number of major European Union projects.

Over the past ten years, Gordon Cheng has been co-inventor of approximately 20 patents and author of approximately 250 technical publications, proceedings, editorials, and book chapters.

David Hanson

David Hanson develops robots that are widely regarded as the world's most human-like in appearance, in a lifelong quest to create true living, caring machines. To accomplish these goals, Hanson integrates figurative arts with cognitive science and robotics engineering, inventing novel skin materials and facial expression mechanisms and collaborating on AI developments, within humanoid artworks like Sophia the robot, which can engage people in naturalistic face-to-face conversations and currently serve in AI research, education, therapy, and other uses.

Hanson worked as a Walt Disney Imagineer, as both a sculptor and a technical consultant in robotics, and later founded Hanson Robotics. As a researcher, Hanson has published dozens of papers in materials science, artificial intelligence, cognitive science, and robotics venues, including SPIE, IEEE, the International Journal of Cognitive Science, IROS, AAAI, AI Magazine, and more. He has written two books, including "Humanizing Robots", and received several patents. Hanson has been featured in the New York Times, Popular Science, Scientific American, WIRED, BBC, and CNN. He has received awards from NASA, NSF, Tech Titans' Innovator of the Year, RISD, and the Cooper Hewitt Design Triennial, and co-received the 2005 AAAI first-place prize for open interaction of an AI system. Hanson holds a Ph.D. in Interactive Arts and Technology from the University of Texas at Dallas and a BFA in Film/Animation/Video from the Rhode Island School of Design.

Serena Ivaldi

Serena Ivaldi is a tenured research scientist at Inria, leading the humanoid and human-robot interaction activities of Team Larsen at Inria Nancy, France. She earned her Ph.D. in Humanoid Technologies in 2011 at the Italian Institute of Technology. Prior to joining Inria, she was a postdoctoral researcher at UPMC in Paris, France, then at TU Darmstadt, Germany. She was PI of the EU project CoDyCo (FP7) and is currently PI of the EU projects AnDy (H2020) and Heap (CHIST-ERA); she is also involved in the French ANR project Flying CoWorker. Her research focuses on humanoid robotics and human-robot collaboration, using machine learning to improve the control, prediction, and interaction skills of robots. She strongly believes in user evaluation, i.e., having potential end-users evaluate robotic technologies to improve their usability, trust, and acceptance.

Chung Hyuk Park

Dr. Chung Hyuk Park is an assistant professor in the Department of Biomedical Engineering (BME) in the School of Engineering and Applied Science (SEAS) at The George Washington University (GW). Dr. Park directs the Assistive Robotics and Tele-Medicine (ART-Med) Lab at GW, where he studies collaborative innovation between human intelligence and robotic technology, integrating human-robot interaction, machine learning, computer vision, haptics, and telepresence robotics. His current and future research topics focus on three main themes: multi-modal human-robot interaction and robotic assistance for individuals with disabilities or special needs, robotic learning and humanized intelligence, and tele-medical robotic assistance. He was the lead PI of an NRI-NIH project (#R01 HD082914) and a recipient of an NSF Early Career award on socially assistive robotics for individuals with autism spectrum disorder (ASD). He received his Ph.D. in Electrical and Computer Engineering from the Georgia Institute of Technology in 2012, and his M.S. in Electrical Engineering and Computer Science and B.S. in Electrical Engineering from Seoul National University in 2002 and 2000, respectively.

Hae Won Park

Hae Won Park is a Research Scientist at the MIT Media Lab and a Principal Investigator of the Social Robot Companions Program. Her research focuses on socio-emotive AI and the personalization of social robots that support long-term interaction and relationships between users and their robot companions. Her work spans a range of applications including education for young children and wellbeing benefits for older adults. Her research has been published at top robotics and AI venues and has received awards for best paper (HRI 2017), innovative robot applications (ICRA 2013), and pecha-kucha presentation (ICRA 2014). Hae Won received her PhD from Georgia Tech in 2014, at which time she also co-founded Zyrobotics, an assistive education robotics startup that was recognized as the best 2015 US robotics startup by Robohub and was a finalist for the Intel Innovation Award.

Ludovic Righetti

Ludovic Righetti is an Associate Professor in the Electrical and Computer Engineering Department and in the Mechanical and Aerospace Engineering Department at the Tandon School of Engineering of New York University and a Senior Researcher at the Max-Planck Institute for Intelligent Systems (MPI-IS) in Tübingen, Germany.

He leads the Machines in Motion Laboratory, where his research focuses on the planning and control of movements for autonomous robots, with a special emphasis on legged locomotion and manipulation. He is more broadly interested in questions at the intersection of decision making, automatic control, optimization, applied dynamical systems and machine learning and their application to physical systems.

He studied at the Ecole Polytechnique Fédérale de Lausanne (Switzerland) where he received an engineering diploma in Computer Science (eq. M.Sc.) in 2004 and a Doctorate in Science in 2008 under the supervision of Professor Auke Ijspeert. Between March 2009 and August 2012, he was a postdoctoral fellow at the Computational Learning and Motor Control Lab with Professor Stefan Schaal (University of Southern California). In September 2012 he started the Movement Generation and Control Group at the Max-Planck Institute for Intelligent Systems in Tübingen, Germany where he became a W2 Independent Research Group Leader in September 2015. He moved to New York University in September 2017.

He has received several awards, most notably the 2010 Georges Giralt PhD Award given by the European Robotics Research Network (EURON) for the best robotics thesis in Europe, the 2011 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS) Best Paper Award, the 2016 IEEE Robotics and Automation Society Early Career Award and the 2016 Heinz Maier-Leibnitz Prize from the German Research Foundation.

Organizers


Christopher G. Atkeson

I am a Professor in the Robotics Institute and Human-Computer Interaction Institute at Carnegie Mellon University. My life goal is to fulfill the science fiction vision of machines that achieve human levels of competence in perceiving, thinking, and acting. A narrower technical goal is to understand how to get machines to generate and perceive human behavior. I use two complementary approaches: exploring humanoid robotics and human-aware environments. Building humanoid robots tests our understanding of how to generate human-like behavior and exposes the gaps and failures in current approaches.

build-baymax.org

Joohyung Kim

Joohyung Kim is currently a Research Scientist at Disney Research, Los Angeles. His research interests include implementing robots based on animation characters, soft human-robot interaction, balancing and walking control for humanoid robots, and novel mechanisms for legged locomotion. He received his BSE and Ph.D. degrees in Electrical Engineering and Computer Science from Seoul National University, Korea, in 2001 and 2012, respectively. Prior to joining Disney Research, he was a postdoctoral fellow at the Robotics Institute of Carnegie Mellon University for the DARPA Robotics Challenge in 2013. From 2009 to 2012 he was a senior engineer at Samsung Electronics, Korea, developing biped walking controllers for humanoid robots.

Jinoh Lee

Jinoh Lee is a Research Scientist in the Department of Advanced Robotics at the Istituto Italiano di Tecnologia (IIT). He received his B.Sc. degree in Mechanical Engineering from Hanyang University, Seoul, South Korea, in 2003 (Summa Cum Laude, top 2%) and his M.Sc. and Ph.D. degrees in Mechanical Engineering from the Korea Advanced Institute of Science and Technology (KAIST), Daejeon, South Korea, in 2012. He joined IIT in 2012 as a postdoctoral researcher and was awarded a competitive grant from the National Research Foundation (NRF) of the Korean Government under the 'Fostering Next Generation Researchers Program' (2013-2014). He has been involved in projects such as WALK-MAN (Whole-body Adaptive Locomotion and Manipulation) and participated in the DARPA Robotics Challenge (DRC) Finals, where he contributed to developing various manipulation skills on the humanoid. His research focuses on the control of compliant multi-DoF robotic systems and the dexterous, reactive manipulation of humanoids and bimanual robots with multiple contacts.

Katsu Yamane

Dr. Katsu Yamane is a Senior Scientist at Honda Research Institute USA. He received his B.S., M.S., and Ph.D. degrees in Mechanical Engineering in 1997, 1999, and 2002, respectively, from the University of Tokyo, Japan. Prior to joining Honda in 2018, he was a Senior Research Scientist at Disney Research, an Associate Professor at the University of Tokyo, and a postdoctoral fellow at Carnegie Mellon University. Dr. Yamane is a recipient of the King-Sun Fu Best Transactions Paper Award and the Early Academic Career Award from the IEEE Robotics and Automation Society, as well as the Young Scientist Award from the Ministry of Education, Japan. His research interests include humanoid robot control and motion synthesis, physical human-robot interaction, character animation, and human motion simulation.

Alex Alspach

Alex designs and builds soft systems for sensing and manipulation at Toyota Research Institute (TRI). He earned his master's degree at Drexel University with time spent in the Drexel Autonomous Systems Lab (DASL) and KAIST's HuboLab. After graduating, Alex spent two years at SimLab in Korea developing and marketing tools for manipulation research. While there, he also worked with a production company to develop artists' tools for animating complex, synchronized industrial robot motions. Prior to joining TRI, Alex developed soft huggable robots and various other systems at Disney Research with Joohyung and Katsu!
