Stanford Jackrabbot, A Social Robot to Understand Human Behavior

Jun 1, 2016 @ 21:03 | Stanford | USA

Human behavior cannot be captured by any fixed mathematical rule: people behave differently in different situations, and their behavior changes over time. So to navigate among humans on sidewalks and mingle with them in crowded places, robots have to understand the unwritten rules of human behavior.

Researchers from Stanford’s Computational Vision and Geometry Lab have developed a robot prototype, Jackrabbot, that can move autonomously through crowded places and learn social behavior over time. Jackrabbot is equipped with sensors that let it understand its surroundings and navigate streets and hallways according to normal human etiquette. The researchers will present their system for predicting human trajectories in crowded spaces at the 2016 Conference on Computer Vision and Pattern Recognition (CVPR).

The idea behind the work is that, by observing how Jackrabbot navigates among students in the halls and on the sidewalks of Stanford’s School of Engineering, and how it learns the unwritten conventions of these social behaviors over time, the researchers will gain critical insight into how to design the next generation of everyday robots so that they operate smoothly alongside humans in crowded open spaces such as shopping malls and train stations.

“By learning social conventions, the robot can be part of ecosystems where humans and robots coexist,” said Silvio Savarese, an assistant professor of computer science and director of the Stanford Computational Vision and Geometry Lab.

“As robotic devices become more common in human environments, it becomes increasingly important that they understand and respect human social norms,” Savarese said. “How should they behave in crowds? How do they share public resources, like sidewalks or parking spots? When should a robot take its turn? What are the ways people signal each other to coordinate movements and negotiate other spontaneous activities, like forming a line?”

These human social conventions aren’t necessarily explicit, nor are they written down and codified with lane markings and traffic lights like the traffic rules that govern the behavior of autonomous cars.

So Savarese’s lab is using machine learning techniques to create algorithms that will, in turn, allow the robot to recognize and react appropriately to the unwritten rules of pedestrian traffic. The team’s computer scientists have been collecting images and video of people moving around the Stanford campus, transforming those images into coordinates, and using those coordinates to train an algorithm.

“Our goal in this project is to actually learn those (pedestrian) rules automatically from observations – by seeing how humans behave in these kinds of social spaces,” Savarese said. “The idea is to transfer those rules into robots.”
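To make that concrete, the training step can be pictured as a sequence model fitted to observed (x, y) tracks. The sketch below is a simplified, hypothetical stand-in for the lab’s Social LSTM (see the reference at the end), using a plain PyTorch LSTM on one pedestrian at a time, with synthetic tracks in place of the campus footage; the actual Social LSTM additionally models interactions between neighboring pedestrians.

    # Simplified sketch, not the authors' code: predict a pedestrian's next
    # positions from an observed (x, y) track, as extracted from video.
    import torch
    import torch.nn as nn

    OBS_LEN, PRED_LEN = 8, 12          # observe 8 steps, predict the next 12

    class TrajectoryLSTM(nn.Module):
        def __init__(self, hidden=64):
            super().__init__()
            self.rnn = nn.LSTM(input_size=2, hidden_size=hidden, batch_first=True)
            self.out = nn.Linear(hidden, 2)   # hidden state -> next (x, y)

        def forward(self, obs):               # obs: (batch, OBS_LEN, 2)
            _, (h, c) = self.rnn(obs)         # encode the observed track
            preds, last = [], obs[:, -1:, :]  # roll forward from the last point
            for _ in range(PRED_LEN):
                step, (h, c) = self.rnn(last, (h, c))
                last = self.out(step)         # predicted next position
                preds.append(last)
            return torch.cat(preds, dim=1)    # (batch, PRED_LEN, 2)

    # Synthetic stand-in for real tracks: noisy straight-line walks.
    t = torch.linspace(0, 1, OBS_LEN + PRED_LEN).view(1, -1, 1)
    tracks = t * torch.randn(256, 1, 2) + 0.01 * torch.randn(256, OBS_LEN + PRED_LEN, 2)
    obs, target = tracks[:, :OBS_LEN], tracks[:, OBS_LEN:]

    model = TrajectoryLSTM()
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    for epoch in range(50):                   # minimize mean squared position error
        opt.zero_grad()
        loss = nn.functional.mse_loss(model(obs), target)
        loss.backward()
        opt.step()

A model like this, trained on many real tracks, picks up regularities such as typical walking speed and path smoothness; the interaction terms in the Social LSTM are what let it also capture behaviors like collision avoidance.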

Jackrabbot already moves autonomously and can navigate indoors without human assistance, and the team members are fine-tuning the robot’s self-navigation capabilities outdoors. The next step in their research is to implement the “social aspects” of pedestrian navigation, such as deciding who has the right of way on the sidewalk. This work, described in their newest conference papers, has so far been demonstrated in computer simulations.

“We have developed a new algorithm that is able to automatically move the robot with social awareness, and we’re currently integrating that in Jackrabbot,” said Alexandre Alahi, a postdoctoral researcher in the lab.
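The article does not spell out that algorithm, but a classic baseline gives a feel for what “moving with social awareness” can mean in code. The sketch below implements the social-force model of Helbing and Molnár (1995), an illustration rather than the lab’s method: the robot is steered toward its goal while being repelled by nearby pedestrians, so it yields and swerves instead of cutting through people.

    # Illustrative baseline only (social-force model), not the lab's algorithm.
    import numpy as np

    def social_force_step(pos, vel, goal, pedestrians, dt=0.1,
                          desired_speed=1.0, tau=0.5, A=2.0, B=0.3):
        # Goal attraction: relax toward the desired velocity over time tau.
        direction = (goal - pos) / (np.linalg.norm(goal - pos) + 1e-9)
        f_goal = (desired_speed * direction - vel) / tau
        # Pedestrian repulsion: exponential push away from each person
        # (A and B are strength/range constants chosen for illustration).
        f_social = np.zeros(2)
        for p in pedestrians:
            diff = pos - p
            dist = np.linalg.norm(diff) + 1e-9
            f_social += A * np.exp(-dist / B) * (diff / dist)
        vel = vel + (f_goal + f_social) * dt
        return pos + vel * dt, vel

    # Example: head to (5, 0) with one pedestrian standing near the path.
    pos, vel = np.zeros(2), np.zeros(2)
    for _ in range(100):
        pos, vel = social_force_step(pos, vel, np.array([5.0, 0.0]),
                                     [np.array([2.5, 0.1])])
    # The robot curves around the pedestrian on its way to the goal.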

Even though social robots may someday roam among humans, Savarese said he believes they don’t necessarily need to look like humans. Instead, they should be designed to look as lovable and friendly as possible. In demos, the roughly three-foot-tall Jackrabbot roams around campus wearing a Stanford tie and sun hat, drawing hugs and curiosity from passersby.

Today, Jackrabbot is an expensive prototype, but Savarese estimates that in five or six years social robots like it could cost as little as $500, making it possible for companies to bring them to the mass market.

“It’s possible to make these robots affordable for on-campus delivery, or for aiding impaired people to navigate in a public space like a train station, or for guiding people to find their way through an airport,” Savarese said.


  • Reference: “Social LSTM: Human Trajectory Prediction in Crowded Spaces,” Conference on Computer Vision and Pattern Recognition (CVPR), 2016.
  • Source: Stanford
  • Image: Stanford
