Interact with Robot in Your Natural Language

Oct 7, 2016 @ 13:55


Researchers at the University of Rochester have developed a model for processing natural language so that a robot can be given basic verbal commands and then act on them without the need for additional programming.


Robots can be programmed to perform all sorts of repetitive tasks, but they adapt poorly to changing environments and circumstances. They rely on people to give them direction and to orient them within a precise set of parameters that will not change. What if a person could simply tell a robot what is needed, and the robot could understand that language and act on it, without extensive programming?

Researchers from the Robotics and Artificial Intelligence Laboratory at the University of Rochester are working to address this problem. Thomas Howard, assistant professor of electrical and computer engineering, and PhD student Jake Arkin developed the model, which processes natural language so that a robot can act on basic verbal commands without additional programming. The research was a joint effort with Rohan Paul and Nicholas Roy of MIT.

The model also builds a spatial representation of the environment in which the robot operates, so the robot can distinguish the placements of different objects and interact with them. If a table holds a group of identical objects, telling the robot to pick up the third one from the left is enough for it to identify the correct object and pick it up.
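The core of such an example is grounding an ordinal spatial phrase ("the third one from the left") against perceived object positions. The sketch below is illustrative only, not the authors' model; the object names and coordinates are made up.

```python
# Minimal sketch of ordinal spatial grounding: resolve a phrase like
# "the third object from the left" by ranking detections along an axis.
# Detections and field names here are hypothetical, for illustration.

def ground_ordinal_reference(objects, ordinal, axis="x"):
    """Sort detected objects along the named axis (small x = leftmost)
    and return the one matching the 1-based ordinal in the phrase."""
    ranked = sorted(objects, key=lambda obj: obj[axis])
    return ranked[ordinal - 1]

detections = [
    {"id": "block_a", "x": 0.62},
    {"id": "block_b", "x": 0.10},
    {"id": "block_c", "x": 0.35},
]

# "Pick up the third one from the left" -> block_a (x = 0.62)
target = ground_ordinal_reference(detections, ordinal=3)
print(target["id"])  # block_a
```

A full system must also ground the verb ("pick up") and handle ambiguous or relative references, which is where the paper's efficient grounding model comes in.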

Various cameras aid the accuracy of the robot's movements and its understanding of the space. Localized visual servoing, contributed by graduate student Siddharth Patki, allows for consistent execution of the demonstrated robot actions. As the model is refined, the robot will be able to adapt to increasingly complex environments and verbal commands, and to do so more rapidly.
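Visual servoing generally means closing a control loop on camera feedback: command motion proportional to the error between a tracked image feature and its goal position. The sketch below shows that basic proportional loop under simplified assumptions (2D image coordinates, unit time steps); it is a generic illustration, not the lab's implementation.

```python
# Minimal sketch of image-based visual servoing: a proportional
# controller drives a tracked feature toward a goal pixel.
# Gains, coordinates, and the single-feature setup are assumptions.

def servo_step(feature, goal, gain=0.5):
    """One control step: return a velocity proportional to the
    image-space error between the tracked feature and the goal."""
    ex = goal[0] - feature[0]
    ey = goal[1] - feature[1]
    return (gain * ex, gain * ey)

# Simulate the loop: each step shrinks the remaining error by the gain.
pos = (100.0, 40.0)   # current feature location (pixels)
goal = (160.0, 80.0)  # where the feature should end up
for _ in range(20):
    vx, vy = servo_step(pos, goal)
    pos = (pos[0] + vx, pos[1] + vy)
# After 20 steps the feature has effectively converged to the goal.
```

In a real manipulator the image-space velocity would be mapped through the camera and arm kinematics to joint commands, and the feature would be re-detected from the camera each cycle rather than simulated.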

The researchers presented their results and findings at the 12th Robotics: Science and Systems (RSS) conference, held in July 2016.


  • Reference: Rohan Paul, Jacob Arkin, Nicholas Roy, and Thomas M. Howard, “Efficient Grounding of Abstract Spatial Concepts for Natural Language Interaction with Robot Manipulators,” Robotics: Science and Systems (RSS), 2016.
  • Image: University of Rochester
