Weekly Video: DFKI Robotics, AI Car, Agile Drone and Surgical Robotics

Oct 2, 2016 @ 18:30

Research at the DFKI Robotics Innovation Center

The German Research Center for Artificial Intelligence (DFKI), with locations in Kaiserslautern, Saarbrücken, Bremen (with a branch office in Osnabrück) and a project office in Berlin, is the leading research center in Germany in the field of innovative commercial software technology using Artificial Intelligence.

The DFKI research department Robotics Innovation Center (RIC), headed by Prof. Dr. Frank Kirchner, develops mobile robot systems that solve complex tasks under water, in space and in our everyday life. The goal is to design robots that operate autonomously and interact safely with humans, their environment and other systems. The RIC closely cooperates with the Robotics Group at the University of Bremen.

NVIDIA AI Car Demonstration

In contrast to the usual approach to operating self-driving cars, NVIDIA did not program any explicit object detection, mapping, path planning or control components into this car. Instead, the car learns on its own to create all the internal representations necessary to steer, simply by observing human drivers.

The car successfully navigates the construction site while freeing us from creating specialized detectors for cones or other objects present at the site. Similarly, the car can drive on the road that is overgrown with grass and bushes without the need to create a vegetation detection system. All it takes is about twenty example runs driven by humans at different times of the day. Learning to drive in these complex environments demonstrates new capabilities of deep neural networks.
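The core idea of learning to steer purely from human demonstrations can be sketched in miniature. The linear model, the synthetic data, and all names below are illustrative assumptions; NVIDIA's actual system trains a convolutional neural network on real camera frames.

```python
import numpy as np

# Toy behaviour-cloning sketch: learn a steering command directly from
# pixel observations, with no hand-coded lane or object detectors.
rng = np.random.default_rng(0)

# Simulated "human demonstration" data: each observation is a flattened
# low-resolution camera frame; the label is the human's steering angle.
n_frames, n_pixels = 200, 64
true_weights = rng.normal(size=n_pixels)        # hidden human policy
observations = rng.normal(size=(n_frames, n_pixels))
steering = observations @ true_weights

# Fit the policy by ordinary least squares (a stand-in for gradient
# descent on a deep network).
weights, *_ = np.linalg.lstsq(observations, steering, rcond=None)

# The learned policy maps a new frame straight to a steering command.
new_frame = rng.normal(size=n_pixels)
predicted = new_frame @ weights
print(abs(predicted - new_frame @ true_weights) < 1e-6)
```

The point of the sketch is the pipeline shape, not the model class: observations in, steering out, with no intermediate detection or mapping modules.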

The car also learns to generalize its driving behavior. This video includes a clip that shows a car that was trained only on California roads successfully driving itself in New Jersey.

Agile Drone Flight through Narrow Gaps with Onboard Sensing and Computing | ailabRPG

The Robotics and Perception Group at the University of Zurich presents a method to let a quadrotor autonomously pass through narrow gaps using only onboard sensing and computing. They estimate the full state by fusing gap detections from a single onboard camera with an IMU, and generate a trajectory that considers geometric, dynamic, and perception constraints. During the approach maneuver, the quadrotor actively controls its orientation so that it always faces the gap, allowing robust state estimation. During the traverse through the gap, the quadrotor maximizes the distance from the edges of the gap to minimize the risk of collision.
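The fusion of an exteroceptive measurement (camera-based gap detection) with high-rate IMU integration can be illustrated with a one-dimensional complementary filter. This is a deliberately simplified stand-in for the paper's full state estimator; all gains, rates, and noise levels are assumptions.

```python
import numpy as np

dt = 0.01            # IMU sample period (s), assumed 100 Hz
alpha = 0.1          # weight given to each camera correction (assumed)

true_pos, true_vel = 0.0, 1.0    # quadrotor approaching the gap at 1 m/s
est_pos, est_vel = 0.0, 1.0

rng = np.random.default_rng(1)
for step in range(500):
    true_pos += true_vel * dt

    # Predict: dead-reckon with the noisy IMU (drifts over time).
    accel = rng.normal(0.0, 0.05)
    est_vel += accel * dt
    est_pos += est_vel * dt

    # Correct: a camera gap detection arrives at 10 Hz and pulls the
    # estimate back toward the (nearly) drift-free measurement.
    if step % 10 == 0:
        cam_pos = true_pos + rng.normal(0.0, 0.01)
        est_pos += alpha * (cam_pos - est_pos)

print(abs(est_pos - true_pos) < 0.5)
```

The IMU alone would slowly drift; the occasional camera fix bounds the error, which is why the quadrotor keeps the gap in view during the approach.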


RI Seminar: Umamaheswar Duvvuri, MD: Surgical Robotics: past, present and future

The advent of robotic systems in medicine has revolutionized the practice of surgery. Most recently, several novel robotic surgical systems have been developed and are entering the operating theater. This lecture describes the current state of the art in robotic surgery and some of the newer systems currently in use. Finally, the future of robotic surgery is described in the context of clinical development and ease of use in the operating theaters of the future.

Moon Mission: Hakuto robot rover undergoes testing in Japan ahead of lunar travel | RT

Hakuto’s privately developed rover is scheduled to be launched from Earth by SpaceX’s Falcon 9 rocket and then delivered into lunar orbit. Once landed on the moon’s surface, the rover is set to travel more than 500 metres (1,640 feet) on autopilot while bypassing craters and rocks. The rover is also to capture so-called mooncasts, high-resolution, 360-degree images of the lunar surface, which the researchers say the rover will then send back to Earth. The device will compete with other rovers in the Google Lunar XPRIZE competition.

Dynamic Multi-Target Coverage with Robotic Cameras | ACT Lab

This video is the supplemental material for our paper “Dynamic Multi-Target Coverage with Robotic Cameras”, which will appear at the International Conference on Intelligent Robots and Systems (IROS) 2016.

When tracking multiple targets with autonomous cameras for 3D scene reconstruction, e.g., in sports, a significant challenge is handling the unpredictable nature of the targets’ motion. Such a monitoring system must reposition according to the targets’ movements and maintain satisfactory coverage of the targets. We propose an approximate, centralized approach for maximizing the visible boundary of dynamic targets using mobile cameras in a bounded 2D environment. Targets and obstacles translate, rotate, and deform independently, and cameras are only aware of the current position and shape of the targets and obstacles. Using current information, the environment is searched for better viewing positions, then cameras navigate to those positions while avoiding collisions with targets and obstacles. We present a benchmark and metrics to evaluate the performance of our method, and compare our approach to a simple gradient-based local method in several real-time simulations.
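The objective of maximizing the visible boundary of dynamic targets can be sketched with circular targets and a greedy grid search over camera positions. This toy version (circle targets, an outward-normal visibility test, a coarse grid) is an assumption for illustration, not the paper's actual algorithm, which also handles obstacles, deformation, and collision-free navigation.

```python
import numpy as np

# Two circular targets in a bounded 2D environment (assumed geometry).
targets = [np.array([2.0, 2.0]), np.array([6.0, 3.0])]
radius, sense_range = 0.5, 4.0

def visible_fraction(cam, centre):
    """Fraction of a circle's boundary samples visible from cam: the
    sample must face the camera and lie within sensing range."""
    angles = np.linspace(0, 2 * np.pi, 72, endpoint=False)
    pts = centre + radius * np.c_[np.cos(angles), np.sin(angles)]
    normals = (pts - centre) / radius
    to_cam = cam - pts
    dist = np.linalg.norm(to_cam, axis=1)
    facing = np.einsum("ij,ij->i", normals, to_cam) > 0
    return np.mean(facing & (dist < sense_range))

def coverage(cams):
    # Score each target by its best camera (approximate joint objective).
    return sum(max(visible_fraction(c, t) for c in cams) for t in targets)

# Greedily place two cameras on a coarse grid of candidate positions.
grid = [np.array([x, y], dtype=float) for x in range(9) for y in range(7)]
cams = []
for _ in range(2):
    best = max(grid, key=lambda g: coverage(cams + [g]))
    cams.append(best)

print(coverage(cams) > 0.9)   # each target mostly half-visible
```

Repeating this search every time step, using only the targets' current positions and shapes, mirrors the paper's approximate, centralized reposition-then-navigate loop.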

A Better Way to Communicate with Robots

Robots can be programmed to perform all sorts of repetitive tasks, but they don’t adapt well to changing environments and circumstances. They rely on people to give them direction and orient them to a precise set of parameters that will not change. What if a person could simply tell the robot what is needed and that language could be understood and then acted upon, without the need for extensive programming?

That’s the very problem that researchers are working on in the Robotics and Artificial Intelligence Laboratory at the University of Rochester. Thomas Howard, an assistant professor of electrical and computer engineering, and PhD student Jake Arkin have developed a model for processing natural language so that a robot can be given basic verbal commands and then act on them without the need for additional programming. This research was a joint effort with Rohan Paul and Nicholas Roy of MIT.
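The kind of output such a system must produce, a structured action grounded from free-form speech, can be illustrated with a toy keyword parser. The vocabulary and the action format below are invented for illustration; the Rochester/MIT model learns this grounding probabilistically rather than using fixed rules.

```python
# Toy grounding of a verbal command into a structured robot action.
# ACTIONS, COLORS, and OBJECTS are an assumed miniature vocabulary.
ACTIONS = {"pick up": "grasp", "put down": "release", "move to": "goto"}
COLORS = {"red", "green", "blue"}
OBJECTS = {"block", "ball", "cup"}

def ground(command):
    """Map a simple English command to an (action, color, object) tuple."""
    text = command.lower()
    words = set(text.split())
    action = next((a for phrase, a in ACTIONS.items() if phrase in text), None)
    color = next((c for c in COLORS if c in words), None)
    obj = next((o for o in OBJECTS if o in words), None)
    return action, color, obj

print(ground("Please pick up the red block"))   # ('grasp', 'red', 'block')
```

The hard part, and the subject of the research, is doing this robustly for open-ended language and grounding it in the robot's actual perception of the scene.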

Image: DFKI
