How to Teach Autonomous Vehicles to Make Driving Decisions Like Humans?

Jul 30, 2016 @ 20:59


Stanford engineers are conducting experiments to translate social behavior into algorithms so that self-driving cars will maintain vehicle safety and passenger comfort.


Autonomous robotics has evolved enormously over the last few decades, and one of the most talked-about and heavily invested technologies is the autonomous vehicle. Research in the field started in the early 1960s, and the first self-sufficient, truly autonomous vehicles appeared in the 1980s at Carnegie Mellon University. But the real kick-start for autonomous vehicle development came from the Defense Advanced Research Projects Agency (DARPA).

At present, autonomous vehicle technology is moving very fast, but deploying these vehicles into human society is neither safe nor reliable yet. In the last month alone we have heard of two accidents involving autonomous vehicles: a Tesla crashed while running in Autopilot mode, and an autonomous security robot ran over a toddler in a shopping mall.

So, to integrate autonomous vehicles into everyday life, researchers need to teach cars how to make the safe driving decisions that come intuitively to human drivers. Stanford engineers are doing exactly that, conducting experiments to translate social driving behavior into algorithms that maintain vehicle safety and passenger comfort.




Human drivers make many considerations subconsciously to keep the vehicle safe and its occupants comfortable. We have to program autonomous vehicles to make the same kinds of decisions in the same kinds of scenarios, and it is really important to do so in an ethical and responsible way.

Human drivers will often violate traffic laws in order to maintain vehicle safety and occupant comfort. As a programmer, what do you do when an autonomous vehicle encounters an obstacle in the middle of its lane? A human driver would go around the obstacle and cross the double yellow line, assuming the oncoming lane is clear; but do you program the autonomous vehicle to decide to break the law?

So programmers have to decide ahead of time how these autonomous vehicles should maneuver. For the obstacle scenario above, there are three options:

- Treat the double yellow line as a strict constraint: the vehicle must come to a complete stop to avoid hitting the obstacle.
- Minimize how far the vehicle crosses the double yellow line: it passes very close to the obstacle, which is very uncomfortable for the passenger.
- Enter the oncoming traffic lane, when it is clear, to give the obstacle more space.

The autonomous vehicle moves according to algorithms, and those algorithms have constraints as well as costs that we have to tune: how far away to stay from an obstacle, and how close it is acceptable to get. In this way we can translate human comfort and safety into numerical constraints and costs.
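To make this concrete, here is a minimal sketch of how such a trade-off could be encoded. The maneuver names, weights, and distances are illustrative assumptions, not Stanford's actual algorithm: it scores the three maneuvers from the list above with a weighted cost, and treats an occupied oncoming lane as a hard constraint.

```python
import math
from dataclasses import dataclass

@dataclass
class Maneuver:
    name: str
    obstacle_clearance_m: float  # lateral gap left to the obstacle when passing
    line_violation_m: float      # how far the path crosses the double yellow line
    requires_full_stop: bool = False

# The three candidate behaviors from the list above (all numbers illustrative).
CANDIDATES = [
    Maneuver("stop_behind_obstacle", 0.0, 0.0, requires_full_stop=True),
    Maneuver("minimal_line_violation", 0.3, 0.2),
    Maneuver("enter_oncoming_lane", 1.5, 1.8),
]

def maneuver_cost(m: Maneuver,
                  w_clearance: float = 4.0,  # discomfort of passing close to the obstacle
                  w_violation: float = 1.0,  # penalty per meter of line violation
                  w_stop: float = 5.0,       # delay/annoyance of a full stop
                  oncoming_lane_clear: bool = True) -> float:
    """Translate comfort and legality trade-offs into one numeric cost."""
    # Hard constraint: never cross the line into occupied oncoming traffic.
    if m.line_violation_m > 0 and not oncoming_lane_clear:
        return math.inf
    if m.requires_full_stop:
        return w_stop  # stopping avoids the close pass entirely
    # Soft costs: close passes are uncomfortable, line crossings are penalized.
    return w_clearance / (m.obstacle_clearance_m + 0.1) + w_violation * m.line_violation_m

best = min(CANDIDATES, key=lambda m: maneuver_cost(m, oncoming_lane_clear=True))
print(best.name)  # -> enter_oncoming_lane; with the lane occupied, stop_behind_obstacle wins
```

With these example weights the wide lane change wins, but raising the violation penalty, or marking the oncoming lane as occupied, makes the full stop the chosen behavior; that weight tuning is exactly where human comfort and safety judgments get translated into numbers.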

See the video of Stanford's experiment with these three scenarios:

[Embedded video: Stanford's three-scenario driving experiment]
