Wednesday, October 28, 2015

"Emergency Situations"

From a Road and Track article on a Nissan concept vehicle:

...but it's what it does when you put it in Piloted Drive that's amazing. The steering wheel retracts into the instrument panel and is replaced by a big flat screen . . . like something out of Transformers, just without the guns...

But what if you need the steering wheel in an emergency situation? Maybe I've seen too many movies, but I'm quite certain that there will be at least one situation on the road you encounter that requires some kind of snap-decision making.

I think it hasn't yet dawned on the author that the whole point of these vehicles is that humans are "bad" at emergency situations and therefore will be replaced by the AI. In other words, the human IS the problem to be solved. This is why Google's car has nothing for the "driver". Everyone in Google's car is a passenger. The future, as the industry (both auto and insurance) sees it, is no human driver on public roads. EVER. The automakers and/or tech companies will make the technology available, and the insurance companies will bankrupt any but the richest owner who wants to actually drive. That is, if the government doesn't pronounce human driving a public health hazard and ban it first.

Friday, October 23, 2015

Some Ideas About Automated Vehicle Development

What follows are the thoughts of someone who is very much NOT AN ENGINEER. Just someone curious about it. Personally, I want to drive myself and change my own gears...with a clutch, thank you very much.

So, having watched the latest video of Tesla's Autopilot making a dive for the opposite side of the road when it got confused, I started thinking about how engineers are approaching automated driving. What follows is purely speculative, as I have no idea what the engineers are actually doing.

The first thing I noticed is that the focus is on safety. It seems to me that automated driving, outside of DARPA, is overly concerned with safety. I was watching a test drive of the GLE AMG and the driver was complaining about how the safety equipment was interfering with the track testing. It killed the throttle, tightened the seat belts, etc. The car was being overly cautious because it assumed that the driver was not in control. I think that is a problem. The focus for automated systems should be to emulate an experienced driver. I know we like to ding humans for their multiplicity of accidents, but the fact is that humans travel millions of miles with nary an incident. We take in and process, in real time, vast amounts of information and do a lot of predictive analysis when we drive. For all our accidents we do a fucking great job...the vast majority of the time.

Let's make automated systems act like humans. First off, let's lower “safety” down a few notches in the priorities. The human driver makes a bunch of assumptions when driving. We assume that everyone else on the road will act in a manner that will ensure their own safety. We humans look for a reason to NOT think this is the case. It seems to me that automated systems assume everything is a danger until it is no longer in range. Here's an example. You are driving towards an intersection and a vehicle is approaching it at a particularly high rate of speed but IS slowing down. Most humans would note this behavior but would NOT brake or take any action to stop or swerve. 99% of the time this is the correct response. We are AWARE of the other car but we do not REACT to the other car. We also calculate, almost instantaneously, whether we have enough speed to cross the intersection before they would hit us, or enough braking distance to stop if they entered the intersection. We also are aware of how much room we have to maneuver.
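
Just to make that concrete, here's the kind of back-of-the-napkin check I'm picturing. Reminder: not an engineer. The names, the constant-deceleration assumption, and the one-second margin are all made up for illustration.

```python
# Hypothetical sketch: decide whether to stay AWARE or actually REACT
# to fast-approaching cross traffic. Assumes straight-line motion and
# constant deceleration; every name here is invented for illustration.

def braking_distance(speed_mps: float, decel_mps2: float = 7.0) -> float:
    """Distance needed to stop from speed v at constant deceleration a: v^2 / (2a)."""
    return speed_mps ** 2 / (2.0 * decel_mps2)

def should_react(our_speed, our_dist_to_intersection,
                 their_speed, their_dist_to_intersection,
                 margin_s=1.0):
    """Return True only if we can neither clear the intersection first
    nor stop short of it -- otherwise stay aware, but don't brake."""
    time_we_clear = our_dist_to_intersection / max(our_speed, 0.1)
    time_they_arrive = their_dist_to_intersection / max(their_speed, 0.1)

    clears_first = time_we_clear + margin_s < time_they_arrive
    can_stop = braking_distance(our_speed) < our_dist_to_intersection
    return not (clears_first or can_stop)
```

Most of the time a check like this would come back False, which matches the 99% case where a human just keeps driving.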

We also note that the other vehicle has a threshold to cross: the stop sign. Most of us wouldn't react unless we determined that the other vehicle was carrying too much speed/momentum to stop at the stop sign, or the vehicle crossed the imaginary “stop line” demarcated by the stop sign. Automated vehicles need to be able to recognize stop signs from any angle so that they can make such judgment calls and react appropriately.
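
That judgment call boils down to one question: can the other car physically stop before the stop line? Same disclaimer as above, a hypothetical sketch with invented names and a guessed-at “comfortable” deceleration:

```python
# Hypothetical: flag the other vehicle as a threat only if it has carried
# too much speed to stop at the stop line, or has already crossed it.

def crosses_threshold(their_speed_mps: float,
                      dist_to_stop_line_m: float,
                      comfortable_decel_mps2: float = 4.0) -> bool:
    """True if the vehicle has already crossed the stop line or cannot
    plausibly stop before it (v^2 / 2a exceeds the remaining distance)."""
    if dist_to_stop_line_m <= 0.0:          # already past the line
        return True
    stopping_dist = their_speed_mps ** 2 / (2.0 * comfortable_decel_mps2)
    return stopping_dist > dist_to_stop_line_m
```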

Following an “Unmarked” Road

The thing that bothered me the most was that on curved roads the Tesla simply went off the road. The reason for this is that the car needs to “see” white lines to know how to position itself while moving. Who thought that was a good idea? Anyone who has road tripped knows that there are plenty of roads that have very poor or no lane markings. Let's not even begin to talk about snow. So I think the engineers are approaching the positioning question alllll wrong. Again, how do humans know how to position themselves? White lines are taken as cues to the shape of the road, but we process more than that. Most of us cue heavily off of the vehicle in front of us. Automated systems should be built to prioritize the vehicle ahead as the cue for vehicle position, just as humans do. These cues become even more important in situations like snow and heavy rain where the road is practically invisible.
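
Here's a rough sketch of what “cue off the car ahead” might look like: blend the lane-line estimate with the leader's track, and lean on the leader more as line confidence drops (snow, rain, worn paint). The weighting scheme and every name in it are mine, purely illustrative.

```python
# Hypothetical: fuse two lateral-position cues -- lane lines and the
# path of the vehicle ahead -- weighting the leader more heavily as
# line confidence falls (snow, heavy rain, no paint).

def target_lateral_offset(line_offset_m: float, line_confidence: float,
                          leader_offset_m: float, leader_visible: bool) -> float:
    """line_confidence in [0, 1]; offsets are meters from road center."""
    if not leader_visible:
        return line_offset_m                 # nothing better to go on
    w = max(0.0, min(1.0, line_confidence))  # clamp, just in case
    # Low-confidence lines => follow the leader's track instead.
    return w * line_offset_m + (1.0 - w) * leader_offset_m
```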

Secondly, the vehicle should do as humans do and establish road boundaries by analyzing the larger environment. Automated vehicles need to be able to know the “hard left” of the road. The “hard left” is the do-not-cross threshold for any car. It should not cross to the other side of the road unless it is the only option to not crash. Barriers, grass, double lines and oncoming vehicles should be used to determine the “hard left” of any road. In the absence of this, an automated vehicle should be able to measure the width of the road, divide it in half, and keep the car to the right of the midpoint as a minimum requirement. This is what humans do when there are no markings.
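
That fallback rule is simple enough to write down. Assuming the car can measure the left and right road edges at all (which is the genuinely hard part), the placement is just arithmetic:

```python
# Hypothetical fallback when there are no markings: measure the road,
# split it in half, and keep the car centered in the right-hand half.

def unmarked_lane_center(left_edge_m: float, right_edge_m: float) -> float:
    """Edges measured as lateral positions (left < right). Returns the
    target position: the center of the right-hand half of the road."""
    width = right_edge_m - left_edge_m
    midpoint = left_edge_m + width / 2.0     # the do-not-cross line
    return midpoint + (width / 4.0)          # center of our half
```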

Road Arcs

There are two scenarios for curved roads. One has a vehicle ahead and one has no vehicle. Let's take the first. If there is a vehicle on a curved road, the car should make one assumption: all curved roads are smooth curves. That is, there are no corners. With this assumption in place the car should note that the vehicle in front of it is HIGHLY unlikely to make an abrupt 90-degree turn. That is, the vehicle ahead is following the curve. The automated vehicle should prioritize following the vehicle ahead over road markings because it is MOST likely that the vehicle is following the road. Using this assumption, the vehicle should use a “best fit” model for what the road ahead looks like until it can actually “see” the road. Humans do this all the time on “blind curves”. Even though we know the curves on roads we drive daily and have stored them in our biological GPS, whenever we encounter a novel road this is what we do. We assume the curve will be smooth. We assume the car in front of us is following the curve, that we should mimic its actions, and we predict what the road ahead looks like until we can see it.
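
One way to picture the “best fit” idea: fit a smooth arc (a circle, since I'm assuming no corners) through the lead car's recent positions and extrapolate it until the road actually comes into view. The least-squares circle fit below is a standard textbook trick (the Kasa fit), not anything I know automakers actually use.

```python
import numpy as np

# Hypothetical "best fit" for a blind curve: fit a circle through the
# lead vehicle's recent (x, y) positions (Kasa least-squares fit) and
# use it as the predicted road shape until the road is actually visible.

def fit_arc(xs: np.ndarray, ys: np.ndarray):
    """Least-squares circle fit: returns (center_x, center_y, radius)."""
    A = np.column_stack([xs, ys, np.ones_like(xs)])
    b = xs ** 2 + ys ** 2
    c0, c1, c2 = np.linalg.lstsq(A, b, rcond=None)[0]
    cx, cy = c0 / 2.0, c1 / 2.0
    r = np.sqrt(c2 + cx ** 2 + cy ** 2)
    return cx, cy, r

# Toy usage: points on a curve of radius 50 centered at (0, 50).
theta = np.linspace(-0.3, 0.3, 8)
xs, ys = 50 * np.sin(theta), 50 - 50 * np.cos(theta)
print(fit_arc(xs, ys))  # ~ (0.0, 50.0, 50.0)
```

A nice side effect: on a straight road the fitted radius just comes out huge, so the same model covers straights and curves.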

This kind of driving works 99.9% of the time and that's a great rate!

Now, a curved road with no other vehicle in front of us means that we cannot use the “mimic that” algorithm. Instead we fall back to road analysis. The vehicle should determine the “hard right” (and probably the “hard left”) and place the car dead center of its side of the road. In cases where the road is barely wider than the vehicle (relative to a wide two-way road), the car would put itself dead center.
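
Putting the no-leader case into the same napkin-code style, the narrow-road cutoff below is a pure guess on my part:

```python
# Hypothetical placement with no leader to mimic: center in our half of
# the road, unless the road is barely wider than the car, in which case
# just take the dead center of the whole road.

CAR_WIDTH_M = 1.9      # illustrative
NARROW_FACTOR = 1.5    # "barely wider than the vehicle" -- my guess at a cutoff

def no_leader_position(left_edge_m: float, right_edge_m: float) -> float:
    width = right_edge_m - left_edge_m
    if width < NARROW_FACTOR * CAR_WIDTH_M:
        return left_edge_m + width / 2.0          # narrow road: dead center
    return left_edge_m + 3.0 * width / 4.0        # center of the right half
```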

This idea of “hard right” and “hard left” cannot be stressed enough. The reliance on road markers is, IMO, a dangerous assumption. Perhaps engineers can start with the assumption that there are no markers on the road. There is only road and “not road”. Once you can get the automated system to determine the difference between the two, without lines, then a lined road is that much easier to handle. I realize this may be counter to the idea that we tackle the easiest problem first, but I think that by tackling the hard problem first (which is really a matter of dealing with perception) we can have automated systems that won't fail...and won't try to cross the divider.
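
If I were sketching that, it would be segmentation-first: decide road vs. not-road, derive the hard left and hard right from that, and place the car, no lines needed. Painted lines, when present, would just refine this. Here's the idea boiled down to a one-dimensional strip of road cells (all invented, obviously):

```python
# Hypothetical toy version of "road vs. not-road first": the scene is a
# 1-D strip of cells across the car's path, True where the surface is road.

def road_edges(drivable: list[bool]) -> tuple[int, int]:
    """Indices of the hard-left and hard-right boundaries of the road."""
    left = drivable.index(True)
    right = len(drivable) - 1 - drivable[::-1].index(True)
    return left, right

def target_cell(drivable: list[bool]) -> int:
    """Default placement with no lines at all: center of the right half."""
    left, right = road_edges(drivable)
    midpoint = (left + right) / 2.0
    return round(midpoint + (right - midpoint) / 2.0)

# Toy usage: grass at each edge, road in the middle.
strip = [False, False] + [True] * 12 + [False, False]
print(road_edges(strip))   # (2, 13)
print(target_cell(strip))  # 10 -- right of the midpoint, inside the road
```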