# Autonomous Cars



## Foxbat (Aug 1, 2019)

If you put aside the arguments and debates about how it will affect employment, I think autonomous cars could drastically cut down on traffic deaths once the software and hardware are sufficiently evolved.

However, I was watching a recent episode of BBC's Click about them and it left me a bit perturbed.
Here's why:
In the article, they covered the death of a pedestrian hit by an Uber car in Arizona. Much of the blame lies with the 'safety driver', who was busy watching a TV show on her phone and failed to spot the pedestrian in time. But there's more. Comparisons between the autonomous car's footage and normal dashcam footage taken on the same bit of road at the same time of night showed the Uber footage to be significantly below par: the viewing distance was much greater with the standard dashcam. Why? Are Uber using sub-par components, perhaps to cut costs?

And there's more. Uber used a Volvo that had its own anti-crash software and hardware, but Uber had deliberately disabled it because it conflicted with their own system, causing the car to jerk. Was this a case of the need for performance overriding safety?

Here's a question for the lawyers among us. If I buy a car and deliberately disable the anti-crash system, then go on to kill a pedestrian, how liable am I? My own conscience tells me that I would be 100% liable but maybe that's just me.

Astonishingly (to me), Uber have been found to be not criminally liable for the woman's death. I would expect the safety driver to suffer the consequences of her negligence but I would have thought Uber must also bear some responsibility for disengaging a car manufacturer's safety system.

I am left to wonder if there is something political going on here - perhaps an attempt to try and keep the autonomous car industry out of the courts and therefore in a good light with consumers.

Whatever happens, many people are already unhappy with the notion of driverless cars, and this won't help win them over.


----------



## CupofJoe (Aug 1, 2019)

If you actively defeat the safety systems of anything, then I would think that you are responsible for any outcome of the use of that item.
Whether you can sue the manufacturer for not making their system “hack proof” is another question.

For me the Safety Driver is to blame. 
They were employed for the exact purpose of being able to take control in case the car’s systems were not able to cope. They chose not to perform their duty. 
People have tried and failed to sue Tesla for crashing their car while it was in autonomous mode. Tesla pointed out that the driver still has to be ready and able to take control if needed. So you can’t get into a Tesla drunk and get it to take you home.

Also, I doubt that any autonomous driving system is using just visible light. Smoke and fog [and even rain and snow] could render it useless. We had an autonomous car demo’d at work and it was using all sorts of interesting wavelengths to see near and far [different frequencies for fidelity and bandwidth] and even around corners [but I didn’t really understand that bit; it had to do with Doppler shifts, echoes and AI].

As for fully autonomous cars for the masses, I think they are decades off. The hardware might be there but the software isn’t. Someone will have to decide on the ethical values required to drive a car through crowded city streets. Given no other options, does it kill three elderly people? Two children? A pregnant woman? Or crash, killing its passenger? Humans, for good or bad, can make that judgement; a machine will have to be told what to do, or at least learn what is acceptable to its passengers and society.
Planners already do it for disasters, weighing up the “cost” of human life. Who is "worth" saving...

Car ownership as the norm for most people is probably coming to an end. I think universal ride sharing and limited use autonomous cars [taxis around purpose built environments] will come sooner. If it is not already here. Already I know of apartment blocks that have no dedicated parking but a few shared [electric] cars that can be booked ad hoc. Now military use? I can see that in the next decade.


----------



## Kerrybuchanan (Aug 1, 2019)

I have just ordered myself a new Seat Ateca car on the motability scheme (for those not in the UK, this is a scheme to provide transport for disabled people and takes the form of a three-year lease with all costs covered except for fuel, in exchange for a disability money allowance). 

The version I'm getting has traffic jam assist, which controls the steering, brakes and throttle at low speeds and brakes automatically if a pedestrian steps out, a lane-keeping assistant which stops you from drifting across lanes, rear cross traffic alert, blind-spot monitor, traffic sign recognition and adaptive cruise control. It is not a self-driving car, but it seems to me to have all the benefits with none of the drawbacks, assuming this stuff actually works. 

I'm currently driving a 15-year-old rust-bucket, so it will seem like total luxury to me, whatever!


----------



## REBerg (Aug 1, 2019)

At this point in the development of driverless vehicles, a pedestrian death is seen as big news.
What is not covered is how many times the system used by this particular car recognized pedestrians and reacted as designed. I also wonder how many pedestrians were injured or killed by vehicles operated by humans during the same period.
Driverless cars are coming. My own car is just a Prius, but it detects vehicles, objects and pedestrians in all directions, maintains a standard vehicle following distance, warns and corrects for lane deviations, parks itself and brakes if an imminent collision is indicated by speed and distance.
Driverless accidents are big news now because the technology is still developing. In a decade or so, a driverless car accident will be even bigger news because it will be so rare.


----------



## Foxbat (Aug 1, 2019)

Lots of good points. 
I totally agree that the safety driver in this case is most responsible, but defeating an original safety system and replacing it with one that is essentially still unproven appears reckless to me. I would argue it contributed to the death: if the original safety system had not been defeated, it might well have kicked in and saved the pedestrian's life even if the safety driver had still not intervened by that point.

I also agree that any autonomous system must be using more than visible light - which makes it even more staggering that this pedestrian died in the dark. What happened to all those other detectors? 

Finally, Uber has consistently failed to reach its target of 13 miles without a driver intervention. Two other companies were mentioned in the article: one had reached over 500 miles without the need for driver intervention and the other over 5,000. This speaks volumes about Uber's system and why I suspect they may be doing it 'on the cheap' or cutting corners.


----------



## Robert Zwilling (Aug 1, 2019)

Foxbat said:


> I am left to wonder if there is something political going on here



I guess when we go ahead with something without knowing everything about it, that means compromise of some kind, which can be a good definition of political.

Unlike when one person inflicts injury on another, cars have always attracted a different level of responsibility when they inflict injury. It appears to start out that either the car was at fault, so no one's at fault, as if it were nothing personal, or the car is responsible and the driver isn't so much responsible for what happens. Both of these leave the supposed fault in the pedestrian's court: wearing dark clothes, not watching what they were doing, walking in the road. It can also be more of a crime to leave the accident scene than to cause the accident itself.

Innocent until proven technologically guilty is AI's defence, and probably a harder one to overcome. The program was trying the best it could, unlike people without AI, who might not have been doing the best they could.

In the world of business it is usually harder to assign blame to a non human entity than to a human. When blame is assigned, it can be the result of a person's decision that made that happen, and not the product itself, no matter how irresponsible the product appears to be.

I read an article that said calling car navigation and control systems Autopilot is perhaps a mistake, or an attempt to imply that a car's self-driving systems are safer than they are. AI will improve road safety eventually; right now the bugs are being worked out. Autopilot works well for aircraft, but the great distances between aircraft mask problems that remain unsolved when the density of interacting vehicles is greatly increased, as in car traffic. Autopilot also doesn't do much when aircraft are taking off or landing, unlike cars, where that kind of close manoeuvring is a good part of the experience.

It might be better to have two programs monitoring the situation when auto driving. A consensus decision could be better thought out electronically with true parallel processing, and it would also provide redundancy through the use of different sensors. The problem is that, at this time, we can't smoothly combine the efforts of two independent guidance systems in cars. It is assumed the external AI can handle the car better than the manufacturers can program their own vehicles. How true is that? If the AI can't interact with the way the car is designed, then perhaps the AI is not ready to use.
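A toy sketch of that redundancy idea, purely as an illustration: the 0.5 threshold and the "brake if either system objects" policy are my own assumptions, not anything a manufacturer has published, but they show why two independent estimators are safer than one.

```python
def consensus_brake(hazard_a: float, hazard_b: float,
                    threshold: float = 0.5) -> bool:
    """Fail-safe fusion of two independent hazard estimates.

    Each argument is a hazard probability in [0, 1] from a separate
    guidance system (say, camera-based vs. radar-based). Braking is
    triggered if EITHER system is confident, so a blind spot in one
    sensor suite cannot suppress the other's warning.
    """
    return hazard_a >= threshold or hazard_b >= threshold

# Agreement, disagreement, and neither confident:
print(consensus_brake(0.9, 0.8))   # both see a hazard  -> True
print(consensus_brake(0.9, 0.1))   # only one sees it   -> True
print(consensus_brake(0.2, 0.3))   # neither confident  -> False
```

Note the design choice: an OR over the two systems errs towards braking, which is the conservative failure mode for a car (though, as the Uber case shows, a system that brakes "too readily" is exactly what tempts an operator to switch one of them off).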

When I see videos of robots walking and performing tasks, I sometimes wonder how much of the processing is done within the machine versus what it can send to a more powerful processor outside the robot's body. I guess there are rules for that. It does raise the idea that the car's AI is relying solely on real-time reaction and not on past experience of related events. Those results can be programmed in for automatic response, but I don't think the car's AI is mulling over past experiences and deciding what is best; it is just reacting. Medical computer programs, by contrast, lean heavily on past experience to decide what to do next when making recommendations.

When there is heavy traffic, high speeds and bad weather, the outcome can rest solely on the assumption that nothing bad will happen; it then becomes a matter of chance that nothing out of the ordinary occurs. In bad weather, the car's AI has information about the location of the road from past navigation. If it doesn't have the road's layout from prior navigation and the precipitation is too heavy, the AI car will pull over to the side of the road, or attempt to, until it can "see" again. Can they be fooled by the deep puddle that looks flat because the road drops off, or do they have some kind of real-time elevation detection running all the time?

As time goes on and cars start receiving information from the roadway, traffic signals and other external situation reports, in some areas voluntarily overriding auto driving could become a crime in itself. It would be amusing if, when a road sign has a speed set on it, no car could exceed that speed whether the AI is driving or not.


----------



## Ursa major (Aug 1, 2019)

Foxbat said:


> If I buy a car and deliberately disable the anti-crash system, then go on to kill a pedestrian, how liable am I?


My car has this, and various other, safety systems. The car's manual is liberally sprinkled with statements** such as the one that comes with Front Assist (my car's radar-controlled warning*** and emergency braking system):





> ■ The system only serves to support and does not relieve the driver of the responsibility for the vehicle operation.
> ■ The system has physical and system-related limitations. For this reason, the driver may experience some undesired or delayed system responses in
> certain situations. You should therefore always be alert and ready to intervene!
> ■ Always adapt your speed and safety proximity to the vehicle ahead to the current visibility, weather, road and traffic conditions.
> ...


Okay, this is the manufacturer trying to avoid any blame, which it's squarely placing (correctly IMHO) on the driver.

Note that I too have problems with Front Assist: on approaching a roundabout on a road with multiple lanes, the system will warn -- and even brake -- if I'm driving along a lane that is itself clear because, as the road curves to the left where it joins the roundabout, it assumes that I am driving towards one or more stationary or slow-moving cars that lie across my path. I can't steer to avoid this: I would either hit the kerb or crash into cars to my left. Despite this, I have not felt the need to disable the system. After all, it helps to protect me from, for example, a car pulling out of a side road -- or, if it's coming the other way, turning right across my lane and into a side road -- when the driver has overestimated the time and space they have to make the manoeuvre.

That Uber's software cannot cope with systems that it _knows_ are on, and should be functioning on, the car indicates to me that they should really not be using such obviously poorly specified software on cars that are driving on public roads.


** - That last bullet point is interesting: I don't know if this applies to the system that Volvo uses.

*** - It's also used for the adaptive cruise control, where, without there being an emergency, it adjusts the speed of the car based on the speed and proximity of the car in front.


----------



## thaddeus6th (Aug 1, 2019)

There's a major flaw with self-driving cars.

They require someone to be behind the wheel. Not driving, but constantly alert: ready to shift at any moment from fiddling with their telephone or playing I Spy to taking immediate evasive action or slamming on the brakes.

That's far harder than actually paying attention when driving oneself. It's a massive drawback, and one that makes me think the utility of self-driving cars has to be heavily compromised.


----------



## Ursa major (Aug 1, 2019)

> There's a major flaw with self-driving cars. They require someone to be behind the wheel.


The problem with them is that they don't require someone to be behind the wheel. At the moment, if I am crossing a road where there are cars apparently parked across** the road from me, I can tell whether they may be about to move because they have someone sitting in the driver's seat.

A future fully autonomous car may not have anyone in it at all and yet, if it was, say, an Uber cab, it might decide to pull out at any moment. Unless it is required to signal the manoeuvre well in advance, I could find myself walking towards the gap between that car and the car in front of (or behind) it, a gap that is about to cease to exist, with less than optimal consequences.


** - Let's pretend for a moment that no-one crosses the road from _between_ a couple of parked cars....


----------



## REBerg (Aug 1, 2019)

In the not-so-distant future, autonomous vehicles will no longer need to anticipate the moves of vehicles being driven by humans.
All vehicles will be self-driving and networked. Each vehicle will "know" the exact positions and programmed moves of all the other vehicles on the road. Drivers will be restricted to racetracks and antique parades.
Traffic signals will be eliminated. Speeds will increase dramatically. Accidents will become statistically insignificant.
If I am so fortunate as to live long enough, I look forward to calling a car, telling it where I want to go, darkening the windows and taking a nice nap on the way to my destination.


----------



## AlexH (Aug 1, 2019)

Moral Machine: a platform for public participation in and discussion of the human perspective on machine-made moral decisions.

moralmachine.mit.edu


----------



## Ursa major (Aug 1, 2019)

REBerg said:


> In the not-so-distant future, autonomous vehicles will no longer need to anticipate the moves of vehicles being driven by humans.


Even here, in highly urbanised England, we have wild and untethered animals roaming about the place (e.g. deer and even horses).

And then there are the lesser spotted cyclists and pedestrians (who do not have jay-walking rules to obey), not to mention their dogs (and yes, I do mean cyclists, as well as pedestrians, roaming about the place accompanied by their pet pooches).


----------



## REBerg (Aug 1, 2019)

Ursa major said:


> Even here, in highly urbanised England, we have wild and untethered animals roaming about the place (e.g. deer and even horses).
> 
> And then there are the lesser spotted cyclists and pedestrians (who do not have jay-walking rules to obey), not to mention their dogs (and yes, I do mean cyclists, as well as pedestrians, roaming about the place accompanied by their pet pooches).


OK, detection technology for pedestrians, pets, cyclists and wildlife will need to be perfected before autonomous vehicles can rule the streets and highways. Maybe all but the wildlife could be chipped to add them to the network and underwrite their safety.


----------



## Ursa major (Aug 1, 2019)

REBerg said:


> Maybe all but the wildlife could be chipped to add them to the network and underwrite their safety.


I'm reminded of an (I think apocryphal) recipe of Mrs Beeton's that started, "First catch your hare...."


----------



## Robert Zwilling (Aug 2, 2019)

It is one thing to have a power failure for stationary objects, like buildings, but what happens to large masses of moving traffic when the internet burps, suffers from latency (a term that makes self-driving car manufacturers shudder), or fails outright? What does that do to all those vehicles relying on electronic contact instead of good old-fashioned line of sight with a brain that can adapt to whatever it is seeing at a moment's notice? That doesn't even take hacking into account. How does such a system react? Does it slam on the brakes for everything moving at that moment? Slowly wind everything down as it shifts every vehicle over to its own on-board sensors until everything rolls to a stop? Or do we let the manufacturers assume that is never going to happen?


----------



## CupofJoe (Aug 2, 2019)

REBerg said:


> OK, detection technology for pedestrians, pets, cyclists and wildlife will need to be perfected before autonomous vehicles can rule the streets and highways. Maybe all but the wildlife could be chipped to add them to the network and underwrite their safety.


I can see there is a bit of a problem in convincing _Everyone_ that they need to be _Chipped_ so "the machines always know where you are..."
If someone wants autonomous cars to take over, then it is up to them to make the cars safe from me, not to make me safe for them.
Limited automation I can see coming soon. Didn't Mercedes trial drone trucks on an autobahn earlier this year? One truck with a driver and six autonomous cargo trucks, if I remember...


----------



## Foxbat (Aug 2, 2019)

Ursa major said:


> And then there are the lesser spotted cyclists and pedestrians (who do not have jay-walking rules to obey), not to mention their dogs (and yes, I do mean cyclists, as well as pedestrians, roaming about the place accompanied by their pet pooches).



As a keen cyclist myself, I am often horrified to see other cyclists with their dogs tethered to their waists. It would only take the dog sniffing a deer and taking off for the cyclist to be pulled out of the saddle and probably suffer some injury. One thing I've learned over the years of cycling is that it is not a relaxing pursuit. You need to be vigilant and on your guard at all times. This probably applies to all modes of transport, including the autonomous kind. But some folk will never learn...



CupofJoe said:


> So you can’t get into a Tesla drunk and get it to take you home.



This made me wonder: instead of a charge of Driving whilst Under the Influence, could we, in the future, see a charge of Not Driving whilst Under the Influence?


----------



## mosaix (Aug 2, 2019)

REBerg said:


> In the not-so-distant future, autonomous vehicles will no longer need to anticipate the moves of vehicles being driven by humans.
> All vehicles will be self-driving and networked. Each vehicle will "know" the exact positions and programmed moves of all the other vehicles on the road. Drivers will be restricted to racetracks and antique parades.
> Traffic signals will be eliminated. Speeds will increase dramatically. Accidents will become statistically insignificant.
> If I am so fortunate as to live long enough, I look forward to calling a car, telling it where I want to go, darkening the windows and taking a nice nap on the way to my destination.



My daughter works on a project at Coventry University developing such a system, where the cars 'talk to each other' continuously. The problem is that there is little communication between the manufacturers to come up with a standard protocol.
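For a rough feel of what a common format could look like, here is a minimal sketch. The field names are my own invention, not any manufacturer's (real standards such as SAE J2735's Basic Safety Message cover similar ground); the point is only that every make of car must agree on one encoding before they can "talk".

```python
import json
from dataclasses import dataclass, asdict

@dataclass
class VehicleStatus:
    """Minimal vehicle-to-vehicle status broadcast (illustrative only)."""
    vehicle_id: str
    lat: float          # latitude, degrees
    lon: float          # longitude, degrees
    speed_mps: float    # speed, metres per second
    heading_deg: float  # compass heading, 0-359
    timestamp_ms: int   # milliseconds since epoch

    def encode(self) -> str:
        # Serialise to JSON so any make of car could parse it.
        return json.dumps(asdict(self))

    @classmethod
    def decode(cls, payload: str) -> "VehicleStatus":
        return cls(**json.loads(payload))

# Round trip: what one car broadcasts, another can reconstruct exactly.
msg = VehicleStatus("car-42", 52.4068, -1.5197, 13.4, 270.0, 1564750000000)
assert VehicleStatus.decode(msg.encode()) == msg
```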

Considering the wealth of regulation that manufacturers have to pay regard to before a vehicle is allowed on the road (seat belts, crumple zones, emissions etc.) I'm surprised that there isn't a world-wide body controlling and testing driverless cars. Perhaps there is?


----------



## Foxbat (Aug 2, 2019)

mosaix said:


> My daughter works on a project at Coventry University developing such a system, where the cars 'talk to each other' continuously. The problem is that there is little communication between the manufacturers to come up with a standard protocol.



In the Click article, this was touted as one of the big advantages - that autonomous cars all over the world could learn from each other. Almost like Skynet on wheels.


----------



## Ursa major (Aug 2, 2019)

If manufacturers cannot even agree on the basic interface by which their cars might communicate, how are they going to agree on the interface for all the very complex data that would be required to allow such "learning"? And who or what, exactly, determines what the lessons to be learnt are?

Even if both those issues could be solved (although I'm not sure how they, particularly the latter, could be), why would the authorities responsible for safety regulations allow such behaviour-altering data to be exchanged between all those individual cars? It isn't the cars on the roads that should be "learning", but those in the testing labs.

I'd rather updates to the cars' software and data were thoroughly tested before being uploaded (randomly?) into a car already travelling on a road, particularly a car that was, because of the supposed superiority of its operation over that of a human, driving much closer to other cars than a human driver ever should.

And what if different manufacturers' cars "learn" different lessons based on the new data they receive (assuming that they're ever in a position to receive it), or react differently even if their interpretations of that data are similar?


TL;DR: The road is not the place to be doing alpha or beta testing.


----------



## Danny McG (Aug 2, 2019)

Ursa major said:


> There's problem with them is they don't require someone to be behind the wheel. At the moment, if I am crossing a road where there are cars apparently parked across** the road from me, I can tell whether they may be about to move because they have someone sitting in the driver's seat.


----------



## mosaix (Aug 2, 2019)

Robert Zwilling said:


> It is one thing to have a power failure for stationary objects, like buildings, but what happens to large masses of moving traffic when the internet burps, suffers from latency (a term that makes self-driving car manufacturers shudder), or fails outright? What does that do to all those vehicles relying on electronic contact instead of good old-fashioned line of sight with a brain that can adapt to whatever it is seeing at a moment's notice? That doesn't even take hacking into account. How does such a system react? Does it slam on the brakes for everything moving at that moment? Slowly wind everything down as it shifts every vehicle over to its own on-board sensors until everything rolls to a stop? Or do we let the manufacturers assume that is never going to happen?



If I remember correctly, Robert, the communication will be via a protocol like Bluetooth rather than the internet. But still, there has to be a way of controlling things when communication fails - as it will.

Edit: Although my satnav has an internet connection via a SIM card so, no doubt, all future systems will have some kind of internet involvement.
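One plausible shape for that "way of controlling things when communication fails" is a heartbeat watchdog, sketched below purely as an assumption on my part (the 500 ms timeout and the mode names are invented for illustration): each car trusts networked position data only while messages keep arriving, and drops back to its own sensors the moment they stop.

```python
def control_mode(now_ms: int, last_heartbeat_ms: int,
                 timeout_ms: int = 500) -> str:
    """Pick a driving mode from the age of the last V2V heartbeat.

    While heartbeats are fresh, the car may rely on positions shared
    over the network; once they go stale, it must assume the link is
    down and fall back to on-board sensors (and wider following gaps).
    """
    if now_ms - last_heartbeat_ms <= timeout_ms:
        return "networked"
    return "sensors_only"

print(control_mode(10_000, 9_800))  # fresh heartbeat  -> networked
print(control_mode(10_000, 9_000))  # 1 s of silence   -> sensors_only
```

The key property is that the fallback is local: no network is needed to detect that the network has gone.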


----------



## CupofJoe (Aug 2, 2019)

Do you think that the car manufacturers will try and give their autonomous cars a personality that reflects their brand?
Will a Ferrari or Porsche give you a different drive experience than that from a Ford or Honda? I don't mean what is in the passenger cabin, but how the car feels as it moves on the road.


----------



## Vladd67 (Aug 2, 2019)

Another way for people in China who fail on their social score to have their movements restricted.


----------



## REBerg (Aug 2, 2019)

CupofJoe said:


> Do you think that the car manufacturers will try and give their autonomous cars a personality that reflects their brand?
> Will a Ferrari or Porsche give you a different drive experience than that from a Ford or Honda? I don't mean what is in the passenger cabin, but how the car feels as it moves on the road.


I see car makers temporarily trying to outdo each other with self-driving models, but I also see individual ownership giving way to manufacturing company fleets providing transportation on demand.
I suppose riders might derive some sort of status through the quality of the vehicle they summon, but the days of taunting your neighbor with your brand new Mercedes would be over.


----------



## CupofJoe (Aug 2, 2019)

REBerg said:


> I see car makers temporarily trying to outdo each other with self-driving models, but I also see individual ownership giving way to manufacturing company fleets providing transportation on demand.
> I suppose riders might derive some sort of status through the quality of the vehicle they summon, but the days of taunting your neighbor with your brand new Mercedes would be over.


I was reading an article asking whether we have already bought our last cars. It made a lot of good arguments for urban and suburban car ownership coming to an end; not so much for those without street lighting. The move to shared ownership or subscription has the car companies quaking at what might happen over the next 10-20 years. I guess the bragging rights will be in whether you subscribe to Rolls-Royce-Rolls, Mercedes-Uber or Dacia-Lyft?


----------



## Vladd67 (Aug 2, 2019)

I see a think tank has suggested that all private cars should be banned from London by 2030, and that TfL should operate its own version of Uber, make e-scooters available and offer free tube rides. Who pays for the free tube service wasn’t mentioned.


----------



## thaddeus6th (Aug 2, 2019)

Think tanks come out with some tosh sometimes. I remember one from a couple of years ago suggesting abolishing real world money in the next decade or so.


----------



## Foxbat (Aug 2, 2019)

On the subject of learning: as far as I understood it, the learning was not about alpha or beta testing on the road but more a build up of AI experience. The argument touted was that an individual who drives on the road can only improve by learning from their own experience but autonomous cars can share that experience with each other almost instantaneously. The example given was something like  a car learns something in (say) London and minutes later that learning is put into practice in Miami.

P.S. Isn't the answer to the comms situation the much vaunted 5G?


----------



## Foxbat (Aug 2, 2019)

Just had a thought. If 5G is the answer and Huawei decided to make cars, would they be allowed in the USA or would that be seen as a prelude to invasion?


----------



## CupofJoe (Aug 2, 2019)

Foxbat said:


> P.S. Isn't the answer to the comms situation the much vaunted 5G?


That is what a LOT of money is being spent on... 5G on your mobile phone is not the same as 5G for the IoT [Internet of Things].


----------



## Ursa major (Aug 2, 2019)

Foxbat said:


> but autonomous cars can share that experience with each other almost instantaneously. The example given was something like a car learns something in (say) London and minutes later that learning is put into practice in Miami.


But that _would_ be alpha and beta testing on the road.

How can something be tested elsewhere if, _minutes_ after Car A learns something (Learnt Lesson L) in London (based on Incident X), Car B is putting it into practice in Miami? Where is the off-road testing? There isn't any. There's no involvement in the creation of the solution (that which is learnt and then passed onto other cars) by anything other than Car A.

There is no guarantee that the solution does not cause problems, either in a replay of Incident X or incidents that resemble Incident X. And if the solution is a variation on "I wouldn't start from here", it may involve doing something else in the run up to Incident X that avoids Incident X entirely... but then has other cars changing their behaviour in situations that resemble the run-up to Incident X that would not have ever led to a replay of Incident X, but might lead to another incident.

Now if that sounds complicated, just imagine what lesson learning means, both for the cars learning them and to those changing their decision trees based on that learning.

And then there is the issue that laws (or even the side of the road on which the cars drive) are not identical in different countries. Such things mean that the lessons learnt have to be transformed so that they take account of the different driving environments found around the world.

So the list of areas where problems might occur is legion: incorrect analysis of Incident X; incorrect determination of the solution; incorrect creation of the solution for Car A's decision tree; incorrect transformation of Car A's decision tree changes to match the driving environment experienced by Car B in Miami. And in all of this, there is no independent stress testing of the solution in Car A or Car B (or the hundreds of millions of other cars that are given the solution)... unless you count the stress of passengers in those cars that suddenly start performing strange, not to say inappropriate to the conditions, actions all over the world.

The only safe place to be might be behind vehicle-proof walls.... (And did you see those huge autonomous mining trucks in that episode of Click...? )


----------



## Robert Zwilling (Aug 2, 2019)

If conditions stayed the same, the cars would not need constant updating except for security updates. But the weather is on the move, due to the constantly increasing fresh water being added to the system and the slowly rising temperature. It is literally changing every week, and it is not heading for a calm plateau anytime soon. Autonomous vehicles will need constant updating to respond to the changing physical conditions, which makes for seat-of-the-pants navigation that is never out of beta.

The easiest option is to have the vehicles simply sit it out once the local weather reaches a threshold point. Will the cars reverse direction and not even head into storms to start with? Are emergency vehicles better off always on manual drive, with people able to drive them? Will a new class of pilots spring up, "land pilots" who travel where there are no automated roads? Instead of a driver's license, the ordinary person would get a passenger license. Once people get used to automatic cars, they will start losing their ability to respond rapidly to risky situations while driving. And will bicycles, scooters and motorcycles have to ride in physically separated lanes, their own private roadways, since vehicles not linked into the system would probably be banned from automated roads?


----------



## REBerg (Aug 3, 2019)

If millions of distressed and distracted (as well as more than a few downright dumb) drivers can successfully travel the roadways of the world daily, it would not take an exceptionally advanced computer system to take their places.

Driving isn’t all that complicated. Vehicles are confined to designated areas. Within those areas, a finite set of rules must be followed. Unlike human drivers, autonomous vehicles would follow those rules to the letter.

Networking can keep driverless vehicles from colliding with each other. Sensors can prevent them from hitting people, pets, wildlife and brick walls.

Legal responsibility for accidents would shift from drivers to vehicle manufacturers and victims. Case law would decide how much responsibility would be assigned to a manufacturer and how much to a victim.

The existence of autonomous cars would obviously be public knowledge. Bicyclists, pedestrians, parents and pet owners would be obligated to take reasonable actions to protect themselves and others as they interact with driverless cars -- just as they now do with vehicles controlled by humans.

I honestly believe that autonomous cars will save lives. Now, what about autonomous, passenger-carrying drones?


----------



## Foxbat (Aug 3, 2019)

Ursa major said:


> But that _would_ be alpha and beta testing on the road.
> 
> How can something be tested elsewhere if, _minutes_ after Car A learns something (Learnt Lesson L) in London (based on Incident X), Car B is putting it into practice in Miami? Where is the off-road testing? There isn't any. There's no involvement in the creation of the solution (that which is learnt and then passed onto other cars) by anything other than Car A.
> 
> ...


As I understood it, the piece on learning was _after_ the widespread use of autonomous cars on the roads had been established and this was to be an ongoing process, with a much greater learning database than any one human could have. Perhaps I misunderstood.


----------



## Ursa major (Aug 3, 2019)

> As I understood it, the piece on learning was _after_ the widespread use of autonomous cars on the roads had been established


The number of autonomous cars as a proportion of all the cars on the road has zero relevance to the issue of a lack of testing.

The lack of testing is an inevitable consequence of a car in, say, London learning something -- it isn't doing this in a lab, but on the road -- and what it has learnt being disseminated around the world in a few minutes, including to a car in, say, Miami.

Whether there would be only two autonomous cars in the world, or those two cars would be but two out of all the hundreds of millions of cars in the world (all of them autonomous), makes no difference at all...

..._except_ that if the lesson learnt was not the right lesson:

in the former case, only two cars would be in danger of following the wrong rule,
in the latter case, every car in the world would be in danger of doing that,
and in both cases, not a single second would have been spent on independently analysing or testing the new rule, either in isolation or in combination with all the other rules (an ever-increasing number of them) that the cars are meant to follow.


Frankly, this is the sort of dubious stuff one might expect in a badly thought-out SF story but, unfortunately, it appears to be the norm for reporting on science and technology, where statements made by people working on the technology or science (or, worse, by the marketing departments of the companies involved) are repeated verbatim and without question.


----------



## Toby Frost (Aug 3, 2019)

Think of the fun a Hostile Foreign Power could have when it hacks the controls for the driverless cars!


----------



## Robert Zwilling (Aug 3, 2019)

At times we seem to be living the logic of Philip K. Dick science fiction stories; why would industry not be subject to the same conditions?

Being part of a network, the system would need updates from time to time. How could those updates be tested? Would a small number of cars be updated ahead of others, so that if it was a bad update only a few cars would experience problems? How many would be enough, and if an update was instantly needed, would the risk of mass updating be overlooked with fingers crossed? Are there any operating systems that don't need constant updates? Could a multitude of manufacturers be allowed to make the cars, or would there be only one car company, to make sure everything was always in step? Is this a case of one person's dream being another person's nightmare? Can we even make a network that would encompass all kinds of vehicles using a one-size-fits-all scenario?
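
The "update a small number of cars ahead of others" idea raised above is a real deployment technique, usually called a canary or staged rollout. A minimal sketch of the control logic, with all function names, stage fractions and the health check entirely hypothetical:

```python
import random

def staged_rollout(fleet_ids, update, stages=(0.01, 0.10, 1.0),
                   healthy=lambda deployed: True):
    """Push `update` to progressively larger fractions of the fleet,
    halting before the next stage if the health check fails.
    (Hypothetical sketch: the real push and telemetry are stubbed out.)"""
    remaining = list(fleet_ids)
    random.shuffle(remaining)            # avoid biasing the canary group
    deployed = []
    for fraction in stages:
        target = int(len(fleet_ids) * fraction)
        while len(deployed) < target:
            car = remaining.pop()
            deployed.append((car, update))   # stand-in for the real update push
        if not healthy(deployed):        # e.g. crash/disengagement telemetry
            return deployed, "halted"    # wider fleet is never touched
    return deployed, "complete"

cars = list(range(1000))
deployed, status = staged_rollout(cars, "v2.1-rules")
```

The point of the pattern is exactly the one the post raises: a bad update hits 1% of the fleet, not all of it. It does nothing, of course, for an update that is "instantly needed" everywhere at once.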

There would probably be a market for hacked cars, with fake registrations or more options than normally allowed, such as unlimited speed, done in such a way that they would be cruising under the radar. One aspect of automatic cars is the ability of authorities to shut the car down any time they wanted to. That could be extended to repossessions done by remote control. If it was possible to control a family member's use of a car, would some kind of safeguard be needed so one person couldn't block another family member from legitimate use of the car? Would it be possible to stop cars from being used for illicit purposes before they were used? The cars would probably be able to recognize legitimate users, making it far easier to take someone's license away. Or would the need for that punishment disappear?


----------



## thaddeus6th (Aug 3, 2019)

Toby, indeed.

The Internet of Things (online kettles etc.) has been hijacked to provide computing power for DDoS attacks and hackery. The NHS got hit by an attack a couple of years ago. Just because autonomous cars are possible doesn't make them a good idea.


----------



## Foxbat (Aug 3, 2019)

Online kettles. Why? Just why? 

I can just imagine the text message it might send to its owner: I need filling up but the tap is busy ransoming the NHS again. Tap says no! Repeat! TAP SAYS NO!


----------



## Toby Frost (Aug 3, 2019)

thaddeus6th said:


> The Internet of Things (online kettles etc.) has been hijacked to provide computing power for DDoS attacks and hackery. The NHS got hit by an attack a couple of years ago. Just because autonomous cars are possible doesn't make them a good idea.



Quite. In Richard Morgan's _Black Man/Thirteen_, robots aren't used because they are hacked by terrorists and sent on the rampage. This is clearly done to set up the need for semi-caveman super-soldiers, but it doesn't seem too unlikely to me. The current obsession with linking everything to everything else seems to make all the systems very vulnerable to attack. I could imagine a society in which, having suffered such attacks, people just don't risk it happening again, and have either no internet or multiple unconnected internets.


----------



## thaddeus6th (Aug 4, 2019)

It's so someone heading home can have the kettle boiling as they enter.

To my mind, it's a tiny bit of convenience at the cost of needless expense and risk. I have more sympathy with the argument for online doorbells/cameras (and possibly thermostats). But an online fridge or kettle just seems daft to me.

And security does need beefing up.


----------



## Vince W (Aug 30, 2019)

My sister-in-law and her husband just purchased a Tesla and made a video of them letting the car drive. Apparently the driver has to touch the steering wheel every once in a while. I've already found a hack to get around that.


----------



## WaylanderToo (Aug 30, 2019)

Robert Zwilling said:


> It is one thing to have a power failure for objects that are stationary, like buildings, but what happens to large masses of moving traffic when the internet burps, suffers from latency (a term that makes self-driving car manufacturers shudder), or fails outright? What does that do to all those vehicles relying on electronic contact instead of good old-fashioned line of sight with a brain that can adapt to whatever it is seeing at a moment's notice? That doesn't even take into account hacking. How does such a system react? Does it slam on the brakes for everything moving at that moment? Slowly slow everything down as it shifts everything moving over to the sensors riding on every car, until everything comes to a complete stop? Or do we let the manufacturers assume that is never going to happen?



Well, we do have autopilots, and jet fighters that NEED a (primitive) type of AI to ensure that they can even fly without crashing - not too many people have issues with that.




Vince W said:


> My sister-in-law and her husband just purchased a Tesla and made a video of them letting the car drive. Apparently the driver has to touch the steering wheel every once in a while. I've already found a hack to get around that.



you are Homer Simpson AICMFP!




The problem is that cars would need to develop a sort of 'fuzzy logic' to actually operate properly in the real world - something humans can, and do, every day.

It was said driving a car is easy - to a point, it may be. Driving a car *well*, on the other hand, is bloody difficult, which is why driving standards are rank.


----------



## Ursa major (Aug 30, 2019)

WaylanderToo said:


> Well, we do have autopilots, and jet fighters that NEED a (primitive) type of AI to ensure that they can even fly without crashing - not too many people have issues with that.


Aircraft (of which there are only a fraction of the number of the hundreds of millions of cars, vans and lorries) are usually** kept well separated in all three dimensions... and there are precious few pedestrians and cyclists sharing the airspace with them.


** - To the extent that planes flying close*** to each other are the subject of air displays, not day-to-day travel.

*** - I was in Rome in December 1999 and, during the morning rush hour, I realised that, because of the volume of traffic, Italian drivers were driving along a three-lane road (one side of a divided highway) _four_ abreast so as to increase the number of cars that could fit on that road. Now _that's_ close.


----------



## Vince W (Aug 30, 2019)

Humans _are_ the problem with driving cars. Too much ego and impatience. When all cars are autonomous, accidents will probably be reduced to 0 ± 0.0001.


----------



## Robert Zwilling (Aug 31, 2019)

The term autopilot means something different for aircraft than it would for cars. The pilots maneuver the plane on the ground and handle the takeoff. Landings are also handled by the pilots. The autopilot feature runs when there is a lot of space between planes and they are traveling between destinations.


----------



## CupofJoe (Aug 31, 2019)

Robert Zwilling said:


> The term autopilot means something different for aircraft than it would for cars. The pilots maneuver the plane on the ground and handle the takeoff. Landings are also handled by the pilots. The autopilot feature runs when there is a lot of space between planes and they are traveling between destinations.


And unless it has changed recently, there is no AI in a plane's autopilot. It follows very simple rules on what to do, hence the Boeing 737 Max issues. It follows those rules regardless.


----------



## Ursa major (Aug 31, 2019)

Robert Zwilling said:


> The term autopilot means something different for aircraft than it would for cars. The pilots maneuver the plane on the ground and handle the takeoff. Landings are also handled by the pilots. The autopilot feature runs when there is a lot of space between planes and they are traveling between destinations.


So then mention of planes' "auto pilot feature" is hardly relevant to a thread on autonomous cars, is it?


----------



## WaylanderToo (Aug 31, 2019)

Ursa major said:


> *** - I was in Rome in December 1999 and, during the morning rush hour, I realised that, because of the volume of traffic, Italian drivers were driving along a three-lane road (one side of a divided highway) _four_ abreast so as to increase the number of cars that could fit on that road. Now _that's_ close.



TBH I think that the Italians are a breed apart when it comes to driving


----------



## Ursa major (Aug 31, 2019)

WaylanderToo said:


> TBH I think that the Italians are a breed apart when it comes to driving


In my experience, and speaking from a pedestrian's point of view (I've driven in neither country), Belgium (well Brussels) was a _far_ more dangerous place to be on the road** than Italy.


** - Even (or particularly) when using pedestrian crossings....


----------



## Brian G Turner (Aug 31, 2019)

After decades of development, PCs still crash - yet they want us all to embrace new software-driven driverless cars?!


----------



## Foxbat (Aug 31, 2019)

Brian G Turner said:


> After decades of development, PCs still crash - yet they want us all to embrace new software-driven driverless cars?!


Ha! Good point.  

I'll counter with: after thousands of years of evolution, most people are still idiots when they get behind the wheel


----------



## thaddeus6th (Aug 31, 2019)

Brian, indeed.

But even if they work flawlessly almost all the time, they still need a human behind the wheel, constantly alert and capable of taking control.

That's a psychological nightmare. Humans aren't designed to do nothing other than pay attention to something that is 99.999% safe but has a 1:100,000 chance per journey of killing you. You still need a qualified driver, but instead of letting them actually drive you're imposing a bizarre form of mental torture on them. Be bored. Be bored. Be bored. Be bored. Be bo- 0.5s to grab the wheel or you're dead!

It's nuts.
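
The 1-in-100,000 figure above is hypothetical, but the compounding it implies is easy to work out: with a per-journey failure probability p, the chance of at least one failure over n independent journeys is 1 - (1 - p)^n. A quick check using the post's illustrative number:

```python
def cumulative_risk(p_per_journey, n_journeys):
    """Probability of at least one failure over n independent journeys."""
    return 1.0 - (1.0 - p_per_journey) ** n_journeys

# Two journeys a day for ten years at the post's hypothetical 1-in-100,000:
risk = cumulative_risk(1e-5, 2 * 365 * 10)   # roughly a 7% lifetime-of-car risk
```

So even a per-journey risk that feels negligible compounds into something a buyer might notice over a decade, which is exactly why the "bored human as backup" design is being questioned here.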

The only way I can see autonomous cars working is in a separate transport network, perhaps carrying goods rather than people.


----------



## Robert Zwilling (Aug 31, 2019)

Brian G Turner said:


> After decades of development, PCs still crash - yet they want us all to embrace new software-driven driverless cars?!


That doesn't mean anything to people trying to sell a product. Computers control important functions on naval ships that are supposed to operate during violent engagements.


----------



## WaylanderToo (Aug 31, 2019)

Ursa major said:


> In my experience, and speaking from a pedestrian's point of view (I've driven in neither country), Belgium (well Brussels) was a _far_ more dangerous place to be on the road** than Italy.
> 
> 
> ** - Even (or particularly) when using pedestrian crossings....




ok - I'll grant you the Belgians. Italians are just crazy, the Belgians seem willfully bad!


----------



## Vince W (Aug 31, 2019)

WaylanderToo said:


> ok - I'll grant you the Belgians. Italians are just crazy, the Belgians seem willfully bad!


Don't discount the French. If you've ever tried to navigate l'Étoile you'll know what I mean.


----------



## Vertigo (Sep 1, 2019)

Robert Zwilling said:


> That doesn't mean anything to people trying to sell a product. Computers control important functions on naval ships that are supposed to operate during violent engagements.


And the software that goes into cars, ships, aeroplanes and even things like tablets and phones (that are generally never turned off) is subject to much more rigorous testing than the software that goes into desktop computers like PCs. Anyone who uses a PC for real-time, life-critical systems deserves whatever they get when it all goes wrong. You don't have triple redundancy on your desktop, but I assure you they do in aeroplanes, and I'm pretty sure they do in ships as well. I don't know about cars, but I would expect something similar there too. The systems they have already do far, _far_ more miles between accidents than humans manage to achieve.
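
The triple redundancy mentioned here is usually implemented as triple modular redundancy (TMR): three independent channels compute the same value and a voter takes the majority, so a single faulty channel is outvoted. A toy sketch (the readings and tolerance are made up; real voters are far more involved):

```python
def tmr_vote(readings, tolerance=0.0):
    """Majority-vote three redundant readings; a single outlier is outvoted.
    Two readings 'agree' if they differ by no more than `tolerance`."""
    a, b, c = readings
    if abs(a - b) <= tolerance:
        return (a + b) / 2          # c may be faulty; average the agreeing pair
    if abs(a - c) <= tolerance:
        return (a + c) / 2
    if abs(b - c) <= tolerance:
        return (b + c) / 2
    raise RuntimeError("no two channels agree: fail safe")

# One faulty channel out of three (hypothetical sensor values):
value = tmr_vote([1000.2, 1000.1, 250.0], tolerance=0.5)
```

The design choice worth noting is the last line: when no majority exists, the system refuses to guess and falls back to a safe state rather than outputting a possibly wrong value.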

I for one can't wait for it to be the norm. The roads will be safer, and private car ownership will become redundant and probably pretty much vanish, which will make it way better for the environment. This is absolutely and inevitably going to happen, and I suspect it will be a lot sooner than most of its detractors seem to expect.


----------



## Venusian Broon (Sep 1, 2019)

Vertigo said:


> And the software that goes into cars, ships, aeroplanes and even things like tablets and phones (that are generally never turned off) is subject to much more rigorous testing than the software that goes into desk computers like PCs.



Well, there are always terrible exceptions - Boeing, for example. So who's to say something similar couldn't happen with automated cars, with software problems leading to crashes? Not saying that's a reason for fearing automatic cars or not having them, but I'm sure it's going to happen. (I for one will love letting the car drive, so that I can just sit back and read a book.)

However, you are right regarding those software systems, and there are other good reasons why. Those systems are generally quite closed and are designed for one, albeit pretty complex, purpose. PCs are much more multi-purpose and generally full of software (and sometimes hardware, especially if you tinker!) that was never designed to run together - and we tend to download and install all sorts of things without thinking too much about it. Most of the time these programs don't impact each other, but it does lead to occasional problems.


----------



## Ursa major (Sep 1, 2019)

I think we are kidding ourselves if we believe that the environment an autonomous car will encounter is anything like as simple as the environments that other software encounters at the moment. That other software has to deal with either very constrained environments, purely digital environments, or environments with few or no independent actors.

An autonomous car will have to work in a very complex (i.e. lots of autonomous actors) analog (a multitude of, sometimes chaotic, behaviours) environment. Even other autonomous cars have analog characteristics, as they are not purely software (the failure modes of their analog components, e.g. tyres, and their behaviours after a failure, are not easy to predict). And that is without considering the non-software-controlled actors (such as pedestrians, cyclists, even crisp packets** blown by the wind) that they will meet.

The "always on" mobile phone, by contrast, works with well-defined standards for its interfaces and the behaviours it's expected to encounter. And while the totality of software on a phone does have to deal with the user and many disparate tasks, each of those tasks is limited (limited, in many cases, by the design of the software***) and not (within the software) fully integrated with all of the others.

Aircraft fly in what are sparsely occupied and often almost-completely controlled environments (where, unfortunately, they sometimes do meet analog actors, such as flocks of birds).

As for ships, the so-called busiest shipping lanes (such as the Strait of Dover) have nothing on the roads around me**** (and not only during the rush hour). For example, pedestrians' behaviour is very difficult to predict (and that's without considering those whose attention is focused solely on their mobile phone's screen).


Now it is not impossible for autonomous cars _eventually_ to work properly in the environment for which they're designed, but we are kidding ourselves if we believe that this is going to happen safely soon. And there's no doubt that, in time, autonomous cars will be far less dangerous to their passengers, and to other vehicles and humans in their vicinity, than we drivers are when we're behind the wheel. But the last thing we want to do is take the word of over-enthusiastic engineers (and those financing them and hoping for a return on their investment) that we should do anything as foolish as fast-tracking their mass introduction (as some are perhaps far too keen on doing).


** - Some radar-based safety systems have been known to cause a car to brake when a crisp packet has been blown in front of the sensor (though one would hope that either: a) future software will be able to cope with those flying packets; b) those packets are changed to something less environmentally unfriendly).

*** - If the software is not designed to recognise something, that something will not exist, as far as the software is concerned, in terms of the software's ability to deal with it other than as an exception.

**** - I live in a suburb of a small conurbation, and thus nowhere near the busiest place in the world.


----------



## Vertigo (Sep 1, 2019)

I agree with @Venusian Broon: accidents will happen in the future. Of course they will, but it's not like car accidents aren't happening now already.

Autonomous* cars are driving successfully on our roads _now_. As far as I'm aware, the only fatality from an *autonomous* car was the Uber incident in March '18, and that was using, I believe, Uber's own designed and *retro-fitted* system. Personally, if I'm riding in an autonomous car, I would only do so in one that had been especially designed as such. My conclusion is that it is happening *safely* and now. There's still some way to go before the 'driver' can sit back and read a book, but with all the major manufacturers in direct competition with each other, I expect the advance to be rapid. Bear in mind that, again I believe, the first automated cars only started being legally used on the roads about four years ago. They are developing very quickly.

Oh and regarding aeroplane autopilots, the latest ones will land the plane as well as the 'simple' flying between takeoff and landing.


* Autonomous cars still require monitoring, which I agree is not satisfactory, but they are moving beyond that fairly rapidly. Autonomous cars are defined as occasionally requiring the driver to take control, as opposed to automated cars, which "expects a driver to be fully aware at any time of the driving and traffic situation and be able to take over any moment."


----------



## Vince W (Sep 1, 2019)

Vertigo said:


> cars which "expects a driver to be fully aware at any time of the driving and traffic situation and be able to take over any moment."


The way it is now, there are far too many drivers unaware of their surroundings without the benefit of an autonomous car.


----------



## Vertigo (Sep 1, 2019)

Vince W said:


> The way it is now, there are far too many drivers unaware of their surroundings without the benefit of an autonomous car.


I agree, which is why the technology will only really be mature when that is no longer a requirement. But I think that is just around the corner. A fully autonomous car will not get bored!


----------



## Ursa major (Sep 1, 2019)

Vertigo said:


> My conclusion is that it is happening *safely *and now.


Something that is true for _almost all_ human drivers _almost all_ the time now.

But I think we'd all agree that we don't think human drivers are as safe as they could or should be.


----------



## Robert Zwilling (Sep 23, 2019)

Here is an unlikely monkey wrench in the progress of the all-electric autopiloted car. The number of sensors and devices on autopilot cars is increasing at a fast clip. The onboard computer, hardware interfaces and all the automatic controls that run the vehicle can now use up to 3 kilowatts of power. This puts a drain on the battery, whose first job is making the wheels turn. Apparently it isn't practical for a small vehicle to carry another battery on board: the battery has already been designed for the best weight, size and power delivery for running the motor. A gasoline-powered electric generator would fit the bill quite nicely, as in a hybrid vehicle.
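
The 3 kW figure, if accurate, translates directly into a range penalty. Assuming (purely for illustration) an EV that uses about 15 kWh per 100 km and averages 50 km/h, a constant 3 kW auxiliary load adds 3/50 = 0.06 kWh per km, cutting range by just under 30%:

```python
def range_penalty(base_kwh_per_km, aux_kw, avg_speed_kmh):
    """Fraction of range lost to a constant auxiliary electrical load."""
    aux_kwh_per_km = aux_kw / avg_speed_kmh        # energy the sensors draw per km
    total = base_kwh_per_km + aux_kwh_per_km
    return 1.0 - base_kwh_per_km / total

# Hypothetical EV: 15 kWh/100 km (0.15 kWh/km), averaging 50 km/h,
# carrying a 3 kW sensor/compute load:
loss = range_penalty(0.15, 3.0, 50.0)   # ~0.286, i.e. roughly 29% less range
```

Note the penalty is worse at low speeds (the load runs for longer per kilometre), which is awkward for exactly the stop-and-go urban driving autonomous cars are aimed at.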


----------

