
Thread: Tesla "Autopilot" isn't.

  1. #1
    Join Date
    Dec 2011
    Location
    Very near, yet so far away
    Posts
    333

    Tesla "Autopilot" isn't.

    The crash was on March 23, 2018. Walter Huang was driving to work on US-101, like he did every day. He had Autopilot on in his Tesla Model X. As a left exit split off from his lane, the car pulled him into the “gore area” and he crashed into the barrier

    https://t.co/CUPXcrJa7U
    Also,

    Crucially, the data shows Huang didn’t take any corrective actions. The investigation shows he may have been playing a mobile game at the time.

    He had also complained to family and a friend that Autopilot pulled him left in the same spot before

    https://t.co/CUPXcrJa7U
    Firstly, he was playing on his phone!

    Secondly, the Tesla had tried to do the same thing before.

    In my opinion, this was self-inflicted. But how does Tesla get away with calling it Autopilot when it patently is no such thing? Autopilot implies that the car drives itself, when all the car does in fact is a bit of self-steering and collision avoidance at a basic level.

    There was another case recently where a Tesla ran down and killed a pedestrian who was jaywalking. Apparently the system didn't recognise that it was a person because they shouldn't have been there and it could not make a decision about what, if any, hazard they represented. (there is some appalling data that goes with that example - as the car approached the pedestrian it kept trying to classify them and every time it tried a new classification it basically started from scratch. It had no "awareness" that it was reclassifying the same object as it got closer and closer to it. Eventually of course it ran out of time.)

    This is not AI as it's been touted. This is beta software let loose in a killing machine.

    Are there any examples of AI that have proven reliability?

    At what stage does machine learning cross over to true artificial intelligence?
    Last edited by headrush; 2020-Feb-26 at 03:07 PM.

  2. #2
    Join Date
    Jul 2005
    Posts
    18,776
    Quote Originally Posted by headrush View Post
    Are there any examples of AI that have proven reliability?

    At what stage does machine learning cross over to true artificial intelligence?
    I've said before that we really need to stop calling what we've got now "Artificial Intelligence"--it has been one of the most dramatic and unfounded pieces of goalpost-moving in the history of technology, and as far as I can see was a response to the way funding started to dry up for research into AI back in the '80s and '90s.
    "Hey, it turns out that getting a machine to demonstrate aspects of General Intelligence is really, really difficult to achieve. Let's just call what we've got now 'Artificial Intelligence', and then we can say that we're working on development, rather than repeatedly failing to achieve our goal." It's as if the Wright brothers announced that they were in the early stages of interplanetary travel.

    Grant Hutchison

  3. #3
    Join Date
    Mar 2004
    Posts
    18,680
    Quote Originally Posted by headrush View Post
    In my opinion, this was self-inflicted. But how does Tesla get away with calling it Autopilot when it patently is no such thing? Autopilot implies that the car drives itself, when all the car does in fact is a bit of self-steering and collision avoidance at a basic level.
    It does? Does “autocruise” or “cruise control” imply a car drives itself? I don’t own a Tesla, but from what I have read, they are very clear that you should always watch the road and be ready to steer or brake. And if you take your hands off the steering wheel too long, it will warn you then start slowing down.

    This is not AI as it's been touted. This is beta software let loose in a killing machine.
    I disagree. This is people not following simple instructions and getting themselves or others killed.

    At what stage does machine learning cross over to true artificial intelligence?
    It depends on your definition of AI. I consider the optical recognition and driving software to be something that incorporates a type of AI. Of course it isn’t a general purpose AI.

    "The problem with quotes on the Internet is that it is hard to verify their authenticity." — Abraham Lincoln

    I say there is an invisible elf in my backyard. How do you prove that I am wrong?

    The Leif Ericson Cruiser

  4. #4
    Join Date
    Mar 2004
    Posts
    18,680
    Quote Originally Posted by grant hutchison View Post
    I've said before that we really need to stop calling what we've got now "Artificial Intelligence"--it has been one of the most dramatic and unfounded pieces of goalpost-moving in the history of technology, and as far as I can see was a response to the way funding started to dry up for research into AI back in the '80s and '90s.
    "Hey, it turns out that getting a machine to demonstrate aspects of General Intelligence is really, really difficult to achieve. Let's just call what we've got now 'Artificial Intelligence', and then we can say that we're working on development, rather than repeatedly failing to achieve our goal." It's as if the Wright brothers announced that they were in the early stages of interplanetary travel.
    We have a variety of software that deals with fuzzy real world data and performs a variety of useful real world functions that are the sort of things we used to dream about when talking about AI. I see no reason not to refer to it as AI, as long as we understand it is domain limited, not general AI. I don’t recall when or if there was ever a time when only general AI was considered. I do recall that in the 70s to 80s it started to be clear to many that just throwing more computer hardware at the problems wasn’t going to be sufficient and that it just wasn’t easy to duplicate what brains do.

    It’s actually probably a good thing general purpose AI is hard to achieve. I have thought of a number of scenarios a lot more sophisticated than Terminator movies but still not a pleasant introduction to general purpose AI.

    "The problem with quotes on the Internet is that it is hard to verify their authenticity." — Abraham Lincoln

    I say there is an invisible elf in my backyard. How do you prove that I am wrong?

    The Leif Ericson Cruiser

  5. #5
    Join Date
    Dec 2011
    Location
    Very near, yet so far away
    Posts
    333
    Quote Originally Posted by Van Rijn View Post
    It does? Does “autocruise” or “cruise control” imply a car drives itself? I don’t own a Tesla, but from what I have read, they are very clear that you should always watch the road and be ready to steer or brake. And if you take your hands off the steering wheel too long, it will warn you then start slowing down.

    I disagree. This is people not following simple instructions and getting themselves or others killed.
    Cruise control does not imply anything other than a set speed. At least to anyone who has ever used it. Autopilot on the other hand is the name of a system used in aeroplanes that can navigate and even land an aircraft. The use of that term is almost intentionally misleading.

    I agree that people are not following the instructions given, but why is that a surprise? As the adage says, when all else fails, read the instructions. (If you're still alive in this case)

    In the incident that provoked this thread, the car did not slow down, it actually accelerated into the barrier.

    At the very least, Tesla should have a big fat asterisk next to the word Autopilot, with a reference underneath explaining that that's what they would like it to be, but it's not actually true. Use it like cruise control, not like a passenger in seat 27A.

    Would it be acceptable to promote a pharmaceutical product using the claim that it cures cancer? Even if the fine print disavowed the claim, it would be unacceptable because some people don't read the fine print and may stop their other medication.

    I agree with Grant that AI has been overblown; Microsoft are using the term in TV adverts at present, when what they have is some software they've trained to recognise aspects of an image. It is in no way "intelligent", it just looks that way to the uninitiated.

  6. #6
    Join Date
    Dec 2011
    Location
    Very near, yet so far away
    Posts
    333
    As per my second example in the OP, the software could not classify an object, so it could not determine whether it was a hazard. Every time it failed to classify, it wiped its memory of it and then went "ooh, there is an object" as if it had never seen it before. Because of that, the system never got to the stage of calculating whether there was a closing speed etc. That shows no intelligence. A true form of intelligence would treat the object as a hazard until it proved otherwise. That it didn't do that demonstrates to me the ugly human error of the programmer, which further demonstrates the lack of intelligence of the system.
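    To make that failure mode concrete, here is a deliberately toy Python sketch. All the labels, timings, and the braking threshold are invented for illustration; this is not anyone's real perception code. It contrasts judging each frame in isolation with keeping a persistent track that is treated as a hazard until proven otherwise:

```python
# Toy sketch only: invented labels and timings, not any vendor's real
# perception code. Each tuple is (classifier output, seconds to impact).
frames = [("vehicle?", 6.0), ("unknown", 4.5), ("bicycle?", 3.0),
          ("unknown", 1.5), ("pedestrian", 0.5)]

BRAKING_TIME = 2.0  # seconds needed to stop (assumed)

def naive_decision(frames):
    """Judge every frame in isolation: brake only once a detection is
    confidently classified, as if the object were brand new each time."""
    for label, t_left in frames:
        if label == "pedestrian":
            return ("brake", t_left)
    return ("no action", 0.0)

def tracked_decision(frames):
    """Associate detections into one persistent track, and treat an
    unidentified object on the path as a hazard from first sight."""
    _, t_left = frames[0]
    return ("brake", t_left)  # hazard assumed until proven harmless

print(naive_decision(frames))    # ('brake', 0.5) -- inside stopping time
print(tracked_decision(frames))  # ('brake', 6.0) -- time to stop
```

    In the toy version, the tracked pipeline brakes with six seconds to spare, while the frame-by-frame version only commits once a confident label finally arrives, well inside the stopping distance.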

    I do have some programming experience BTW.

  7. #7
    Join Date
    Sep 2006
    Posts
    524
    Tesla is very clear that their Autopilot does not make their cars autonomous. They are very clear that it is required that the driver be paying attention to driving and be ready to take over from the Autopilot system at any time. Yes, Tesla's intent is that some day in the future they will grow their Autopilot system to be capable of full autonomy, and to that end they collect data from all of the vehicles they sell.

    From Tesla's Website

    Autopilot and Full Self-Driving Capability are intended for use with a fully attentive driver, who has their hands on the wheel and is prepared to take over at any moment. While these features are designed to become more capable over time, the currently enabled features do not make the vehicle autonomous.
    But they've been clear about all of this. At what point do people become responsible for misusing the car? What more can the manufacturer do beyond making the information available to the customer in several hard to miss ways? I've not looked into this incident beyond what the OP offered here, but based on this limited information Huang was at fault for the accident, not Tesla.

    From Tesla's Website
    Before enabling Autopilot, the driver first needs to agree to “keep your hands on the steering wheel at all times” and to always “maintain control and responsibility for your vehicle.” Subsequently, every time the driver engages Autopilot, they are shown a visual reminder to “keep your hands on the wheel."
    Arguing that the definitions of the words that make up the labels Tesla coined for their driver-assistance systems could lead people to think that they make the car fully autonomous is not valid when Tesla has been quite clear that they don't, and reminds the driver of that every time they use the systems. It may seem like this isn't clear because of confusion on the issue throughout the internet-verse, but really it's quite clear to people who buy Teslas. Or at least it should be.

  8. #8
    Join Date
    Dec 2011
    Location
    Very near, yet so far away
    Posts
    333
    And yet somebody decided it was OK to play a game on their phone instead of following Tesla's instructions. Would they be doing that, totally hands-free, if there were no misleading labels? It's all very well blaming the idiot drivers, but surely a competent designer would avoid using terms that could be misunderstood, whether or not there is a disclaimer. Another aphorism: you can try to make things idiot-proof, but they'll just invent a better idiot.

    I'm fine with outlandish advertising claims, as long as they are obviously outlandish. "Red Bull gives you wings" with cartoon characters flying around. "Autopilot" on a car that under controlled conditions can actually deliver you safely from place to place leads people to believe the hype. That's not unavoidable. People will be inclined to test it, and if they get away with it they'll carry on.

    I shouldn't imagine the case in the OP is the first time that guy risked his life on that system. My argument is that he probably would never have risked it like that without the untrue claim that the car had Autopilot. I certainly wouldn't risk hands free and attention free driving in a car from 1998. So what's changed?

    Expectations driven by false terminology.

  9. #9
    Join Date
    Dec 2011
    Location
    Very near, yet so far away
    Posts
    333
    I guess we're living in a postmodern world where words have no literal meaning. But I'm pretty sure that you couldn't have a brand of bleach called Gatorade. You can imagine the lawsuit. Technical terms should not be used as advertising fluff unless they genuinely work.

  10. #10
    Join Date
    Sep 2006
    Posts
    524
    Hyperbole, inaccurate comparisons and misrepresentation are not persuasive arguments.

  11. #11
    Join Date
    Mar 2004
    Posts
    18,680
    Quote Originally Posted by headrush View Post
    And yet somebody decided it was OK to play a game on their phone instead of following Tesla's instructions. Would they be doing that, totally hands-free, if there were no misleading labels?
    I’m not convinced there are any misleading labels involved, but yes, absolutely, some would definitely go against Tesla’s instructions. There is a gizmo sold to defeat the Tesla safety feature that requires people to keep their hands on the wheel. Though to be fair, I read the claim that part of the reason for that is that Tesla has tightened up on the timing for “steering wheel jiggle” to the point of making it annoying for competent drivers.

    But heck, my mother once told me about a time when she and my older sister were driving cross country in an RV and she caught my sister with her feet well away from the gas and brake while it was on autocruise. She even had the chair partly rotated. They were on a freeway with no visible traffic, but still, it was asking for trouble. (My older sister has a history of pulling dangerous stunts. Some people are just like that.)

    It's all very well blaming the idiot drivers, but surely a competent designer would avoid using terms that could be misunderstood, whether or not there is a disclaimer. Another aphorism: you can try to make things idiot-proof, but they'll just invent a better idiot.

    I'm fine with outlandish advertising claims, as long as they are obviously outlandish. "Red Bull gives you wings" with cartoon characters flying around. "Autopilot" on a car that under controlled conditions can actually deliver you safely from place to place leads people to believe the hype. That's not unavoidable. People will be inclined to test it, and if they get away with it they'll carry on.
    I think it is established that Tesla has been very clear that the driver is expected to be ready and able to drive the car. The real issue, as I see it, is if there are enough idiots out there for them to be forced to further nerf or even remove the feature for everyone else.

    I shouldn't imagine the case in the OP is the first time that guy risked his life on that system. My argument is that he probably would never have risked it like that without the untrue claim that the car had Autopilot.
    And I don’t agree with your argument. I don’t even own one and I have still read and heard about how they make it apparent the driver is still expected to be able to drive.
    Last edited by Van Rijn; 2020-Feb-26 at 07:53 PM.

    "The problem with quotes on the Internet is that it is hard to verify their authenticity." — Abraham Lincoln

    I say there is an invisible elf in my backyard. How do you prove that I am wrong?

    The Leif Ericson Cruiser

  12. #12
    Join Date
    Dec 2011
    Location
    Very near, yet so far away
    Posts
    333
    How many people must die before Tesla changes the terminology?
    At present, they are making bank on what is basically a lie. The public is testing their software for them, with often fatal consequences. How can a corporation make claims that they then disavow with lethal results?
    This argument demonstrates a disconnect which is purely political so I can't go there.

    Regardless of the politics, there is no artificial intelligence demonstrated here. The concept as presented is a scam. The words are false, the technology does not exist. It's a severe case of "fake it 'till you make it" but played with human chips.

  13. #13
    Join Date
    Sep 2003
    Location
    The beautiful north coast (Ohio)
    Posts
    49,526
    Quote Originally Posted by headrush View Post
    And yet somebody decided it was OK to play a game on their phone instead of following Tesla's instructions. Would they be doing that, totally hands-free, if there were no misleading labels?
    Quote Originally Posted by Van Rijn View Post
    But heck, my mother once told me about a time when she and my older sister were driving cross country in an RV and she caught my sister with her feet well away from the gas and brake while it was on autocruise. She even had the chair partly rotated. They were on a freeway with no visible traffic, but still, it was asking for trouble. (My older sister has a history of pulling dangerous stunts. Some people are just like that.)
    I know of numerous stories similar to Van Rijn's, none involving Tesla. I had a boss who told stories of putting the car on cruise control and steering with his knees while he ate a sandwich. I've seen people shaving or putting on make-up while driving. I see people on cell phones all the time. Not to mention people who cruise through red lights or go 20 or 30 mph over the speed limit.

    Tesla may or may not want to rename this feature, but I think that is a marketing and legal decision they would need to make. Either way, it won't have any noticeable effect on driver behavior.
    At night the stars put on a show for free (Carole King)

    All moderation in purple - The rules

  14. #14
    Join Date
    Aug 2005
    Location
    NEOTP Atlanta, GA
    Posts
    3,018
    Huang had gone through the intersection numerous times since acquiring the car. The lawsuit argues that the car always drifted towards the guardrail in that same spot. So which is true?

    - He became so complacent that he felt able to play a game?

    - Or that he knew from previous experience that the car had problems with a particular location but chose to ignore it?

    As a side note the State of California is also being sued because the guardrail was apparently crushed from an accident some time previously and the state had not repaired it to be safe again.
    Last edited by schlaugh; 2020-Feb-27 at 02:38 AM. Reason: weird text in post title - I blame Tapatalk

  15. #15
    Join Date
    Jul 2012
    Posts
    318
    Quote Originally Posted by headrush View Post
    How many people must die before Tesla changes the terminology?...
    Hi headrush,

    I think a pertinent question is: How many lives (if any) has it saved?
    Death rate per unit distance should be illuminating.
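    The arithmetic itself is trivial; a quick sketch, with purely illustrative placeholder figures (not real statistics):

```python
# Back-of-the-envelope only: every figure below is an invented
# placeholder, not a real statistic.

def deaths_per_billion_miles(deaths, miles):
    return deaths / (miles / 1e9)

# Hypothetical: many deaths over a vast number of human-driven miles...
human_rate = deaths_per_billion_miles(deaths=36_000, miles=3.2e12)
# ...versus a few deaths over a much smaller number of assisted miles.
assisted_rate = deaths_per_billion_miles(deaths=5, miles=1.0e9)

print(human_rate, assisted_rate)  # 11.25 5.0
```

    Raw death counts mislead in both directions: a small absolute count can still be a high rate over few miles, and a large count can be a low rate over many.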

    Do you think death due to human error is better than death due to machine error?

    cheers,

  16. #16
    Join Date
    Dec 2011
    Location
    Very near, yet so far away
    Posts
    333
    I'm well aware of drivers who do stupid things when their attention should be on the road. Unfortunately that is part of the human condition. I guess my problem with this particular issue is that the technology is encouraging complacency. Even though Tesla warns against total reliance on the system, because people test it out hands-free - probably for short periods at first - and it appears to work fine, they then feel more confident and take greater risks. I would probably do this myself. It would take a great effort not to use a facility the vehicle had available, especially with the marketing given it.

    I realise this begins to sound like a slippery-slope argument, but a similar principle applies to lots of new tech. The Segway, for example: people were hesitant when first using it but gradually placed more faith in it and became confident. I'm suggesting that a similar process is underway here, but the technology doesn't really back up that confidence.

  17. #17
    Join Date
    Jun 2005
    Posts
    14,080
    Quote Originally Posted by headrush View Post
    Cruise control does not imply anything other than a set speed. At least to anyone who has ever used it. Autopilot on the other hand is the name of a system used in aeroplanes that can navigate and even land an aircraft. The use of that term is almost intentionally misleading.
    I think it's worth remembering, though, that although autopilots may be able to land aircraft, they are never used autonomously. The presence of a human able to take over the controls is always a requirement, and in fact there are situations when the autopilot automatically disengages if it is not able to maintain the programmed flight characteristics (like if it's been programmed to maintain an altitude and the engines fail).
    As above, so below

  18. #18
    Join Date
    Jun 2005
    Posts
    14,080
    Quote Originally Posted by headrush View Post
    Regardless of the politics, there is no artificial intelligence demonstrated here. The concept as presented is a scam. The words are false, the technology does not exist. It's a severe case of "fake it 'till you make it" but played with human chips.
    I think you have a point, but you are taking it a bit far with the hyperbole. I agree that "Autopilot" is actually a poorly chosen name for the system; something like "driver assistant" would have been better. I think they chose the "cooler" sounding name, and I agree that it could make some people more careless than they ought to be.

    But I would not say it is a scam. I would say it is an important system, with a hyped name, and with some technical flaws which absolutely need to be rectified. But the way I see it is that roads are deadly: more than 3,000 people around the world die every day in traffic accidents. Eventually, automated driving systems are going to drive that down, but they are not perfect and never will be perfect. It's only required that they be safer than humans.
    As above, so below


  20. #19
    Join Date
    Dec 2011
    Location
    Very near, yet so far away
    Posts
    333
    Quote Originally Posted by Jens View Post
    I think it's worth remembering, though, that although autopilots may be able to land aircraft, they are never used autonomously. The presence of a human able to take over the controls is always a requirement, and in fact there are situations when the autopilot automatically disengages if it is not able to maintain the programmed flight characteristics (like if it's been programmed to maintain an altitude and the engines fail).
    Sure, but I think the difference there is that pilots are highly trained, both in general and in the specific operation of modern technology in the cockpit. Drivers as far as I'm aware are not required to retrain on a more modern motor vehicle.

    Also it has occurred to me that my problem with the terminology may be a cultural issue. I'm aware that in the US the word "auto" can be more associated with motor cars in general. That's not the case here in the UK. Generally they're simply known as cars. This leads me to associate auto much more with automatic or autonomous. That is why I have a problem with the term Autopilot, it implies much more autonomy than is justified. To a US citizen, it probably just means something else associated with an automobile.

  21. #20
    Join Date
    Jun 2005
    Posts
    14,080
    Quote Originally Posted by headrush View Post
    I guess my problem with this particular issue is that the technology is encouraging complacency. Even though Tesla warns against total reliance on the system, because people test it out hands-free - probably for short periods at first - and it appears to work fine, they then feel more confident and take greater risks. I would probably do this myself. It would take a great effort not to use a facility the vehicle had available, especially with the marketing given it.
    I can understand, but again it has to be seen in context. I know that people have sometimes blamed aircraft autopilots for making pilots complacent and losing the ability to fly the plane without their support. Which might be true. But we have clear evidence that the installation of autopilots makes flying safer, even given the complacency problem. Now it may be that there are serious flaws with the specific application that Tesla is using, and that major redesigns need to be made. In the case of aircraft, we have the disastrous situation with the 737 Max, for example. But I am certain that this is part of overcoming glitches that will eventually lead to safer road travel.
    As above, so below

  22. #21
    Join Date
    Dec 2011
    Location
    Very near, yet so far away
    Posts
    333
    Quote Originally Posted by 7cscb View Post
    Hi headrush,

    I think a pertinent question is: How many lives (if any) has it saved?
    Death rate per unit distance should be illuminating.

    Do you think death due to human error is better than death due to machine error?

    cheers,
    I think it would be difficult to quantify the numbers. The technology is not yet widespread enough to get accurate figures. I don't think death by either cause is preferable. But death because of a machine error when the machine is supposed to be actively working against that outcome is probably worse.

  23. #22
    Join Date
    Jun 2005
    Posts
    14,080
    Quote Originally Posted by headrush View Post
    Also it has occurred to me that my problem with the terminology may be a cultural issue. I'm aware that in the US the word "auto" can be more associated with motor cars in general. That's not the case here in the UK. Generally they're simply known as cars. This leads me to associate auto much more with automatic or autonomous. That is why I have a problem with the term Autopilot, it implies much more autonomy than is justified. To a US citizen, it probably just means something else associated with an automobile.
    It might, but I sort of doubt it. To me "autopilot" conjures the image of an aircraft, and I suspect that this would be similar for other Americans. Interestingly, if you do want to make the point, a more compelling argument might be that Americans often hear the term "automatic rifle" applied to rifles that are not automatic.
    As above, so below

  24. #23
    Join Date
    Jun 2005
    Posts
    14,080
    Quote Originally Posted by headrush View Post
    But death because of a machine error when the machine is supposed to be actively working against that outcome is probably worse.
    Though I would assume that in the vast majority of human error-caused accidents the driver is also actively working against that outcome. The remainder would be suicides and things like that.
    As above, so below

  25. #24
    Join Date
    Mar 2004
    Posts
    18,680
    Quote Originally Posted by headrush View Post
    I'm well aware of drivers who do stupid things when their attention should be on the road. Unfortunately that is part of the human condition. I guess my problem with this particular issue is that the technology is encouraging complacency. Even though Tesla warns against total reliance on the system, because people test it out hands-free - probably for short periods at first - and it appears to work fine, they then feel more confident and take greater risks. I would probably do this myself.
    That strikes me as a much more valid concern than the Autopilot name. For me the question is, as Jens discusses, whether on balance fewer people are hurt or killed with it than without it. Hopefully most people will be diligent enough to use it properly and not ruin it for everyone else. I do wonder about this fellow - an engineer working for Apple really should have known better, especially after he had already seen there was a problem at that location.

    "The problem with quotes on the Internet is that it is hard to verify their authenticity." — Abraham Lincoln

    I say there is an invisible elf in my backyard. How do you prove that I am wrong?

    The Leif Ericson Cruiser

  26. #25
    Join Date
    Mar 2004
    Posts
    18,680
    One hardware aspect is that some other projects use lidar to assist in judging distance, and lidar has an accuracy advantage. Tesla isn’t doing that yet because it costs too much currently for a production vehicle, but it is almost certain that will change within a few years. That will help everyone.

    "The problem with quotes on the Internet is that it is hard to verify their authenticity." — Abraham Lincoln

    I say there is an invisible elf in my backyard. How do you prove that I am wrong?

    The Leif Ericson Cruiser

  27. #26
    Join Date
    Dec 2011
    Location
    Very near, yet so far away
    Posts
    333
    I suspect I will not be comfortable with the Autopilot term in a motor vehicle until it's a true hands-off system: a car where, if you choose the Autopilot option, the steering wheel folds away to prevent the driver interfering. An aircraft pilot has some serious training and can oversee an autopilot properly, but I don't think that's an option for road vehicles, as most drivers never take any training once they get their licence.

    I've been thinking about the possibility of introducing some kind of penalty system for AI in general. Some way to give it the potential to learn the cost of errors. I don't think death can be communicated to a computer, but certainly loss of function should be possible. Imagine if the failures of one system could be directly added to the knowledge of another, rather than just trying to design the errors out through programming.
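    That penalty idea maps loosely onto negative reward in reinforcement learning, and "adding one system's failures to the knowledge of another" onto sharing learned value estimates between systems. A deliberately toy Python sketch, with all payoffs invented for illustration:

```python
ACTIONS = ("ignore_unknown", "treat_as_hazard")

# Invented outcome log for "ignore an unclassified object": it usually
# works (+1) but one pass in twenty ends in a crash (-100). Always
# treating the object as a hazard costs a little (+0.5, a slower pass).
ignore_outcomes = [1.0] * 19 + [-100.0]

def mean(xs):
    return sum(xs) / len(xs)

# Learned value of each action once the crash penalty is "felt":
value = {
    "ignore_unknown": mean(ignore_outcomes),  # (19 - 100) / 20 = -4.05
    "treat_as_hazard": 0.5,
}

best = max(value, key=value.get)
print(best)  # treat_as_hazard: the penalty makes caution the policy

# "Sharing failures": a second system starts from these learned values
# instead of rediscovering the crash case out on the road itself.
agent_b = dict(value)
```

    The point of the toy is that a large enough penalty for errors makes the cautious policy win on expected value, and the second system inherits that lesson without paying the cost again.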

  28. #27
    Join Date
    Jun 2005
    Posts
    14,080
    Quote Originally Posted by headrush View Post
    I suspect I will not be comfortable with the Autopilot term in a motor vehicle until it's a true hands-off system: a car where, if you choose the Autopilot option, the steering wheel folds away to prevent the driver interfering. An aircraft pilot has some serious training and can oversee an autopilot properly, but I don't think that's an option for road vehicles, as most drivers never take any training once they get their license.
    I understand what you're saying, but I think there are limits to the comparison between aircraft and cars. They are really fundamentally different. Both are about moving a vehicle, but the tasks are so different that it seems difficult to compare them.
    As above, so below

  29. #28
    Join Date
    May 2003
    Posts
    6,121
    Quote Originally Posted by Van Rijn View Post
    That strikes me as a much more valid concern than the Autopilot name. For me the question is, like Jens discusses, if on balance fewer people are hurt or killed with it than without it. Hopefully most people will be diligent enough to use it properly and not ruin it for everyone else. I do wonder about this fellow - an engineer working for Apple really should have known better, especially after he had already seen there was a problem at that location.
    I know that most of the people I know who have links to the autonomous vehicle world (and I know a few) don't actually think that it's a good idea to have production versions of things like Tesla's Autopilot, for precisely this reason: it's too easy to become complacent and distracted, and if there is an emergency where the human driver has to take over, the driver could easily be too distracted and not prepared to do so in time. They feel that it makes more sense to aim for full autonomy, and only release something as a commercial product if it meets that standard (lower levels of autonomy for continued testing and development are fine, as are very limited driver assist features, such as cruise control or automated parking). Obviously, there are other developers who don't agree, but that's the feel I get for things from my contacts.
    Conserve energy. Commute with the Hamiltonian.

  30. #29
    Join Date
    Apr 2011
    Location
    Norfolk UK and some of me is in Northern France
    Posts
    8,922
    Agreed, the fall-off of experience is a major worry; viz. the drop-off of snow-driving skills in the south of the UK causes more accidents at a rare snow event than in the old days. The skills are lost. The idea that drivers rarely have to intervene is a bad one. On the other hand, full automation may be a long way off. I guess a middle option is for motorways to be automated, probably reducing lane-changing and cell-phone-distraction accidents, but leaving in-city and in-town driving fully manual, so that attentive skills are maintained.
    sicut vis videre esto
    When we realize that patterns don't exist in the universe, they are a template that we hold to the universe to make sense of it, it all makes a lot more sense.
    Originally Posted by Ken G

  31. #30
    Join Date
    Jun 2005
    Posts
    14,080
    To me, it seems ok to release something if, on the whole, it will reduce deaths. Car accident deaths per distance traveled have dropped significantly over the years despite the purported drop in driving skills, due to the incorporation of things like helmets, seatbelts, and airbags which might make people complacent.


    Sent from my iPhone using Tapatalk
    As above, so below
