
Self-driving cars?

pgru2

I just want to discuss self-driving cars. Maybe you have a Tesla or something else? Or do you live in Phoenix, Arizona, and ride in a "car without a driver", or a "robo-taxi", depending on what you call it?

As for my own experience, it isn't much. If it counts as a "self-driving car" (it was not on a public road), I rode the ULTra pods at Heathrow. The ride was short but amazing, especially because the "car" looked to me very much like the vehicles from the game "Syndicate"; my feeling was "Hey, I am in the future now." The other experience wasn't even a ride, just watching a Tesla "dancing" with its doors going up and down to music from the car; I think it was some kind of promotional event by the local Tesla salesman.

From a computer collector's perspective: yes, these cars probably have modern computers in them. But depending on how you define vintage, if we fast-forward 20-30 years, the computers from today's self-driving cars may end up in museums, where some visitors will probably remark that their smartwatch (or Musk's Neuralink, or whatever) has more compute power. It's like the "big iron" that once supported big companies: today you would probably struggle to run on it the same apps that merely count your calories on a smartwatch. And yes, I would like to have a self-driving car computer in my collection; I probably just need to wait until it becomes "obsolete" ("cheap enough") but not yet too "vintage" ("too pricey")...
 
The thing I like most about vintage computers is that they don't think they're smarter than me. Self-driving cars are developer hubris in automobile form, and dangerous enough to the public before you factor in the part where they're one clever exploit away from being a 5,000-lb. automated crowd-seeking death missile.
 
I'll happily admit to not knowing anything about robot cars. However, occasionally they'll make the news for trying to drive under a semi-trailer, or at full speed into a tree. This makes me realise that I'm not quite ready for that level of automation, not yet, and perhaps not ever.
 
I just want to discuss self-driving cars. Maybe you have a Tesla or something else?
Another one who thinks Teslas are self-driving. They are not!

Personally, I don't care about self-driving cars. What's the point? If I'm not driving myself, I can very well just take a bus, a train, or a cab. If I choose to go by car, I want to drive myself.
 
Another one who thinks Teslas are self-driving. They are not!

Personally, I don't care about self-driving cars. What's the point? If I'm not driving myself, I can very well just take a bus, a train, or a cab. If I choose to go by car, I want to drive myself.

Hmm, it looks like Tesla is now nearly a Level 4 autonomous car, so from that perspective it is a "self-driving car". Try to get by bus or train to every place of interest, especially a rural village in the USA or Europe; it is generally impossible. A cab? Maybe an option if you are rich. Driving myself? Fine, you have a driving licence, but thousands or millions of people don't. Some of them never even get the chance: they are elderly, too young, or disabled. And even the law is constructed to make life harder for the average citizen. For example, the average pass rate for the driving exam in Poland is probably less than one in three. Yes, that means that of three people who meet the physical requirements and complete the course, two will not pass...

https://hypebeast.com/2021/1/tesla-autopilot-full-self-driving-san-francisco-los-angeles-round-trip
 
Is artificial intelligence, which is a misnomer in itself, a reality at all? It would be required for a safe self-driving car.

If it were, then a self-driving car would be a better proposition than one driven by a real, well-trained human being who was 1) alert and paying attention and 2) not influenced by drugs or alcohol.

As far as I see it, I would not trust a computer (regardless of its programming) to be superior to the human mind in controlling a car or any other machine, like a plane, in the event of exceptional emergencies (as opposed to the mundane tasks of "event-free" driving or flying). Now, how was it that Sully landed that plane? Maybe in a few seconds he called on everything he knew, going right back to his childhood where he played with paper planes, and all the experience and learning in between. You cannot program a computer to do this as well.

"AI" and machine programming can look good, especially playing Chess, until the unexpected protocol happens that the programmer has not anticipated. True intelligence is dealing with the unexpected that you are not programmed for and synthesizing on the spot a suitable response.

When an AI can spontaneously (of its own volition) re-program itself to be a better AI in the middle of the night when not asked to, and robots can beat every human being at the Olympics in every sport, and a "robot bird" can outdo a common sea shag, which can fly in air, swim in water and walk on land, then I will be happy with the notion of a self-driving car. And oh, that's right, the AI controlling it would have to have at least one or two existential crises in its expected service lifetime and struggle with bad dreams some nights.

However, if all cars on the road were self-driving and could communicate with each other, I would concede that nearly all road accidents could be prevented. Then we could have a society where people, like cattle, with even less decision making, could be shepherded between home and work by government-funded, automated self-driving transport utilities. The only time you could get the pleasure of driving a car yourself would be on a private track with heavy indemnity insurance.

When I heard about the accidents that had occurred with Teslas in self-driving mode, it did not surprise me at all, not one iota. Take the example where one crashed into the back of a trailer: the explanation from Tesla was that the angle of the radar did not see the trailer and the camera did not work due to the high back-lighting conditions. Really? It was not the fault of the car's sensors; it was hopelessly inadequate software that functions nowhere near as well as a human mind (one that is paying attention). The fellow in the car was so confident it would be safe (marketing hype perhaps) that he was playing Harry Potter on his laptop computer.

https://www.theguardian.com/technolo...r-harry-potter

So I'm not falling for the marketing BS from Tesla or any other car maker that dares to even intimate that their self-driving (or autopilot) car can drive better or more safely than I can. It stuns me that people have fallen for the whole notion of the autopilot, including some of my colleagues who are well educated. Plus, I don't fancy being "de-skilled" by automatic processes. You will find that if you let your car reverse-park itself for you about 10 or 20 times, you will lose the ability to do it yourself. Perhaps some song lyrics are appropriate here, from the song by Zager and Evans, "In the Year 2525":

In the year 5555
Your arms are hanging limp at your sides
Your legs got nothing to do
Some machine is doing that for you

In addition to this, human responses are different from machine responses. For example, let's say that as a driver you see a lightweight object, like a cardboard box, blow across the road into your path. Likely you would not swerve or brake, and would maybe just run over it. But if it were a child, you would rather drive off the road into a concrete barrier to avoid them (notice: them, not "it"). But the self-driving car cannot distinguish the objects with certainty, so it won't swerve, only apply the brakes. The company line being "The car prioritizes the safety of the occupants". Sometimes, of course, that is the wrong response. A self-driving car killed a woman:

https://www.theguardian.com/technolo...-arizona-tempe

If that is not enough, once the battery in an EV catches fire, it is like lighting a piece of magnesium ribbon (remember that from chemistry class?). Burning metal is extremely difficult to extinguish, so if you prang your EV, better get out of it fast or be cremated alive.
 
I think most of the people here have seen software go through its entire life cycle. Even if the software gets to a point where it works very well, is solid, robust, and well tested, it will eventually reach a point where it starts to fall apart, bloat up, get filled with advertising and malware, and eventually wind up abandoned.

What makes anyone think that autonomous/self driving software will be any different?

I can easily imagine, years after self-driving cars have become "ho-hum", some pointy-haired boss making a decision like "users are complaining about our cars slowing down for squirrels. Make it, uh, not do that." Which works fine for a while, until it runs over a baby without even slowing down.

Then we get to look forward to self driving cars that drive us to a store other than the one we asked for because the other store has deep enough advertising pockets to make that happen. Sort of like http://www.charlespetzold.com/blog/2021/08/Screw-You-Microsoft-Edge.html

Eventually we will get the joy of having to do all kinds of wacky workarounds to keep from getting killed. Surreal crap like using a fishing pole to hold a picture of three circles in front of the car to make it stop. Fall back on manual driving? Oh, no, in their infinite wisdom car manufacturers have discontinued support for manual driving, relegating it to the dustbin of history right next to FTP and unencrypted HTTP.

"To err is human. To really F things up takes a computer." One day we will watch in horror as a bridge collapses and the ravine fills up to the top with self driving cars. And nothing will change.

In the end, we will have to go back to walking.
 
Is artificial intelligence, which is a misnomer in itself, a reality at all? It would be required for a safe self-driving car. [...]

An interesting point of view, but I will try to argue with it (no, I do not work for Tesla, and my income is too low for me ever to be their salesman).

First of all, Bezos, probably the richest man on Earth, as far as I know went into space "without a pilot". Everything was controlled by computer. The future is now? Yeah. To add spice, it was that company's first crewed space flight; all the previous attempts were without a person on board. And he didn't even send somebody else in his place to test it. On the other side we have Branson, the guy who also made a space flight (for me at least, not for the people who insist "it must be 100 km or it isn't") - they didn't exceed 100 km because, some years ago, the human pilot, a well-trained guy with experience and physical ability, made a human mistake...

Then there are the statistics: most drivers are not the Formula One driver Kubica; in fact, the whole field of "road safety" exists mainly because of human errors. Humans can be drunk, can be tired, can forget things. Most computers can work better and longer than people. If a computer statistically makes fewer errors than an average human driver, it is safer to travel with the "autopilot".

Speaking of "autopilot": I may be wrong, but in London, for example, computer-controlled trains have been operating since as early as 1987. The humans on board are not driving them. The number of accidents is lower than with trained humans who are paid and educated to operate human-driven trains.

This generally doesn't mean that we won't have problems with self-driving cars; it rather means we will have far fewer problems than with human drivers.

You cite The Guardian from 2016, but hey, it is 2021 now; five years means a lot of experience with self-driving cars on the roads. When the first cars appeared in the UK, people were so afraid of accidents (because a car had no horse, which is what they were used to) that they passed a law that no "car" could drive on the road without a human(!) with a red flag walking or running in front of the vehicle. Imagine if that law were still in force now, in the 21st century.
 
I think the first point is that anyone considering measuring the accuracy and safety of 'self driving' cars based on Tesla is barking up the wrong tree. However more advanced they are getting at making competent 'autopilot' systems work, the vast majority of their cars on the roads are not self-driving. When these vehicles have accidents, by and large it has been the result of idiots 'driving' them by turning control over to semi-autonomous systems that may have been sold as capable, but really are not - and where there are plenty of warnings to drivers to that effect too.

The second thing that strikes me is that while just about every 'self driving' accident, and particularly the notable ones, get reported in the media and examined in detail, virtually none of the accidents caused by humans and our mis-judgements are reported at all, so we don't get much of a balanced view of which is safer. However, what we do know reasonably clearly is that there are plenty of humans driving vehicles around who are impaired for one reason or another, or simply lacking much in competence, or distracted, or busy eating breakfast, or even reading the newspaper. The idea that autonomous systems aren't capable of driving to at least that standard seems silly.

Personally, I don't want anything to do with self driving cars because I don't want to give control of the vehicle I am in to anyone or anything but me - or at the very least, to someone I can communicate my expectations for safe driving to, should I feel compelled to comment at all - but I have seen sufficient examples of truly bad driving to believe that competent engineers can - theoretically - build transportation solutions that are safer than many humans are capable of piloting. But they haven't yet.

Real statistics on accidents per mile from self-driving trials versus human-driven accident rates would be interesting to see, but they would not have much to do with Tesla.
 
An interesting point of view, but I will try to argue with it (no, I do not work for Tesla, and my income is too low for me ever to be their salesman).

First of all, Bezos, probably the richest man on Earth, as far as I know went into space "without a pilot". Everything was controlled by computer. The future is now? Yeah. To add spice, it was that company's first crewed space flight; all the previous attempts were without a person on board. And he didn't even send somebody else in his place to test it. On the other side we have Branson, the guy who also made a space flight (for me at least, not for the people who insist "it must be 100 km or it isn't") - they didn't exceed 100 km because, some years ago, the human pilot, a well-trained guy with experience and physical ability, made a human mistake...

"Computers" (or, simply, machines) have been sending man in to space since Gagarin in Vostok. Other than having a hand on an abort button, nobody is "flying" anything going in to space, they're just along for the ride.

Once in space, they had more control. But computers still do the heavy lifting.

As they say in software, the first 90% takes 90% of the time, then the last 10% takes the other 90%.

That's where we are with autonomous driving today, but I think we're currently at a peak that will be difficult to break through to reach full autonomy in the wild.

Special cases, in "controlled" environments, are fine. But random, in the wild, no. I don't think we're there yet, nor will we be for some time.

I have a friend with a Tesla; he does not use the features. My single interest is one that will creep along in stop-and-go freeway traffic, but I think it would be hard to trust. And the real problem is that with the current systems, they want the person on "standby" in case the system signals a failure. And people may be fine with being on standby, but their response times aren't very good at 60-70 MPH.

The other day we were in stop and go traffic, in my wife's car. She made one of those hard lane changes to get out of a very slow lane (you know, not fast, but you have to turn the wheel quite a bit to make the change, like in a parking lot). She confused the car somehow as it felt it was about to hit something and the car stopped cold with very hard braking. It was disconcerting to say the least.

So, I don't think we're "10 years" away from full autonomy in the wild. I think we're farther than that, that the edge cases are still there and very sharp and very hard, and may well be insurmountable with our current tech and techniques.
 
Hmm, it looks like Tesla is now nearly a Level 4 autonomous car, so from that perspective it is a "self-driving car".

No. The Tesla isn't even remotely a Level 4 car, and according to Tesla's lawyers when talking to regulators (versus the ridiculous prattle that's constantly spewing out of Elon Musk) it will almost certainly *never* be more than a Level 2 with the current hardware.

Those YouTube idiots that make those videos fooling themselves and others that it's *ever* safe to let go of the steering wheel in a Tesla are a menace, as are the credulous websites that promote this tripe.
 
I'll believe that anyone has full self-driving done when I can order my car to drive, on a moonless night, out here on a rutted one-lane gravel or dirt road to an address that is a matter of conjecture, while dodging the tank-trap potholes and animals running across the road.
 
No. The Tesla isn't even remotely a Level 4 car, and according to Tesla's lawyers when talking to regulators (versus the ridiculous prattle that's constantly spewing out of Elon Musk) it will almost certainly *never* be more than a Level 2 with the current hardware.

Those YouTube idiots that make those videos fooling themselves and others that it's *ever* safe to let go of the steering wheel in a Tesla are a menace, as are the credulous websites that promote this tripe.

Hmm, Level 2 means the car is not self-driving beyond the highway. And yet, as the videos show, the cars can certainly drive safely in cities. What the lawyers say is probably, as usual, just talk to cover their backs, like the big company that tried to prohibit reselling used software through its license, which turned out to be a breach of European law.

The biggest automotive institute in Poland, in something like a government road strategy for autonomous vehicles, says that fully autonomous cars will be allowed by law in Poland in 2030 (not for test purposes, because for test purposes they are allowed now). One of the cited reasons is that self-driving cars have been tested around the world for about 10 years and it is "just working".

And it's not only Teslas on the road. In Phoenix, Arizona, fully autonomous robo-taxis operate in ordinary city traffic (there has been no "emergency" driver in the seat for probably a year or more now); they take orders, arrive by driving themselves, and payment is even handled by app. It is called a "test", but it looks like a full commercial service, in quite normal city conditions, with traffic lights, pedestrians, cars, bicycles, etc.

The Chinese also have a lot of robo-taxis operating in a very big city, some with an "emergency driver", some without, and buses too. During the pandemic, in March as far as I remember, they were the "workhorses" of a sealed-off part of the city, where human drivers were not allowed in order to stop the spread of the virus, and they transported food and medicines.
 
Hmm, Level 2 means the car is not self-driving beyond the highway. And yet, as the videos show, the cars can certainly drive safely in cities. What the lawyers say is probably, as usual, just talk to cover their backs, like the big company that tried to prohibit reselling used software through its license, which turned out to be a breach of European law.

Again, these idiots filming this **** are not experts in self-driving, they're people abusing some very brittle software that *sometimes* works but very often doesn't. (YouTube is full of people filming their Teslas making very stupid mistakes as well, but they don't get breathlessly covered like the "successes" do.) Even that one you're citing is a lie: it wasn't zero interventions, it was *at least* one (helping the car avoid debris in the road) and more practically at least two, because it "got weird" at another point but he let it alone until it "figured it out". (Which was a highly irresponsible thing to do on Market Street in San Francisco.) And if you click through to the idiot's next video he says this in the description:

This drive was even better than the last one, although there were still many mistakes and areas for improvement that didn't require a disengagement. Can you spot them?

This is not a competent "Level 4" vehicle cleanly navigating a real-world problem, this is some ***hole taking a dangerously incomplete toy on what is effectively a drunken joyride. Every time his car "got weird" and he had to let it figure it out, the humans around him were having to deal with his idiocy, so, no, you're wrong. Multiple mistakes on a roughly 500 mile trip is not what constitutes safe Level 4 autonomy.

(I also have to stress here: He excuses the fact he had to take control multiple times to get the Tesla off the freeway to a charging station. By definition a Level 4 car would find its own way to a charging station. With zero mistakes or interventions.)

To go back:

Speaking of "autopilot": I may be wrong, but in London, for example, computer-controlled trains have been operating since as early as 1987. The humans on board are not driving them. The number of accidents is lower than with trained humans who are paid and educated to operate human-driven trains.

That is an absolutely ridiculous comparison. A train is a vehicle which travels with only a single degree of freedom on a dedicated right of way which it is physically bound to by the rail/flanged wheel interface. Automating a train is, at least in principle, not a whole lot harder than automating an elevator. (FWIW, the first push-button elevator was demonstrated in 1894. And for that matter production-level rail system automation dates back to at least 1960.) Comparisons to spacecraft and airplane autopilot systems also fall seriously flat for a number of reasons as well, but the biggest one is that these systems typically operate in (comparatively) sparsely populated and well-controlled environments. Airplanes typically don't fly in irregular uncontrolled formations within a few feet of each other, nor are they restricted in terms of evasive maneuvers to strictly horizontal movements within a constrained lane that might be barely wider than the airplane itself. Even the most sophisticated airplane autopilot systems, IE, the ones that can land and take off completely hands off, still make a lot of assumptions about the runway being clear, that another airplane isn't going to suddenly swerve in front of them, etc.

(No modern collision avoidance system, even one in a quarter of a billion dollar airliner, would be able to handle problems on the sort of timescales that a self-driving car needs to, and they usually have the help of transponder tracking information from other aircraft. Also for an airplane it's generally safe to assume that any radar return within some reasonable radius *is* actually something to worry about, verses being a reflection off a street sign or a trash bag blowing across the street.)

Complete autonomy by a car on unmodified roads is a *completely* different scale of problem than a spacecraft or train. And again, Tesla isn't even remotely close here; Carnegie Mellon University was demonstrating the "NavLab" system back in 1997 that, to within an order of magnitude or so, could accomplish about the same level of freeway following that a Tesla can. IE, this. Getting a car to look for things that resemble travel lanes and follow them isn't "that hard" (the earliest 1980's-era prototypes of NavLab would occasionally try to climb trees because both roads and trees are sets of parallel lines slowly converging but, hey, that's what you need radar for), and Level 2 systems don't need much more than to graft that onto basic proximity sensor input. But as they say in that linked video, the computer vision processing necessary to handle real-world edge cases was far beyond 1997's technology, and the blunt fact is that it still is today, at least at the performance level you really need to replace humans. Teslas do stupid things like mistake the moon for yellow traffic lights or slam on the brakes because they get confused by something they saw on a billboard all the time. (Not to mention that whole slamming at high speed into other vehicles stopped in their lanes thing, but we'll set that aside for now.) Their customers are just so soaked in the Kool-Aid they don't like talking about that to outsiders.
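
To give a sense of why the lane-following part really is the "easy" bit, here is a minimal sketch of the classic Canny-edge-plus-Hough-transform trick for spotting lane markings in a single dashcam frame. To be clear, this is just my own illustration, not anything from NavLab or Tesla; it assumes OpenCV and NumPy are installed, and the file name dashcam_frame.jpg is purely hypothetical. Everything genuinely hard (other road users, intent, edge cases, planning) is absent.

# Minimal lane-marking sketch: Canny edge detection + probabilistic Hough
# transform. Finds "things that resemble travel lanes" in one frame and
# nothing more.
import cv2
import numpy as np

def find_lane_segments(frame_bgr):
    h, w = frame_bgr.shape[:2]

    # 1. Grey-scale, blur, and edge-detect the frame.
    grey = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    blurred = cv2.GaussianBlur(grey, (5, 5), 0)
    edges = cv2.Canny(blurred, 50, 150)

    # 2. Keep only a trapezoid roughly where the road ahead should be.
    mask = np.zeros_like(edges)
    roi = np.array([[(0, h), (int(0.45 * w), int(0.6 * h)),
                     (int(0.55 * w), int(0.6 * h)), (w, h)]], dtype=np.int32)
    cv2.fillPoly(mask, roi, 255)
    edges = cv2.bitwise_and(edges, mask)

    # 3. Fit straight segments to whatever edges remain.
    lines = cv2.HoughLinesP(edges, 2, np.pi / 180, 50,
                            minLineLength=40, maxLineGap=100)
    return [] if lines is None else [l[0] for l in lines]

if __name__ == "__main__":
    frame = cv2.imread("dashcam_frame.jpg")   # hypothetical example image
    if frame is not None:
        for x1, y1, x2, y2 in find_lane_segments(frame):
            cv2.line(frame, (x1, y1), (x2, y2), (0, 255, 0), 3)
        cv2.imwrite("lanes_overlay.jpg", frame)

A few dozen lines gets you lane segments on a clear freeway; it gets you nothing at all about what anything on the road actually *is*, which is the part that still defeats everyone.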

So far as I'm concerned those Waymo taxis bumbling around Chandler, AZ, as well as they mostly work, are living proof of just how hard this problem is. Waymo's cars carry massive amounts of instrumentation, multiple cameras and lidar, the inputs of which are integrated with massive databases of high-resolution terrain scans that get updated constantly. And yet despite that the cars are limited to operating at less than 45 miles per hour in a tightly geofenced suburban (not urban) area located where it almost never snows nor rains nor fogs... etc. Waymo is compensating for AI weaknesses through sheer brute force, yet even their cars need to call for help on a regular basis. Also, and it's worth pointing this out, Waymo will tell you that almost all the accidents they've had with their vehicles were "the other guy's fault", and legally that's true because the majority of them involved their cars getting rear-ended, but it's also possible that some of those accidents were the direct result of the Waymo car acting in unexpected ways, like suddenly hesitating during a turn, etc. (Even the most booster-y videos I've seen about these cars show them occasionally reacting/overreacting to things and slamming on the brakes.) If you want to talk about automated car progress you really need to ignore clown shows like Tesla and pay more attention to serious actors like Waymo. See the speed they're moving at before you start dreaming that these things will be widespread and able to go "everywhere" any time remotely soon.

Here's a blog run by a real, serious AI vision researcher. A post of his about the possibility of another "AI winter" coming along went viral a few years ago; you might want to spend some time going through his links. (There are other good sources about this as well, but this guy's blog is fun and accessible.) The hype balloon for self-driving was seriously overinflated around 2016 and there's still a lot of air in it, but the holes in it are massive. Don't get suckered.
 
I think I'm largely with Timo on this one. I enjoy driving cars, so I think I might as well take a bus/train/fly if I'm not driving.

However, I also despise features that take control of the car while I'm driving, even traction control and ABS. It snows where I'm at half the year, and you can't stop on snow with ABS, and you can't speed up quickly without throwing snow. I think a good test for a driving AI would be a rally course with many different road conditions. If it can't handle that well, then it won't be able to handle emergency situations during everyday driving well.

I guess I'll go on a bit of a tangent here, but it sounds like a large part of the push for autonomous driving is fueled by "safety". I honestly think that's just a band-aid for the actual problem, which is how the US trains drivers. We have tests that show how well people follow the traffic laws, and that's it. To my understanding, in many other countries, you have to be trained and tested on actual driving techniques. We are told to simply hit the brakes in any dicey situation, and this seems to be programmed into the driver-assist features of new cars (whartung's wife's car example). This is where ABS came from. I think something that many drivers have no concept of is that while braking hard, steering wheel inputs are amplified. I read a statistic from a study in New Zealand that with the advent of ABS, vehicle-to-vehicle collisions were reduced by 5 or 10%, but accidents where the vehicle departs the road INCREASED somewhere in the 20-30% range. I'm sure this issue is something that can be dealt with by AI, but if we don't train human drivers to deal with it, who is going to remember to train AI drivers to deal with it?

I personally see the future of self-driving cars as dark and dingy and boring. It seems like they will just take the place of the high-speed trains that governments are putting off building between big cities, only much less efficiently. I see they are also fueled by the attitude of the current middle-class generation to "serve the self first", while putting up the facade of "helping" others (safety). If we're really pursuing safety, maybe we'd not spend so much on making unsafe things safe (cars), and instead make safe things safer (trains)...
 
... if you really want to get an idea of just how brittle a video-only system like Tesla is going to be for the foreseeable future do some reading on adversarial attacks against machine learning algorithms. Machine vision is particularly prone to these. Drop a few stickers in the middle of an intersection and you can trick a Tesla into a head-on collision. Or scatter a little gibberish on a stop sign and, poof, it isn't a stop sign anymore. When machine vision systems make mistakes they are typically massively stupid mistakes that no human (or animal) would ever make, because the "deep learning" system that's used here actually has no idea what "vision" is, what "objects" are, or anything else that's somehow inherently wired into biological systems in ways we haven't meaningfully figured out how to replicate yet. (There's at least 600 million years or so of evolutionary improvement in play here, so maybe that's understandable.) Humans are certainly flawed in a lot of ways; we doze or zone off, we get distracted, sometimes we even legitimately do make mistakes about what we're seeing, but humans are blessed with an actual native understanding of what "things" are and a native ability to understand how they move and make educated extrapolations based on our visual observations with split-second timing in a way no computer can replicate, at least when limited to similarly imprecise data inputs.
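
If you want a concrete feel for what "adversarial" means here, the canonical classroom example is the fast gradient sign method: nudge every pixel an imperceptibly small amount in the direction that most increases the classifier's loss. The sketch below is only an illustration of that idea, not anything Tesla-specific: it assumes PyTorch and torchvision 0.13 or newer, uses a stock ImageNet ResNet rather than an automotive network, and feeds it a made-up tensor instead of a real camera frame (real preprocessing/normalization is omitted for brevity).

# Minimal FGSM sketch: a tiny, human-invisible perturbation can flip the
# classifier's answer, because the model has no notion of "objects" at all.
import torch
import torchvision.models as models

model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT).eval()

image = torch.rand(1, 3, 224, 224)            # stand-in for a camera frame
image.requires_grad_(True)

logits = model(image)
label = logits.argmax(dim=1)                   # what the model currently "sees"

# Push the input in the direction that increases the loss on its own answer.
loss = torch.nn.functional.cross_entropy(logits, label)
loss.backward()

epsilon = 2.0 / 255.0                          # well below human-noticeable
adversarial = (image + epsilon * image.grad.sign()).clamp(0.0, 1.0).detach()

with torch.no_grad():
    new_label = model(adversarial).argmax(dim=1)

print("label before:", label.item(), "label after:", new_label.item())

The stickers-on-the-road and graffiti-on-the-stop-sign attacks are physical-world cousins of the same weakness: the perturbation is crafted against the model's gradients, not against anything a human eye would care about.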

This blog post has a useful analogy for understanding roughly how "object recognition" works in these systems. The realization is starting to sink in for at least some AI researchers that there may be serious limits to just how low you can make the error rate for a system like this, and that threshold may well be unacceptably high. Tesla's whole sales pitch about automated driving was that their fleet of cars were going to collect so much information from their cameras that they'd be able to use automated Deep Learning to leap ahead of more formal/rigid/brute force systems like Waymo, but unless they have a huge crew of humans sitting around correctly labeling that data it's garbage in->garbage out before you even get started. And even *if* you feed it fully qualified data there are fundamental limitations on how accurate of a classifier you're going to be able to come up with; limits which are actually fairly poorly understood and, again, difficult to actually assign reliability ratings to. And this is why Teslas are still making stupid mistakes and killing the occasional person five years after the first time someone was punished for being dumb enough to trust their life to Autopilot.

Believe it or not I'm actually reasonably strongly in favor of some automation backstops to compensate for human frailties, such as automatic emergency object detection/braking systems and lane departure warnings that could seriously save some lives. (IE, to help stop people from backing over pedestrians or wake them up when they fall asleep at the wheel.) It may *even* be reasonable to talk about limited hands-off systems on controlled access freeways to some extent... maybe. But half-baked garbage like Tesla's that lets people pull these stupid stunts without taking the sort of precautions that other automakers are taking (like GM with the eye-contact driver attention detection system and geofencing to compatible roads they use with Super Cruise) is irresponsible and, in my opinion, badly damages the legitimacy of the technology. Every time a Tesla smacks at full throttle into a truck stopped at a traffic jam (and let's be clear, this latest fatality was *last month* with the most recent version of their self-driving trash) they're setting things back for other more serious and responsible players.
 
I would never trust a self-driving car. There have been quite a few helpful advances in cars, like the backup camera, side vehicle sensing to get rid of blind spots, and ABS brakes, to name a few. But the amount of data and the ability to compute what is going on in highly congested traffic during road construction or natural events like floods, rockslides, hail, etc. means you have to be behind the wheel and alert.
 
Teslas do stupid things like mistake the moon for yellow traffic lights or slam on the brakes because they get confused by something they saw on a billboard all the time. (Not to mention that whole slamming at high speed into other vehicles stopped in their lanes thing, but we'll set that aside for now.) Their customers are just so soaked in the Kool-Aid they don't like talking about that to outsiders.

Don't get suckered.

We agree 100% here.

There are transport systems - trains on rails, airplanes - that lend themselves much better to computer control than cars. Except when something goes wrong with the plane. The initial plan is to get rid of the co-pilot first and replace them with a dog. The job of the dog is to bite the pilot if they touch any of the controls.

Even more of a challenge would be the self-driving motorbike. The motorbike will save humanity, for a while at least, from the legislation that will inevitably make human-driven cars illegal; it will take much longer to achieve, and they will probably have to go to three wheels.

There are multiple things that happen, when a human is driving.

For example, let's say you are looking far down a road with a stream of traffic in a double lane, with a concrete barrier in the middle. You see an object come over the concrete barrier, but then it is out of view again, hidden by the traffic in the adjacent lane. From the way it moved you can identify what it likely was: a cat, a dog, a child, or just a piece of debris or cardboard. You automatically ascribe a risk value to it, and know that at any moment it could run between the other line of cars and re-appear in front of you in your lane. You can prepare in advance for that, just as when a ball rolls from the sidewalk in front of you, a child is likely following. The self-driving car cannot process data like this well in advance; it only reacts and applies the brakes if the object (which it has no clue about) is in its path. This is just one of dozens of examples.

One problem we face is that the self-driving cars are always being compared to the lowest common denominator, a drunk or drugged-up human driver, not the better human driver, so of course they will win out in accident statistics, especially once self-driving cars become the dominant share of cars on the road and they are communicating with each other. By that point it might well be that the plethora of delivery drones and flying cars could be more dangerous.

I read a news report about an experienced race car driver who crashed their car into a tree and died. The data from the car's ECU was downloaded for the forensic crash investigators to peruse. The reporter covering the story said: "They could not understand how the crash occurred, because prior to the accident, the data showed he was driving like a computer". The irony being: what do computers do? They crash, don't they.
 
With the kids from the university here in town, it's not uncommon to see a stop sign "tagged" or simply stolen. What does the FSD software do in such a case? A human driver might have the presence of mind to say "oh, that's a stop sign with a graffito" or "there used to be a stop sign here; maybe I should stop and check for cross traffic..." I'd like to see a car with FSD enabled handle some of the crazy traffic circles we have here.
 