iansane
iansane GRM+ Member and New Reader
12/10/19 1:17 p.m.
STM317 said:

In reply to iansane :

I guess I'd ask if it has to be under tested code? There are plenty of places where code is thoroughly tested in simulations and controlled environments before it's released in the public realm. Other automakers are doing it and rolling their tech out in much more controlled/limited ways. "As bad as an inexperienced teenager" shouldn't be the standard we set for tech being sent into the world.

A modern passenger aircraft has 150 million lines of code. How many aircraft are released with "under tested code"? 737 Max jumps to mind, but nothing else really (there are a bunch of plane people here that might clarify). What kind of track record do those aircraft have in the field? Now consider that there are far fewer "threats" for an aircraft in the air than a vehicle moving through traffic. Airliners are typically traveling very far apart, and have 3 dimensions of movement to maneuver around potential issues. Vehicles travel in close proximity to one another, at different rates of speed, in two dimensions, while also dealing with stop/go situations, hazards in the path of travel and intersecting paths, etc.

By untested I guess I mean unperfected, because I imagine there is always going to be room for improvement. Tests can be finished. And I'm not saying an inexperienced teenager should be our standard, but that's a worst case. Hell, is unperfected code any worse than half of the "normal" drivers out there? I don't know if you live in an urban setting, but the kind of asshatery I see on our roadways multiple times a day is breathtaking. Although the bad driving I see ultimately stems from selfishness and not-giving-a-E36 M3 about others on the roadway. I guess that might be my underlying viewpoint. I'd rather see autopilots released to be tested and improved than shelved on closed circuits and held off for decades of testing.

Knurled.
Knurled. GRM+ Member and MegaDork
12/10/19 1:25 p.m.

In reply to STM317 :

After I get home, I'll get some links of airline crashes where the pilots incorrectly assumed that the autopilot was in control.

 

Complacency is what causes collisions.

BlueInGreen - Jon
BlueInGreen - Jon SuperDork
12/10/19 1:32 p.m.
iansane said:
Hell, is unperfected code any worse than half of the "normal" drivers out there? [...] I'd rather see autopilots released to be tested and improved than shelved on closed circuits and held off for decades of testing.

A “bad driver” isn’t going to run into an emergency vehicle with flashing lights and flares on the shoulder because they thought a tar strip was a lane stripe.

Unless they were asleep at the wheel, I guess.

yupididit
yupididit UberDork
12/10/19 1:35 p.m.
BlueInGreen - Jon said:

A “bad driver” isn’t going to run into an emergency vehicle with flashing lights and flares on the shoulder because they thought a tar strip was a lane stripe.

I've seen a "bad driver" run into an emergency vehicle with flashing lights and flares on the shoulder because they saw an emergency vehicle with flashing lights and flares on the shoulder, yet somehow ran right into it. That isn't an uncommon occurrence, sadly.

iansane
iansane GRM+ Member and New Reader
12/10/19 1:37 p.m.
BlueInGreen - Jon said:

A “bad driver” isn’t going to run into an emergency vehicle with flashing lights and flares on the shoulder because they thought a tar strip was a lane stripe.

Unless they were asleep at the wheel, I guess.

I mean no disrespect but I think you have a much higher opinion of the lemmings on the highway than I do.

Keith Tanner
Keith Tanner GRM+ Member and MegaDork
12/10/19 1:38 p.m.
Knurled. said:

Complacency is what causes collisions.

That's a great line. It's exactly what the problem is with the current level of self-driving. It works just well enough to generate complacency.

BlueInGreen - Jon
BlueInGreen - Jon SuperDork
12/10/19 1:45 p.m.

In reply to yupididit :

My point was more that if someone falls asleep or isn’t paying attention and hits something it’s an accident and blame can be assigned.

If a computer steers into something it’s a calculated decision.

I guess you could argue that a car is still wrecked either way so it doesn’t matter but I don’t know if I’d agree with that.

STM317
STM317 UltraDork
12/10/19 1:51 p.m.

In reply to iansane :

I guess I'm not against the idea of these semi autonomous driving systems, just the way that Tesla specifically is developing theirs. I think a safer world is a good thing. But there's a general trend when it comes to the development of things that might kill people to get the tech as ready for consumption and bug free as possible. It's not always perfect, but it's usually pretty fleshed out. Testing a vehicle is different than working out the bugs of a video game or new phone that you can patch after the fact. Nobody gets hurt or killed when their video game glitches. Tesla has acted like a software company, and treated the development of their vehicles much the same as software, but there's far more on the line.

GM currently has a similarly capable system, but only allows its use in very defined, controlled scenarios. It also is more focused on driver interaction and control while in use. It's got less upside for a user because it's limited in its application, can't be updated as often, and doesn't benefit from Fleet Learning like Teslas do. But there's less downside too, for users and everyone else on the road. If you read the article I've linked a couple of times now, it's easy to tell which company has been making/testing vehicles for a long time, and which one is a tech company that happens to make vehicles. They both bring their industry mindsets into the equation, and their different approaches from the start are pretty obvious. In the end, neither system is perfect; they both have their flaws. However, I'd argue that the tech will gain wide acceptance faster (and with less litigation) if it's rolled out safely and responsibly over time than if it's thrown against the wall to see what sticks and you get a few high-profile accidents to negatively change public perception of the tech, start the lawsuits, tank the company's stock, etc.

BrewCity20
BrewCity20 New Reader
12/10/19 1:52 p.m.
Knurled. said:

In reply to _ :

That is why I don't get the appeal.  It would seem to me that just sitting there, having to constantly mind what is going on and prepared to take control, would be more fatiguing than just driving the car in the first place.

Took the words right out of my mouth. I'm fascinated by the technology and am glad it is being developed, but the idea of watching the road with my hands hovering over the steering wheel and feet hovering by the pedals during a trip sounds more exhausting than just driving to begin with.

1988RedT2
1988RedT2 MegaDork
12/10/19 3:30 p.m.
BrewCity20 said:

Took the words right out of my mouth. I'm fascinated by the technology and am glad it is being developed, but the idea of watching the road with my hands hovering over the steering wheel and feet hovering by the pedals during a trip sounds more exhausting than just driving to begin with.

Exactly.  It's a pretty cool trick to show your friends, but if you can't trust it with your life, what the hell good is it?

Keith Tanner
Keith Tanner GRM+ Member and MegaDork
12/10/19 3:42 p.m.

That's why I tried the Autosteer (not full "self driving") feature on my car and then turned it off again. It's like riding shotgun with a nervous 15 year old who just gives up when things get hard.

The car has lots of other attributes I like, and I'm happy to have it datalog everything to add to the fleet database, but I'll do my own steering, thanks. I don't find that a problem; the car goes where my eyes go, so if I'm looking down the road I might as well have my hands on the wheel.

The adaptive cruise is arguably more useful on the Tesla than on other cars as you don't get the same speed feedback from the engine note that you get with an ICE. Speed control is not a new thing on cars, it was an option on my '66 Cadillac.

So - how do we get there (actual autonomy) from here (cars that can handle most but not all situations)?

Harvey
Harvey GRM+ Member and SuperDork
12/10/19 3:55 p.m.

If we're going to say human stupidity and carelessness should negate the use of a particular technology because it might get someone killed then no one would be driving a car ever again.  There were around 40,000 automotive fatalities in 2018 and the numbers trend steadily upward as you go farther back.

If we're going to say that cars should never be deployed with possibly buggy hardware or software then again we will probably never be driving cars. The Takata airbag recall is one of the most recent in a series of large scale automotive screw ups that have killed people and in that case it was because auto manufacturers wanted to save a few bucks on safety.

Knurled.
Knurled. GRM+ Member and MegaDork
12/10/19 4:07 p.m.

In reply to Harvey :

Was the Takata airbag thing a "screwup", or just people expecting a different spec than what they were engineered for?

 

It's my understanding that they only got to be dangerous after 10 years or so, which IMO is reasonable, given that I also remember airbag-equipped cars from the '90s with placards in the doorjambs listing each airbag's date of manufacture and a note that airbags should be replaced after ten years. If that was the spec they were engineered to, then fine, but in that case it sounds like they didn't notify the automakers of that decision.

Harvey
Harvey GRM+ Member and SuperDork
12/10/19 4:30 p.m.
Keith Tanner said:

So - how do we get there (actual autonomy) from here (cars that can handle most but not all situations)?

The reality of an autonomous car will probably come down to a combination of different technologies: LIDAR and RADAR sensors, plus GPS, external cameras, and AI with 3D mapping technology.

Wired has a pretty good series of articles going on this subject https://www.wired.com/story/guide-self-driving-cars/

It's an interesting series of problems, because you have all of this data, but each system is somewhat limited in how it can be used in different situations. For instance, the best GPS can place you within 3 meters, but that's not accurate enough to park a car where inches might be involved. So you have sensors on the car that can tell it how far away it is from objects, but those are limited as well in how they tell the car about obstacles: a bumper sensor might not detect a curb, and in inclement weather like snow or hard rain it might not be able to detect anything. And LIDAR and RADAR can't give an AI enough data about an obstacle to tell it whether it should crash into another car to avoid crashing into a human.

Then you get into cameras all around the car, but cameras also aren't completely immune to inclement weather and obstacles. That's where 3D maps come into play. There are companies out there, including Google and Tesla, that are going around making maps of the world that can be interpreted by an AI as if it were the player in a first-person, high-resolution virtual reality game. These companies send out vehicles that take high-res photographs of everything around them (or, in the case of Tesla, undoubtedly use the data every one of their cars is providing). They use that to build these 3D maps, and this fills in the gaps left by those other systems. Every rock, tree, road, sidewalk, street sign, building, etc. is accounted for in these things. When the car can't see certain things via camera or sensor, it still has this internal representation of the world that it can reference. Stop sign blocked by a tree, or snow covering up the lines on the road? 3D map.

Then you get into cross-vehicle communication. If the car ahead knows it is coming to a drastic stop, it can tell the cars behind it to slow down.

The complexity of creating an AI that can combine all these sources of data and make sense of them, so that a car can drive from point A to point B without running anyone over or getting into an accident, is where we are at now. Most of these data sources are not fictional anymore; they are a reality.
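That layered-fallback idea can be sketched in a few lines. This is a toy illustration only; the source names and data shapes are invented, not any real autonomy stack:

```python
# Toy sketch of the fallback chain described above: each sensor source
# has blind spots, so the car consults the next source when one fails.
# All names and data shapes here are invented for illustration.

def locate_obstacle(*sources):
    """Return (reading, source_name) from the first source that can see."""
    for source in sources:
        reading = source["obstacle"]  # None means the source is blind
        if reading is not None:
            return reading, source["name"]
    return None, "none"

# Camera blinded by snow, LIDAR sees the obstacle, 3D map held in reserve:
camera = {"name": "camera", "obstacle": None}
lidar = {"name": "lidar", "obstacle": {"distance_m": 12.0}}
map3d = {"name": "map3d", "obstacle": {"distance_m": 11.5}}

reading, used = locate_obstacle(camera, lidar, map3d)
print(used)  # lidar
```

The hard part, of course, is not the fallback order but deciding when a source's answer can be trusted at all.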

Harvey
Harvey GRM+ Member and SuperDork
12/10/19 4:45 p.m.
Knurled. said:

Was the Takata airbag thing a "screwup", or just people expecting a different spec than what they were engineered for?

From what I've read the danger from the Takata airbags increases via a combination of age and exposure to heat and humidity. So, if you live in Florida the timeline for the airbags becoming defective could be accelerated.  Also, while older airbags indicated a lifetime on them, the risk was that they would not activate, but in the case of the Takata airbags the risk becomes them exploding and hurling shrapnel through the cabin.

Keith Tanner
Keith Tanner GRM+ Member and MegaDork
12/10/19 4:45 p.m.

I'm not asking from a technology standpoint, I'm reasonably solid there.

But testing and deployment - how do we decide that they're okay to release, to be on the roads? Humans, as we know, are atrocious at judging risk and somehow will accept Distracted Debbie or Drunk Uncle Bob behind the wheel long before they'll accept Skynet. The biggest complaint about Tesla's FSD is that it's "not ready". How do we get it ready? We have to let them out at some point; what is that point, and how do we decide we're there?

I have concerns about putting too much faith in a mapped model. That doesn't scale well, because the world is a very dynamic place and that model is obsolete basically immediately. Using every Tesla out there makes a lot of sense because it keeps the model updated constantly. Having Google send cars around doesn't. Google Street View hasn't visited my address in over 7 years.

Harvey
Harvey GRM+ Member and SuperDork
12/10/19 5:11 p.m.
Keith Tanner said:

I have concerns about putting too much faith in a mapped model. That doesn't scale well, because the world is a very dynamic place and that model is obsolete basically immediately. Using every Tesla out there makes a lot of sense because it keeps the model updated constantly. Having Google send cars around doesn't. Google Street View hasn't visited my address in over 7 years.

The people making the tech can't put their faith in any one thing, that's the point. You still have the GPS, the sensors and the cameras along with the 3D mapping and possibly car communication. The 3D mapping is never going to be 100% accurate, but it's accurate enough to fill in the gaps. Google Street View hasn't visited your address in over 7 years because there is no one depending on it for accurate visual representations of the world. When every car has a series of cameras on it that record what is going on around them how difficult do you think it will be to obtain updated imagery for those 3D models? They will undoubtedly be updated in real time as cars travel to those places.

There are a series of technical, legal, and societal changes that have to occur before autonomous driving becomes mainstream, but in reality, once one of these companies develops a system that is demonstrably foolproof, conquering the technical issues, most of the legal and societal concerns will be cast aside. All it will take is the tech reaching maturity.

Keith Tanner
Keith Tanner GRM+ Member and MegaDork
12/10/19 5:21 p.m.

How do you demonstrate "foolproof"? How do you determine the tech is mature?

That's my question. It's easy to just say "oh, when it's ready it'll be accepted". But how does it get ready? How do we prove that it's ready so that a majority of people will accept it?

I'll admit I do tend towards the "calculate everything in real time" technique, because of the short-lived nature of 3D maps. You either have to count on them 100% or you can only use them for general guidance, and in that case you might as well use a street map because you can't rely on the details anyhow.

My Street View comment was intended to illustrate why we can't rely on a specific entity just doing mapping. It would have to be crowd-sourced, so to speak. All those mobile cameras are going to have to share their information.

Harvey
Harvey GRM+ Member and SuperDork
12/10/19 6:14 p.m.
Keith Tanner said:

I'll admit I do tend towards the "calculate everything in real time" technique, because of the short-lived nature of 3D maps. You either have to count on them 100% or you can only use them for general guidance, and in that case you might as well use a street map because you can't rely on the details anyhow.

My Street View comment was intended to illustrate why we can't rely on a specific entity just doing mapping. It would have to be crowd-sourced, so to speak. All those mobile cameras are going to have to share their information.

When I referred to updating things in real time, what I meant was that a car using a 3D map would not necessarily have all available imagery in its memory at once. It might only download a radius of 10 miles at a shot from a distributed network of servers via a 5G connection, and at the same time upload footage that can be used to update those maps in real time for other cars to download. Some of these car makers will undoubtedly contract this sort of thing to companies that dedicate themselves purely to providing 3D map services; those companies are already out there. It's all of a piece, though, with the car's GPS, local sensors, and cameras - you can't rely on the 3D imagery all the time.
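A crude sketch of that download-a-radius scheme, just to make the idea concrete (the tile size, coordinates, and 10-mile figure are placeholders, not any real map service):

```python
import math

# Crude sketch of streaming map data in a radius around the car rather
# than carrying the whole world. Tile size and coordinates are placeholders.

TILE_MILES = 1.0

def tiles_in_radius(x_mi, y_mi, radius_mi=10.0):
    """Set of tile coordinates within radius_mi of position (x_mi, y_mi)."""
    r = int(math.ceil(radius_mi / TILE_MILES))
    cx, cy = int(x_mi // TILE_MILES), int(y_mi // TILE_MILES)
    return {(cx + dx, cy + dy)
            for dx in range(-r, r + 1)
            for dy in range(-r, r + 1)
            if math.hypot(dx, dy) * TILE_MILES <= radius_mi}

# As the car moves, it only fetches the tiles it doesn't already hold:
cached = tiles_in_radius(0, 0)
needed = tiles_in_radius(1, 0) - cached  # thin crescent of new tiles
```

The point is that the working set stays small and is refreshed continuously, which is exactly why the whole-world map never has to be current everywhere at once.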

How you demonstrate something is foolproof is by marketing it as such and then inviting people to prove you wrong. Elon Musk loves making grand claims and people love trying to prove him wrong. The majority of people accept things when the media and their peers tell them it's okay to accept it. Ten years ago you would have been looked at as kind of an oddball if you had an all electric car since most of them didn't travel very far and there weren't a lot of options for charging it. Now that the technology has matured you see them everywhere.

I think the actual question you're asking, though, is if it has to be foolproof, what does that mean? Say the technology is good enough to reduce traffic fatalities by 99%, but in 1% of cases it fails. If 40,000 people a year die in auto accidents prior to the tech, once it is introduced does the headline read "AI Cars Killed 400 People Last Year" or "AI Cars Saved 39,600 Lives Last Year."
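The arithmetic behind those hypothetical headlines, as a quick sanity check (the 99% and 40,000 figures are the hypothetical ones from the paragraph above):

```python
# Quick check on the hypothetical headline math above.
baseline_deaths = 40_000  # roughly annual US traffic fatalities
reduction = 0.99          # hypothetical effectiveness of the tech

deaths_with_ai = round(baseline_deaths * (1 - reduction))
lives_saved = baseline_deaths - deaths_with_ai
print(deaths_with_ai, lives_saved)  # 400 39600
```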

Knurled.
Knurled. GRM+ Member and MegaDork
12/10/19 6:21 p.m.
Keith Tanner said:

How do you demonstrate "foolproof"? How do you determine the tech is mature?

 

When it can navigate, at full traffic speed, roads for which it has no downloaded data - i.e., it is able to rely entirely on onboard sensors. It cannot rely on lane marker lines, because those are unreliable. (What if the road was freshly paved and hasn't been painted yet? This is a common occurrence here. And of course rain/snow can obscure the lines.)

 

In short, it should be able to function using only the things that humans use to operate vehicles, since that is the standard by which we expect cars to be driven.  If it can do that, it should be good to go.

AAZCD
AAZCD HalfDork
12/10/19 6:27 p.m.
Knurled. said:
In short, it should be able to function using only the things that humans use to operate vehicles, since that is the standard by which we expect cars to be driven. If it can do that, it should be good to go.

Including all reasonable weather and construction zones: lane shifts and detours, high snow banks, and prediction/avoidance of hydroplaning and black ice.

Keith Tanner
Keith Tanner GRM+ Member and MegaDork
12/10/19 6:30 p.m.

The thread title is "Tesla Auto-Pilot not ready for prime time?"
Musk marketed it as such and then invited people to prove him wrong. Well, it appears that popular opinion is saying he's wrong.

Is it simply a matter of a yelling contest in the media? In that case, the blood and guts articles will win out. AI cars killing poor innocent babies will always get more attention than AI cars not getting drunk and plowing into a tree (3D mapped or otherwise).

To me, it's an improvement when the autonomous cars are safer in terms of fatalities per mile driven than an equivalent fleet of human-driven cars. By equivalent, I'm not counting 25 year old rustbuckets because the consequences of a crash are much higher in those. You'll also want to correct somehow for people having to drive because the autonomous cars have decided they're not up to the job (when maybe the human should have also made that decision?) although that's going to take a bit of number juggling. But as long as there are AI cars on a killer rampage, that metric will not work for most people. They'll have to be an order of magnitude safer at least before we start to get even grudging acceptance.
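That per-mile metric is easy to state concretely. Every number below is a placeholder for illustration, not real fleet data:

```python
# Sketch of the fatalities-per-mile comparison described above.
# Every number here is a placeholder, not real fleet data.

def rate_per_100m_miles(fatalities, miles):
    """Normalize a fatality count to a per-100-million-mile rate."""
    return fatalities / miles * 100_000_000

human_rate = rate_per_100m_miles(fatalities=36_000, miles=3.1e12)
ai_rate = rate_per_100m_miles(fatalities=2, miles=2.0e9)

# "An order of magnitude safer" would mean the AI rate clears this bar:
order_of_magnitude_safer = ai_rate < human_rate / 10
print(order_of_magnitude_safer)
```

The normalization matters because the fleets drive wildly different mileages; raw body counts, which is what headlines report, tell you almost nothing.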

alfadriver
alfadriver MegaDork
12/10/19 7:07 p.m.
Keith Tanner said:

But testing and deployment - how do we decide that they're okay to release, to be on the roads? Humans, as we know, are atrocious at judging risk and somehow will accept Distracted Debbie or Drunk Uncle Bob behind the wheel long before they'll accept Skynet. The biggest complaint about Tesla's FSD is that it's "not ready". How do we get it ready? We have to let them out at some point; what is that point, and how do we decide we're there?

To be honest, the rest of the industry is setting standards and answering the tough questions (like who you kill in situation X, where a death is unavoidable and the only choice is whose). When you go through all of those decisions, and demonstrate that you properly answer them, then you release the product.

This is where Tesla as a corporation drives me nuts - they have released cars that were not quite done before, not just for auto driving, but in basic operation and safety. And they get away with it.

A lot of that is due to the hype and enthusiasm around their product, and to be honest, they have earned the enthusiasm and all of the flaws that it overlooks (and it does overlook them; I have seen Ford's data showing that if you put out a POS but it's a fun one, consumers are willing to overlook it). But once the 3 goes to the actual general public, that will certainly hurt Tesla quite a bit.

IMHO, this is the area where, arrogantly, Musk thinks he can do it better than us - it takes 3 years to fully re-develop a product and up to 6 years for a 100% brand-new product, with thousands of engineers and techs working probably millions of hours making sure everything is done to a proper standard. Musk and many others find fault with the auto industry because we all take too long - but it takes a long time for a reason, and as far as I have seen, nobody has discovered the magic to eliminate real development.

So to circle back to your original question, Keith - set standards using as many proper inputs as you can, including crossing industries, DO NOT IGNORE points, demonstrate that you can deal with all of them, and THEN you are ready to release it. The point is that it should not be left up to the public to find out if a technology is ready or not; you know it before you release it.

(FWIW, I've only opened this last page, have no idea what happened about this specific issue, but if it's anything like the past failures, they are not totally surprising situations- they were things autonomous cars were advertised to deal with)

Knurled.
Knurled. GRM+ Member and MegaDork
12/10/19 7:11 p.m.
iansane said:

I mean no disrespect but I think you have a much higher opinion of the lemmings on the highway than I do.

I have had somebody follow me to a stop on the berm of an offramp, well away from the end of the ramp with hazards on, and honk at me for not moving.  They sat there for a good ten seconds before they realized that we were no longer actually on the road, and they took off in a huff.  (Ten seconds is an eternity when you are trying to figure out if they are just clueless, or about to attempt to carjack you)

 

Most people are zombies when driving a vehicle.  Treat them accordingly and you will never be disappointed.

codrus
codrus GRM+ Member and UberDork
12/10/19 7:57 p.m.
Keith Tanner said:

In 1996, I had a computer prof explain to me in great detail how full screen streaming video was impossible.

Bah.  We knew how to do full screen streaming video in 1996, we just didn't have networking hardware that was fast enough.  That's a simple application of Moore's law -- more transistors made everything faster.

As with practical fusion power, strong AI has been "20 years away" since the 1960s. It's not a question of computer horsepower; we genuinely have no idea how to do it, even if we had computers a million times faster than the ones we have today. As I said, we don't even really know how human intelligence actually works.

 
