Saw My First Tesla Cyber Truck on the road today

20,248 Views | 267 Replies | Last: 1 yr ago by Stat Monitor Repairman
Ag with kids
AG
hph6203 said:

These are the last 3 major presentations that Tesla has done with respect to their FSD software. Most of it is over my head, but I'm fairly sure I have the basics of how it operates down.

https://www.youtube.com/live/j0z4FweCy4M?si=kxycvN9e8ppMWRNG

https://www.youtube.com/live/ODSJsviD_SU?si=5evU0N9fLBzyTiDm

Timestamp Link isn't working since it was a livestream, but it's 1h20m18s into the video. The rest is about Tesla's overall roadmap, material constraints, how they're driving down costs and rapidly deploying charging infrastructure. The talk with respect to FSD/real world AI is only about 15 minutes long.

https://m.youtube.com/live/Hl1zEzVUV7w?si=fjdlFSyCSLjPeI_t%3Ft%3D4818s
So, it appears the Tesla FSD is still SAE Level 2. Not bad, but not Level 3...

That's not a bad thing. This technology is quite difficult and mistakes can be very bad, so it's better that they take their time.

This is NOT something that needs to be hurried.
Ag with kids
AG
TexasRebel said:

Who said rockets couldn't land & be reused? With 1960's tech, of course not.

Problem solving comes in when you see something in the road you've never seen before. You, as a human, can decide a course of action that you deem will be your best chance of survival. Software cannot.

Mechanical failures are another spot. When you apply the service brakes, you expect certain feedback. Software can do this, too. But if you get unexpected feedback, what do you do? Stop dead in I-35 traffic with the e-brake? Hit the softest object nearby? Find a runaway ramp? Maybe a wheel falls off. A human can learn the new control characteristics in a heartbeat. Software has no heart, nor the ability to learn how a suddenly 3-wheeled car steers.

Are some humans better at problem solving than others? Yes. Are those humans better drivers? Also, yes.

Also. Do you really want to willingly or unwillingly bet your life against a kernel panic?


Case in point...the folks in San Francisco who have stopped Cruise vehicles by putting traffic cones on their hoods. The vehicles are designed to essentially stop when they encounter a problem they can't solve. Not a bad first-level failsafe.

BUT...that can have second order problems on the rest of traffic that is NOT having that problem.

This stuff will get worked out, though.

Level 4 and 5 are a LOT harder to achieve than some people on here apparently think they are...
hph6203
AG
From a regulatory perspective it's a level 2 system, but from a functionality standpoint I'd say it reaches level 3 while driving on highways. They just don't define it as that because they don't want to deal with the regulations. You could get on the highway in Houston and drive to Phoenix while barely, if ever, touching the controls.

Their roadmap is to operate at level 2 until they broadly achieve level 4 or 5.

The argument about which system is better, Waymo or Tesla (Cruise has fallen off significantly in the last month), is more about approach than about how far along the system is toward having no driver. Waymo pre-maps the areas it operates in with HD maps and uses expensive LiDAR sensors to improve its perception, whereas Tesla is a pure vision system (soon to add radar back in to deal with obstructions to vision). So it's a question of whether Waymo, even if it succeeds in building broadly deployable software, can be broadly deployed profitably from a hardware perspective.

Waymo is unquestionably further ahead in making one vehicle that can drive in a specified area of the country without a driver; the question is how close they are to being able to deploy millions of vehicles across the entire country. Believers in FSD think Tesla has the right approach because it doesn't require pre-mapping and the car can be produced at the price of a mass-produced vehicle.

The next major software release, V12, is supposed to smooth out the lumpiness in the rate of improvement and accelerate it. We'll see if that materializes.
hph6203
AG
You underestimate the amount of data available to the system. You're talking about a million miles of information every day within the system itself and approaching 100 million miles of queryable data every day from the fleet.

Even so, the goal is not to be a flawless system; it's to be safer than a human. While a human might be able to navigate those edge cases more reliably/rapidly, the software in a mature state will navigate the normal circumstances more consistently. No drunk driving, no running red lights/stop signs, no changing lanes into the vehicle next to you. You're trading adaptability in the edge cases for consistency in the common case. Accidents aren't commonly caused by people failing in edge cases; they're caused by inattentiveness in the normal case.

You also underestimate the ability of the system to extrapolate from a limited data set. The vehicle is likely to deal with your flying-tire scenario better than the average human. You overestimate how much problem solving the human is doing in that scenario and underestimate how much of it is pure reaction. Sometimes those reactions are successful, sometimes they're not, and we lie to ourselves and say we did a good job dealing with it rather than admitting some level of luck made our actions meet up with reality in a successful way.
Ag with kids
AG
hph6203 said:

From a regulatory perspective it's a level 2 system, but from a functionality standpoint I'd say it reaches level 3 while driving on highways. They just don't define it as that because they don't want to deal with the regulations. You could get on the highway in Houston and drive to Phoenix while barely, if ever, touching the controls.

Their roadmap is to operate at level 2 until they broadly achieve level 4 or 5.

The argument about which system is better, Waymo or Tesla (Cruise has fallen off significantly in the last month), is more about approach than about how far along the system is toward having no driver. Waymo pre-maps the areas it operates in with HD maps and uses expensive LiDAR sensors to improve its perception, whereas Tesla is a pure vision system (soon to add radar back in to deal with obstructions to vision). So it's a question of whether Waymo, even if it succeeds in building broadly deployable software, can be broadly deployed profitably from a hardware perspective.

Waymo is unquestionably further ahead in making one vehicle that can drive in a specified area of the country without a driver; the question is how close they are to being able to deploy millions of vehicles across the entire country. Believers in FSD think Tesla has the right approach because it doesn't require pre-mapping and the car can be produced at the price of a mass-produced vehicle.

The next major software release, V12, is supposed to smooth out the lumpiness in the rate of improvement and accelerate it. We'll see if that materializes.
a) There's no regulatory system that defines the levels. It's a STANDARDS system, SAE, that defines them.
b) The functionality is SAE Level 2 based on the definition of the SAE automation levels. Just because you don't touch the controls does not mean you've changed levels. You're still operating at that level.
c) Of course believers in FSD think Tesla has the right approach. FSD is a Tesla branded level of automation software.


SAE J3016


TexasRebel
AG
Flying tire? No. Failure of your own equipment.

Heck double it up. A flying tire with modified control characteristics due to mechanical failure.

You're fatally underestimating the infinite number of edge cases.

Remember limits? Take the limit as x approaches infinity of c/x, where x is the number of possible edge cases and c is a finite amount of known data.

…is zero. The computer ultimately knows nothing.
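Restating that in symbols (nothing new, just the claim above written out):

```latex
\lim_{x \to \infty} \frac{c}{x} = 0
% c = finite amount of known/training data (fixed)
% x = number of possible edge cases (unbounded)
```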

No accidents happen when everyone and everything on the road is doing what they are expected to do.
hph6203
AG
There are liability and regulation implications in the first section of the standards. The definition of level 2 vs level 3 is, in part, about who is responsible for operating the vehicle. When you are defined as a level 3 system, you are shifting the liability to the system rather than the driver in the scenarios where it is applicable.

Enhanced Autopilot is capable of changing lanes, following navigation instructions, lane keeping, and adjusting speed while on the highway. Those capabilities exceed the level 2 minimums, but they do not operate at a level above that because they don't want to accept the liability, so it cannot be defined as anything other than level 2.

It performs all of the requirements for driving on the highway. From a functional perspective it is a level 3 system on the highway, except for the shifting of responsibility from the driver to the system. Tesla never accepts responsibility for the driving, therefore it is not considered a level 3 system, even if it broadly functions better than a system like Mercedes', which is defined as level 3 in certain narrow circumstances. People will say Mercedes is more advanced because, in a series of nested ifs, they will absorb liability, even if broadly their system functions worse.

FSD is capable of making left-hand turns, right-hand turns, yielding, and adapting to speed limit changes, but by definition it is a level 2 system because the driver is never relieved of liability. The levels are not a representation of the extent of functionality, for that reason. Under those standards FSD is defined the same as Autopilot, even though FSD is far more capable.
BudFox7
Conservatives' obsession with electric cars is really weird.
hph6203
AG
Electric vehicles have redundancy in their ability to brake that goes beyond brake/emergency brake. You're talking about a substantial mechanical failure that may or may not be resolvable by a human.

You're talking about edge-case causes of crashes that are not always (or even often) avoided by humans. What you're doing is applying a standard of flawless operation to the software that you don't expect from a human, and concluding that it's impossible for software to work because a human COULD succeed in those scenarios (the same way the software could, even if it had never seen them before).

The edge cases are not the dominant cause of crashes. Driver error is. Someone runs a red light or a stop sign, changes lanes into a car they didn't see in their blind spot, road-rages, or avoids an object in the road (like a tire) and rams into a guard rail because they panicked. Those scenarios are avoidable with an appropriately trained autonomous system. This has already been demonstrated: Waymo, in its operation, has no driver and is safer in the scenarios in which it drives than a human.

Accidents will happen with autonomous driving vehicles. People will die. The goal is to reduce the rate of those accidents and as time progresses the system improves at dealing with those scenarios, because unlike humans those edge case scenarios won't be novel scenarios to the software.

I have never experienced a rogue wheel on the highway. Not once. FSD has. Hundreds of times. Eventually that will be thousands. And millions.


AlphaZero would adapt to a changing rule set and exceed the capabilities of a human faster than the human could. You dramatically overestimate what the human brain is doing. It is, again, a belief that a soul is required in order to do something, which was the same argument made for chess and Go. AlphaZero does not understand Go; it recognizes patterns that even humans don't recognize. That is how an autonomous vehicle will function. It is just a more complicated problem, because the rule set is broader. You're getting into a spiritual argument rather than a technical one.

The computer does not have to "know" anything, it just has to be able to react and you underestimate its capacity to do that.
hph6203
AG
And just to be clear, I'm not saying the solution to this is imminent. But the idea that a computer isn't capable of dealing with driving because it isn't "intuitive" is not accurate, and the same was claimed about systems that have since been thoroughly conquered by computers.
Ag with kids
AG

Quote:

There is liability and regulation related to the first section of the standards. The definition of level 2 vs 3 is, in part, who is responsible for operating the vehicle. When you are defined as a level 3 system you are shifting the liability to the system rather than the driver in the scenarios where it is applicable.

That is completely wrong. The standards don't deal with "liability" and the regulations don't define the standards.

I gave you the SAE J3016 standard that is being used by Tesla and most regulators (definitely the DOT). It gives the explanation of how each Level is achieved.

The standard Levels are defined by what level of functionality is being performed by the automation versus the driver.

Quote:

Enhanced Autopilot is capable of changing lanes, following navigation instructions, lane keeping, and adjusting speed while on the highway. Those capabilities exceed the level 2 minimums, but they do not operate at a level above that because they don't want to accept the liability, so it cannot be defined as anything other than level 2.
If the functionality exists in the software, but is not activated, it doesn't matter that it exists. The system either functions at Level 3 or it doesn't. And Tesla's FSD functions at Level 2.

Quote:

It performs all of the requirements for driving on the highway. From a functional perspective it is a level 3 system on the highway, except for the shifting of responsibility from the driver to the system. Tesla never accepts responsibility for the driving, therefore it is not considered a level 3 system, even if it broadly functions better than a system like Mercedes', which is defined as level 3 in certain narrow circumstances. People will say Mercedes is more advanced because, in a series of nested ifs, they will absorb liability, even if broadly their system functions worse.
That means that it is Level 2. Look at the standard. It doesn't have to do with "liability", it has to do with functionality.

If Tesla were to say the FSD has the code in it to operate at Level 5, but they never turn it on above Level 2, then it is a Level 2 system.

The Mercedes system is defined as Level 3 BECAUSE it operates with the functionality defined by the Level 3 standard. And it operates in the constrained environment BECAUSE they've only been granted authority to operate at Level 3 in that constrained environment.

"Worse" is not something that is taken into account in the standards. It either has the functionality or it doesn't. If it does, it meets the standard...

Quote:

FSD is capable of making left-hand turns, right-hand turns, yielding, and adapting to speed limit changes, but by definition it is a level 2 system because the driver is never relieved of liability. The levels are not a representation of the extent of functionality, for that reason. Under those standards FSD is defined the same as Autopilot, even though FSD is far more capable.
You keep using that word "liability". It's FUNCTIONALITY that is not relinquished. The next sentence is 100% wrong. LOOK at the document AND chart I posted - it tells you how the levels are defined.

BTW, FSD and Autopilot are just branding names for software that provides autonomous control. There's nothing special about one vs the other EXCEPT for the level of functionality.

Tesla even tells you that Autopilot and FSD do not make the vehicle autonomous.

Quote:

The currently enabled Autopilot, Enhanced Autopilot and Full Self-Driving features require active driver supervision and do not make the vehicle autonomous.
That means (and Tesla agrees) that it is operating at SAE Level 2.
Ag with kids
AG
BudFox7 said:

Conservatives' obsession with electric cars is really weird.
You're on this thread...
hph6203
AG
When you're judging how advanced/far along the system is, and defining one as level 2 because the operator is unwilling to accept liability, then you're liable to come away with a misimpression of how advanced the system actually is. Hardly anyone cares about the standard's definitions, at least not on this forum; they care about the capabilities of the system, which are not wholly encompassed by the standards.

That's the point I'm making. When you define the system as a level 2 system, you're functionally saying it's the equivalent of a lane-keeping/adaptive-cruise-control system in terms of its complexity, in most people's minds. It is not. What it is capable of and what level it operates at are not the same thing.

Waymo can operate at level 4 because it is more advanced in the circumstances it operates in, but also because the scale at which they operate (~1000 vehicles) affords them the opportunity to operate at a riskier level than a company operating a fleet of 2,500,000. If both fleets have the same level of functionality and both get in accidents every 250,000 miles, like a human driver, the Waymo fleet with its 1000 vehicles would experience a crash every 6.5 days, or 56 times per year. With 2,500,000 vehicles, Tesla would experience 140,000 crashes every year, which is a crash every 4 minutes. So while liability isn't part of the standard, it is part of the decision on how a vehicle operates. How it operates isn't indicative of quality. And while you may understand that, 99% of the other people reading this thread do not. Even if Tesla's software were superior to Waymo's at this stage, they would not operate at level 4, because there's too much liability involved.
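(A quick back-of-the-envelope check of those numbers. The ~38 miles per vehicle per day figure is my assumption, not from the post; it's roughly what's needed to reproduce the crash counts above.)

```python
# Back-of-the-envelope check of the fleet crash-rate comparison above.
# Assumption (mine): each vehicle averages ~38 miles/day, which roughly
# reproduces the quoted figures for both fleet sizes.

MILES_PER_CRASH = 250_000       # assumed human-parity crash rate
MILES_PER_VEHICLE_PER_DAY = 38  # hypothetical average daily mileage

def crashes_per_year(fleet_size: int) -> float:
    fleet_miles_per_day = fleet_size * MILES_PER_VEHICLE_PER_DAY
    return fleet_miles_per_day / MILES_PER_CRASH * 365

waymo = crashes_per_year(1_000)       # ~55/year -> one every ~6.6 days
tesla = crashes_per_year(2_500_000)   # ~139,000/year -> one every ~4 minutes

print(f"Waymo-scale fleet: {waymo:,.0f} crashes/year "
      f"(one every {365 / waymo:.1f} days)")
print(f"Tesla-scale fleet: {tesla:,.0f} crashes/year "
      f"(one every {24 * 60 * 365 / tesla:.1f} minutes)")
```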

You saw this recently with Cruise's system: they were granted approval to operate as a level 4 system, and their system was defined as such, but behind the scenes they had 1.5 remote operators per vehicle who had to intervene once every 2-5 miles. So is that system superior to Tesla's, even though Tesla's is defined as level 2?


The people on this forum are not bureaucrats, which is who the standards were designed for. I have never argued Tesla's software was operating at anything other than level 2; I have just explained why that's the level they operate at.


Bolding just for emphasis in a long post, not to be aggressive.
Ag with kids
AG

Quote:

When you're judging how advanced/far along the system is, and defining one as level 2 because the operator is unwilling to accept liability, then you're liable to come away with a misimpression of how advanced the system actually is. Hardly anyone cares about the standard's definitions, at least not on this forum; they care about the capabilities of the system, which are not wholly encompassed by the standards.
This is a complete load of horse***** Stop using the word liability there. Just stop.

The system has a level of functionality. That level can be defined by the functions that are activated on the system when operating. The Tesla FSD is SAE Level 2.

And the Level TELLS you the capabilities, so YEAH, people care. Just because YOU obviously don't doesn't matter.

Quote:

That's the point I'm making. When you define the system as a level 2 system, you're functionally saying it's the equivalent of a lane-keeping/adaptive-cruise-control system in terms of its complexity, in most people's minds. It is not. What it is capable of and what level it operates at are not the same thing.
JFC...you're just making **** up now. Now we're mind reading to determine level of functionality...awesome.

Quote:

Waymo can operate at level 4 because it is more advanced in the circumstances it operates in, but also because the scale at which they operate (~1000 vehicles) affords them the opportunity to operate at a riskier level than a company operating a fleet of 2,500,000.
Waymo operates at Level 4 because their vehicles have SAE Level 4 functionality. Period. Full stop. The scale doesn't matter to the Level at which they're operating. They just have a large scale operation that is functioning at SAE Level 4.
Quote:


If both fleets have the same level of functionality and both get in accidents every 250,000 miles, like a human driver, the Waymo fleet with its 1000 vehicles would experience a crash every 6.5 days, or 56 times per year. With 2,500,000 vehicles, Tesla would experience 140,000 crashes every year, which is a crash every 4 minutes. So while liability isn't part of the standard, it is part of the decision on how a vehicle operates. How it operates isn't indicative of quality. And while you may understand that, 99% of the other people reading this thread do not. Even if Tesla's software were superior to Waymo's at this stage, they would not operate at level 4, because there's too much liability involved.
This has NOTHING to do with "quality". Stop talking about the levels at which they operate if you want to talk about things that don't have to do with the levels at which they operate.

And stop conflating functionality of the levels with "liability". If you're trying to explain to the "99%" then you're giving them an incorrect explanation.

If they're worried about the "liability" of operating at Level 3 it is 100% because they know that the FSD does not have the FUNCTIONALITY to operate at Level 3 currently.

Quote:

You saw this recently with Cruise's system: they were granted approval to operate as a level 4 system, and their system was defined as such, but behind the scenes they had 1.5 remote operators per vehicle who had to intervene once every 2-5 miles. So is that system superior to Tesla's, even though Tesla's is defined as level 2?
Did you even look at the definition of Level 4 and see that that was part of the functionality that defines Level 4? If they were operating at Level 4, they had additional functionality beyond Level 2.

And define "superior"...You have 2 systems operating at different levels of autonomy. You're trying to compare apples and oranges.

Quote:

The people on this forum are not bureaucrats, which is who the standards were designed for. I have never argued Tesla's software was operating at anything other than level 2; I have just explained why that's the level they operate at.
The standards were designed for the manufacturers, operators, AND regulators. Using them incorrectly on here is presenting a false picture.

BTW...I just assumed the bold was for emphasis...no worries.
TexasRebel
AG
You are absolutely underestimating what even the dumbest surviving human's brain is doing. Even more so while driving.

You are also confusing edge cases and mistakes. Mistakes and failures cause edge cases for anyone nearby.

If your vehicle violently begins to pull to the left into oncoming traffic without any input, what do you plan to do?
If the wheel falls off of the car in front of you what's your plan?
If you see a wild animal run into the road, what do you do? What if it's a domestic animal? How do you know the difference?

I cannot list all of the edge cases you will see while driving. They are literally infinite. I'm just listing a few I've survived.


A computer cannot do something it is not programmed to do. FULL STOP.
hph6203
AG
What is the first item on that chart? "Who is driving?" That is liability. The levels do not wholly tell you functionality. The differentiation between levels 2 and 3 is more about LIABILITY than functionality. A lane-keeping system with adaptive cruise control that cannot change lanes can be defined as a level 3 system if the operator is willing to take liability.

A system can be defined as a level 2 system even if it can drive on the highway, change lanes, follow navigation instructions, exit the highway, stop at a stop light, turn right on red, turn left on a green across 3 lanes of traffic, navigate a roundabout, pull in your driveway, open your garage door, park in your garage, and shut your garage door, because the system, in order to avoid accepting liability, requires that the driver maintain attention during all of those operations. If the liability never shifts to the software, the system will never be anything but a level 2 system.
Premium
AG
My F150 is Level 7, I drive with human intelligence and take full responsibility.
hph6203
AG
There is no one writing code that says "avoid deer". The system experiences 100,000 instances of deer/goats/elk/people/dogs and builds a probabilistic framework of what to do when something like that appears in its path, one that allows it to not run into a giraffe, lion, or hippo that appears in its path even though the system has never seen a giraffe, lion, or hippo before.
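(If it helps, here's a toy sketch of that idea. Purely illustrative; the names and thresholds are made up, and this is not Tesla's actual architecture. The point is just that an unknown animal still lands in the generic "obstacle in my path" bucket.)

```python
# Toy illustration (not Tesla's actual code): perception doesn't need a label
# for every animal. Anything solid in the drive path gets the generic obstacle
# response; the class label only refines it.

from dataclasses import dataclass

@dataclass
class Detection:
    label: str         # best-guess class from the vision model
    confidence: float  # 0..1
    in_path: bool      # does its predicted trajectory intersect ours?

def plan_response(det: Detection) -> str:
    if not det.in_path:
        return "continue"
    # Low confidence = unknown object (giraffe, hippo...). It is never ignored;
    # it falls back to the generic obstacle behavior learned from deer/dogs/etc.
    if det.confidence < 0.5:
        return "brake_and_avoid"        # generic obstacle response
    if det.label in {"plastic_bag", "leaves"}:
        return "continue"               # soft debris: safe to drive through
    return "brake_and_avoid"

print(plan_response(Detection("giraffe?", 0.21, in_path=True)))  # brake_and_avoid
```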

Here is a video of a Tesla avoiding a tire and then stopping to avoid hitting a vehicle. This is the software driving. It swerves to avoid the tire, hits the brakes to avoid the collision, and throws on its emergency blinkers.



Here is another video of the vehicle avoiding a tire.

Ag with kids
AG


Quote:

What is the first item on that chart? "Who is driving?" That is liability. The levels do not wholly tell you functionality. The differentiation between levels 2 and 3 is more about LIABILITY than functionality. A lane-keeping system with adaptive cruise control that cannot change lanes can be defined as a level 3 system if the operator is willing to take liability.
No. Just NO. SAE does NOT make ANY judgement as to "liability". They are discussing the functionality that a vehicle needs to have to achieve each level. THAT is it.

You are just 100% wrong on this.

The "who is driving" refers to WHAT that will be involved in the operation of the vehicle. As the levels of autonomy increase, the functionality changes from ALL human interaction (Level 0) to NO human interaction (Level 5). In between, it's part human, part automation.

There is no liability AT ALL in that...

STOP saying that liability has anything to do with the level.

If "adaptive cruise control that can change lanes" is a required functionality to be at Level 3, then if their vehicle has "adaptive cruise control that cannot change lanes" then it DOES NOT MEET THE STANDARD for Level 3. If they operate it without the functionality but CLAIM it operates at Level 3, then the DO have liability - for FRAUD.

Quote:

A system can be defined as a level 2 system even if it can drive on the highway, change lanes, follow navigation instructions, exit the highway, stop at a stop light, turn right on red, turn left on a green across 3 lanes of traffic, navigate a roundabout, pull in your driveway, open your garage door, park in your garage, and shut your garage door, because the system, in order to avoid accepting liability, requires that the driver maintain attention during all of those operations. If the liability never shifts to the software, the system will never be anything but a level 2 system.

No...a system that "requires that the driver maintain attention during all of those operations" (I'm assuming you mean because they would potentially take some level of control of the vehicle; otherwise, why would they need to maintain attention?) is Level 2 or 3 BECAUSE it requires human interaction. IT IS RIGHT THERE IN THE STANDARD.

If the operator has a vehicle that is capable of operating at Level 5 (meaning all the functionality IS there), but chooses to require it to operate with human interaction, it is operating at Level 2 or 3. NOT at Level 5.

And it doesn't matter WHY they make the choice to operate with human interaction. It COULD BE they want to avoid a level of liability (although, if it has the functionality to operate at Level 5, then there should not be any). It COULD BE that the vehicle is blue and they're superstitious about the color blue.

But, they are operating at Level 2 or 3 if there is human interaction.

Quote:

Taxonomy and Definitions for Terms Related to Driving Automation Systems for On-Road Motor Vehicles J3016_202104

This document describes [motor] vehicle driving automation systems that perform part or all of the dynamic driving task (DDT) on a sustained basis. It provides a taxonomy with detailed definitions for six levels of driving automation, ranging from no driving automation (Level 0) to full driving automation (Level 5), in the context of [motor] vehicles (hereafter also referred to as "vehicle" or "vehicles") and their operation on roadways:
Level 0: No Driving Automation
Level 1: Driver Assistance
Level 2: Partial Driving Automation
Level 3: Conditional Driving Automation
Level 4: High Driving Automation
Level 5: Full Driving Automation

Please show me in those definitions of the Levels where liability is included.

    As one of the panelists said last week at the ASTM Symposium on Autonomy in Aviation, "words have meanings". And you're using them WRONG.
    Ag with kids
    AG
    Premium said:

    My F150 is Level 7, I drive with human intelligence and take full responsibility.
    Counting is hard on the thread apparently...
    HalifaxAg
    AG
    pdc093 said:



    TexasRebel
    AG
    That's not software figuring out what's on the road. That's the oh_****().noIdeaWhatToDo() running.

Deer, dogs, squirrels, and cattle should absolutely be treated differently on the road. Image recognition will only pick out known objects, or else it will attempt to classify an unknown object as something it has seen before. If a computer doesn't know what a giraffe is, it will either incorrectly classify it or default. I want my car to do neither of these.

    Heck. Even the automated trams at the airport default to "don't move" when their controlled environment falls outside of design parameters. As a human, you GAS about the semi bearing down on you when the puppy runs in front of you. The car doesn't. It's programmed to "STOP: Dog"
    Ag with kids
    AG
    TexasRebel said:

    That's not software figuring out what's on the road. That's the oh_****().noIdeaWhatToDo() running.

Deer, dogs, squirrels, and cattle should absolutely be treated differently on the road. Image recognition will only pick out known objects, or else it will attempt to classify an unknown object as something it has seen before. If a computer doesn't know what a giraffe is, it will either incorrectly classify it or default. I want my car to do neither of these.

    Heck. Even the automated trams at the airport default to "don't move" when their controlled environment falls outside of design parameters. As a human, you GAS about the semi bearing down on you when the puppy runs in front of you. The car doesn't. It's programmed to "STOP: Dog"

    Yeah. The Emergency Procedures need to be extremely robust.
    hph6203
    AG
Respectfully, you are wrong. It is not a degree of human interaction. After level 2 is achieved it is no longer a matter of capability; it is a degree of human responsibility. The abdication of responsibility by the driver is achieved through advancements in capability, but the levels are about responsibility, not amount of interaction. A vehicle that can drive itself 1,000,000 miles without human interaction can still be considered level 2 if the system requires the human to be attentive at all times. They do this to remove liability from the software producer.

This is FSD Beta on a Tesla Model S. It is a 30-minute video. The driver touches the wheel twice in the entire drive: once because they got nervous during a turn (it didn't change the way the vehicle drives), and once to reduce speed going through a toll booth. Mercedes' level 3 system cannot do these things.



This is a sped-up version (4x speed). I suggest you mute it.



This is a sped-up video of a drive from San Francisco to Los Angeles. That's 380 miles. Zero interventions. The times the driver touches the wheel, they're just satisfying the hands-on-wheel requirement by either putting slight pressure on the wheel or toggling the scroll wheel on the steering wheel.



Level 2 means the driver is always responsible for the operation of the vehicle. They are responsible for correcting any errors by the software. They are responsible for avoiding accidents. If the driver is always responsible for the operation of the vehicle, the system cannot be considered level 3 or higher, no matter how many different scenarios it functions in, no matter how many different maneuvers it can do.

Level 3 means the driver is sometimes responsible for the operation of the vehicle. Sometimes the responsibility for the operation of the vehicle shifts to the vehicle, responsibility meaning liability for errors. See those boxes in the middle that say "when the feature requests" and "you must drive"? That's the point where liability shifts. What makes a level 3 system a level 3 system is that the responsibility can vacillate between the driver and the vehicle depending on the conditions in which it is driving. At level 2 the driver is always responsible; at level 3 the driver shifts between being responsible and not responsible; at level 4 the driver is never responsible. The instances where the driver is not responsible in a level 3 system are conditional; that's what the green box that starts with "These features can drive the vehicle under limited conditions" means. Sometimes responsible, sometimes not.

    That is not a matter of level of interaction. That is not how the standards work.

Tesla FSD can (and when I say can, I mean does, in case you're misunderstanding the functionality point above) operate on a freeway, driving 70 mph, whether there's traffic or not. It can change lanes. It can change its speed to follow the flow of traffic. It can follow navigation. It can exit the highway. It can make left- and right-hand turns. It can stop at stop signs. Despite all of that functionality and lack of human interaction, it is a level 2 system, because in all cases the driver has to maintain eyes on the road and hands on the wheel, even if they never make any input that changes the way the software drives.


Mercedes' level 3 system can be operated on the freeway, when there is traffic and the vehicle speed is below 40 mph. It can adapt its speed, stay in its lane, and change lanes. It has fewer capabilities than FSD, so what makes it a level 3 system? In those nested-if scenarios the driver can take their eyes off the road and cease their attention to the operation of the vehicle, and the responsibility and liability for the operation of the vehicle shift to the software. If the traffic clears up and the flow of traffic increases beyond 40 mph, the driver is notified that they must retake control, put their eyes back on the road and hands back on the wheel. The liability shifts back and forth between the driver and the vehicle as the speed crosses 40 mph.


    Level 0 no automation
    -Change in Capability-
    Level 1 adaptive cruise control OR lane keeping
    -Change in capability-
    Level 2 adaptive cruise control AND lane keeping, driver maintains responsibility
    -Change in responsibility-
    Level 3 the driver in certain circumstances no longer maintains responsibility for the operation of the vehicle, in certain circumstances they do. What's that? Liability.
    -Change in responsibility-
    Level 4 the driver never operates the vehicle, but the vehicle only operates under certain conditional circumstances, like weather conditions or geographic location
    -Change in capability-
    Level 5 the vehicle is not restricted by weather or location


It is not a degree of human interaction. It is a degree of capability from 0 to 2, a degree of responsibility from 2 to 3 (the driver is always responsible for monitoring the vehicle at level 2; at level 3 the driver is sometimes not responsible for monitoring it), a degree of responsibility from 3 to 4, and a degree of capability from 4 to 5.
    TexasRebel
    AG
    Ag with kids
    AG


    Quote:

    Respectfully. You are wrong.
No. I am correct in how the standards work. I'm sorry that it's proving so difficult for you to understand.

And the funniest thing is that TESLA says they're still operating at Level 2, so you're disagreeing with THEM, too.

    I get that you love Tesla. That's cool. But that doesn't mean that you really understand how the different levels of automation work.

    Quote:

This is FSD Beta on a Tesla Model S. It is a 30-minute video. The driver touches the wheel twice in the entire drive: once because they got nervous during a turn (it didn't change the way the vehicle drives), and once to reduce speed going through a toll booth. Mercedes' level 3 system cannot do these things.



This is a sped-up version (4x speed). I suggest you mute it.
You seem to think that if a human DOESN'T touch the wheel, that means the Level is higher than it is. But the FSD software REQUIRES that a human BE ABLE TO interact with the vehicle, not that they HAVE TO. If there's ANY requirement that a human be in the loop or over the loop, it is Level 2 or Level 3.

    Quote:


Level 2 means the driver is always responsible for the operation of the vehicle. They are responsible for correcting any errors by the software. They are responsible for avoiding accidents. If the driver is always responsible for the operation of the vehicle, the system cannot be considered level 3 or higher, no matter how many different scenarios it functions in, no matter how many different maneuvers it can do.
    This is CLOSER...

    Level 2 requires the human in the loop to SUPERVISE and potentially act during normal operation. Level 3 only requires that a human in the loop or over the loop act in non-normal operation.

    However, responsible is only used in the technical sense...

    BTW, your double negative hurt my head.

    Quote:

Level 3 means the driver is sometimes responsible for the operation of the vehicle. Sometimes the responsibility for the operation of the vehicle shifts to the vehicle, responsibility meaning liability for errors. See those boxes in the middle that say "when the feature requests" and "you must drive"? That's the point where liability shifts. What makes a level 3 system a level 3 system is that the responsibility can vacillate between the driver and the vehicle depending on the conditions in which it is driving. At level 2 the driver is always responsible; at level 3 the driver shifts between being responsible and not responsible; at level 4 the driver is never responsible. The instances where the driver is not responsible in a level 3 system are conditional; that's what the green box that starts with "These features can drive the vehicle under limited conditions" means. Sometimes responsible, sometimes not.
    Stop using liability. Seriously. It is just WRONG in this context.

    The more you continue on that, the less credibility you have on this subject.


    Quote:

    That is not a matter of level of interaction. That is not how the standards work.
Sigh...yes, it is a matter of the level of interaction:

Level 0: No Driving Automation <- LEVEL OF INTERACTION = 100%
Level 1: Driver Assistance <- LEVEL OF INTERACTION < Level 0
Level 2: Partial Driving Automation <- LEVEL OF INTERACTION < Level 1
Level 3: Conditional Driving Automation <- LEVEL OF INTERACTION < Level 2
Level 4: High Driving Automation <- LEVEL OF INTERACTION < Level 3
Level 5: Full Driving Automation <- LEVEL OF INTERACTION = 0%
    Quote:

Tesla FSD can (and when I say can, I mean does, in case you're misunderstanding the functionality point above) operate on a freeway, driving 70 mph, whether there's traffic or not. It can change lanes. It can change its speed to follow the flow of traffic. It can follow navigation. It can exit the highway. It can make left- and right-hand turns. It can stop at stop signs. Despite all of that functionality and lack of human interaction, it is a level 2 system, because in all cases the driver has to maintain eyes on the road and hands on the wheel, even if they never make any input that changes the way the software drives.
    a) I'm not missing the functionality point. I explained it to you earlier.
b) You FINALLY GOT IT - there is a requirement that there be the potential for human interaction - human in the loop. THAT is what keeps it at Level 2.

    Quote:

Mercedes' level 3 system can be operated on the freeway, when there is traffic and the vehicle speed is below 40 mph. It can adapt its speed, stay in its lane, and change lanes. It has fewer capabilities than FSD, so what makes it a level 3 system? In those nested-if scenarios the driver can take their eyes off the road and cease their attention to the operation of the vehicle, and the responsibility and liability for the operation of the vehicle shift to the software. If the traffic clears up and the flow of traffic increases beyond 40 mph, the driver is notified that they must retake control, put their eyes back on the road and hands back on the wheel. The liability shifts back and forth between the driver and the vehicle as the speed crosses 40 mph.
This is a lot closer, except you're still stuck on "liability", which is not part of the requirement for Level 3. They operate below 40 mph because that is their agreement in order to operate legally. It doesn't change the definition of Level 3.

    And there is no "LIABILITY" in that requirement.



    Quote:

    Level 0 no automation
    -Change in Capability-
    Level 1 adaptive cruise control OR lane keeping
    -Change in capability-
    Level 2 adaptive cruise control AND lane keeping, driver maintains responsibility
    -Change in responsibility-
    Level 3 the driver in certain circumstances no longer maintains responsibility for the operation of the vehicle, in certain circumstances they do. What's that? Liability.
    -Change in responsibility-
    Level 4 the driver never operates the vehicle, but the vehicle only operates under certain conditional circumstances, like weather conditions or geographic location
    -Change in capability-
    Level 5 the vehicle is not restricted by weather or location
Where are you getting those Level descriptions? You can't just write your own and then claim them as proof.

    Quote:

It is not a degree of human interaction. It is a degree of capability from 0 to 2, a degree of responsibility from 2 to 3 (the driver is always responsible for monitoring the vehicle at level 2; at level 3 the driver is sometimes not responsible for monitoring it), a degree of responsibility from 3 to 4, and a degree of capability from 4 to 5.
No...it is just decreasing levels of the requirement for a human in the loop. Level 0 requires 100% human in the loop. Level 5 requires 0%. There is no alternating between "responsibility" and "capability"...

    hph6203
    AG
I have never said they aren't operating at a level 2. I have explained why they maintain the requirement that the driver remain attentive, in other words why they operate at a level 2. The standard does not explicitly state that liability shifts, but the act of saying the driver no longer has to pay attention implicitly shifts the liability to the software developer, so when you operate at level 3 you are implicitly accepting the liability when the system is operating within its conditional parameters.

It's why Mercedes, when they announced their level 3 system, accepted liability for the system when it's active: it is implicit in the standard even if it is not explicitly stated, because the courts are assuredly going to say the liability shifted when the driver was told they no longer needed to pay attention. It's why they limit the speed and the scenarios in which they will allow the driver to remove their attention: they reduce the risk of major injury at 40 miles an hour, and the vehicle gets more cues from the drivers around it while in traffic. They adopt the Level 3 moniker not because they have an advanced driver-assist system (compared to the leaders in the field), but because it allows them to portray themselves that way to an average person, even if the system is extremely limited in application.

    hph6203 said:

From a regulatory perspective it's a level 2 system, but from a functionality standpoint I'd say it reaches level 3 while driving on highways. They just don't define it as that because they don't want to deal with the regulations. You could get on the highway in Houston and drive to Phoenix while barely, if ever, touching the controls.

    Their roadmap is to operate at level 2 until they broadly achieve level 4 or 5.


Instead of "reaches", I should have said "could operate as".

And when I say it exceeds level 2, what I mean is that it exceeds the minimum requirements of level 2. In other words, it can do a lot more than lane keeping/adaptive cruise control. Not that it is a level 3, 4, or 5 system.


And people absolutely use the levels as stand-ins for quality/capability. When most people see that Mercedes has a level 3 system and Tesla has a level 2 system, they assume that means Mercedes' system is better than Tesla's. That's the point of clarification I'm making. Levels are not indicative of quality or capability.
    Old Sarge
    AG
    The answer is, and always will/should be, Internal Combustion Gasoline powered vehicles until the EV takes over because the consumer WANTS to invest their money in them.

Not saying the EV has to fail. Let the market rule the day. When the EV has the capabilities to exceed the freedom and long-range ability of the ICE, then let the consumers choose it.

    But that is not how .gov wants it, because they want CONTROL.

    **** .gov.
    "Green" is the new RED.
    TexasRebel
    AG
    Btw, I rented a car with radar assisted cruise control last weekend.

    It was awful.

    Car pulls out in front of you, but moving faster…. SLAM on the brakes.

    Trying to pass responsibly by not clogging the fast lane until you need to move over and get around… nope. There's a car up there. That's your new speed.

    Hell. A simple throttle stop is smarter than that mess. It's smart enough to not try to drive for me.
    TexasAggie_02
    AG
    hph6203 said:

There is no one writing code that says "avoid deer". The system experiences 100,000 instances of deer/goats/elk/people/dogs and builds a probabilistic framework of what to do when something like that appears in its path, one that allows it to not run into a giraffe, lion, or hippo that appears in its path even though the system has never seen a giraffe, lion, or hippo before.


    Ag with kids
    AG

    Quote:

    I have never said they aren't operating at a level 2.

    Well...you've certainly IMPLIED it...

    Quote:

    but from a functionality standpoint I'd say it reaches level 3 while driving on highways
    Quote:

    but FSD Beta (different than autopilot) is more like a SAE level 3 system

    Quote:

I have explained why they maintain the requirement that the driver remain attentive, in other words why they operate at a level 2. The standard does not explicitly state that liability shifts, but the act of saying the driver no longer has to pay attention implicitly shifts the liability to the software developer, so when you operate at level 3 you are implicitly accepting the liability when the system is operating within its conditional parameters.
    DAMMIT! Stop putting liability into the standard. It IS NOT THERE. The standard is liability-agnostic. It ONLY defines the functionality that is available in each level. THAT IS IT.

    Quote:

It's why Mercedes, when they announced their level 3 system, accepted liability for the system when it's active: it is implicit in the standard even if it is not explicitly stated, because the courts are assuredly going to say the liability shifted when the driver was told they no longer needed to pay attention.
    Mercedes is operating at Level 3 because Mercedes has the functionality activated to perform Level 3 functionality.

    BTW, Texas DOES have some regulations in place for AVs and the "owner of the automated driving system is considered the operator of the automated motor vehicle solely for the purpose of assessing compliance with applicable traffic or motor vehicle laws, regardless of whether the person is physically present in the vehicle while the vehicle is operating".

Note that there is no "shifting of liability" anywhere between Level 1 and Level 5. If the automated system is engaged (regardless of the level), the owner of the automated driving system is responsible (or liable, if you like that term).

    Quote:

It's why they limit the speed and the scenarios in which they will allow the driver to remove their attention: they reduce the risk of major injury at 40 miles an hour, and the vehicle gets more cues from the drivers around it while in traffic. They adopt the Level 3 moniker not because they have an advanced driver-assist system (compared to the leaders in the field), but because it allows them to portray themselves that way to an average person, even if the system is extremely limited in application.
    The reason they operate under 40 mph is because that was required by CA when they issued them a permit to operate.

    Quote:

    The California Department of Motor Vehicles (DMV) today issued an autonomous vehicle deployment permit to Mercedes-Benz USA, LLC, allowing the company to offer its DRIVE PILOT automated driving system on designated California highways under certain conditions without the active control of a human driver. Mercedes-Benz is the fourth company to receive an autonomous vehicle deployment permit in California and the first authorized to sell or lease vehicles with an automated driving system to the public.

    Based on the SAE International levels of driving automation, an SAE Level 3 system actively performs driving tasks without the active control of a human driver under certain conditions, though the driver must remain behind the wheel to take over when prompted. The Level 3 Mercedes-Benz DRIVE PILOT system can only operate on highways during daylight at speeds not exceeding 40 miles per hour. This permit excludes operation on city or county streets, in construction zones, during heavy rain or heavy fog, on flooded roads and during weather conditions that are determined to impact performance of DRIVE PILOT.
Their permit only allows them to operate in those conditions. I guess they WOULD have some liability for operating above 40 mph, but that's because they broke the conditions of their permit, not because some nebulous liability clause in the SAE Level 3 definition kicked in.

    Quote:

And when I say it exceeds level 2, what I mean is that it exceeds the minimum requirements of level 2. In other words, it can do a lot more than lane keeping/adaptive cruise control. Not that it is a level 3, 4, or 5 system.
    That's a very disingenuous way to say that. They do not "exceed level 2". They operate within the parameters of Level 2 and do not have the operating functionality of SAE Level 3 engaged.

    Quote:

And people absolutely use the levels as stand-ins for quality/capability. When most people see that Mercedes has a level 3 system and Tesla has a level 2 system, they assume that means Mercedes' system is better than Tesla's. That's the point of clarification I'm making. Levels are not indicative of quality or capability.
    Uninformed people might do that.

    Levels are 100% indicative of capability. You are correct that they are not indicative of quality - that's not part of the standard. Although, having low quality would mean that you're probably not actually operating at that level technically.

    The Mercedes isn't necessarily "better" but it does have more functionality, though.

    BTW, FSD Beta is only different than Autopilot in the sense that iOS 17.2 Beta is different than iOS 16. It's an evolution of the technology with additional functionality (and probably additional sensors)...but it's not like it's some completely different product...

    ETA: Removed some snark.
    hph6203
    AG
    How are you defining "functionality?" It's possible your definition and mine are not the same thing. That is the crux of this disagreement.

You seem to be under the impression that the limitations are imposed by California, rather than being self-imposed restrictions carried over from the application for approval. The limitations are designed to cap the potential costs of the liability Mercedes has assumed in the event of an accident. They are nearly identical to the restrictions of the permits they received in Germany prior to their approval in California, with the lone deviation being the change from 60 km/h to 40 mph.

    Drive Pilot operates under the following conditions:

    -Clear lane markings
    -On a pre-mapped highway
    -Daytime
    -In moderate to heavy traffic
    -Under 40 mph
    -There is no construction
    -No rain or snow
-The driver is visible to the driver monitoring camera

Under those conditions the driver is allowed to take their eyes off the road, and the system will adjust its speed to follow traffic and stay in its lane. That's it. That is all it does. It does not change lanes; it doesn't follow navigation if a lane change is required. It is an autonomous traffic-jam driver. What qualifies it as a Level 3 system is that the driver can take their eyes off the road and the vehicle assumes all responsibility for the operations of driving, with no driver oversight. It has nothing to do with the complexity (aka capabilities, aka functionality) of the operations it performs, beyond the fact that the driver no longer has to pay attention.

That is where the liability aspect comes in. At level 3 the driver is no longer responsible for correcting the system when it is active, or for paying attention to the road. The system has to notify the driver within a reasonable amount of time that they have to resume control (defined as about 10 seconds prior to the responsibility shift).
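(Sketched as a toy state machine, since that's essentially what the standard describes. This is my illustration, not Mercedes' or anyone's actual logic; the 10-second window is the figure mentioned above.)

```python
# Illustrative Level 3 handover logic (my sketch, not a vendor implementation).
# While conditions hold, the system is responsible; when they stop holding,
# a takeover request starts a grace timer before responsibility shifts back.

TAKEOVER_GRACE_S = 10.0  # the "reasonable time" window described above

class Level3Controller:
    def __init__(self):
        self.state = "DRIVER_RESPONSIBLE"
        self.timer = 0.0

    def tick(self, dt: float, conditions_ok: bool, driver_took_over: bool):
        if self.state == "DRIVER_RESPONSIBLE" and conditions_ok:
            self.state = "SYSTEM_RESPONSIBLE"   # driver may look away
        elif self.state == "SYSTEM_RESPONSIBLE" and not conditions_ok:
            self.state, self.timer = "TAKEOVER_REQUESTED", 0.0
        elif self.state == "TAKEOVER_REQUESTED":
            self.timer += dt
            if driver_took_over:
                self.state = "DRIVER_RESPONSIBLE"
            elif self.timer >= TAKEOVER_GRACE_S:
                self.state = "MINIMAL_RISK_MANEUVER"  # e.g. slow and stop
```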

If you think that lawmakers, insurance providers, and the public are going to let a driving-system producer tell a driver they are no longer responsible for monitoring the road without assuming the liability for an accident, then you are high. And even if in that fictional future they decide the software developer doesn't retain liability for accidents (they won't), that has not yet been decided, and operating as a level 3 system in that murky legal framework exposes the developer to risk. The common sense of that liability absorption is why Mercedes accepts liability in the event of an accident with their Drive Pilot system: the inevitability is that they would be held liable anyway, and fighting it is bad for brand reputation.

By contrast, FSD can complete all the operations of driving. It can change lanes, adapt speed, stay in its lane, follow navigation instructions, etc., without any driver input. It can do that on any road or in parking lots. Highway, city street, dirt road, lane lines or no lane lines. Doesn't matter. It can be activated anywhere. What makes it level 2 is that while it is completing those operations without driver input, the driver is responsible for monitoring its actions and correcting it in the event of an error. I am not saying it never requires correction, but rather that all actions necessary for driving are built into the system, and every drive has a non-zero chance of having zero interventions.

    I do not think most people would define Drive Pilot as having more functionality or capability than FSD, because doing so puts too much weight on who is responsible for intervening when defining functionality or capability. It is a level 3 system. The only capability or functionality it possesses that FSD does not is that, in that extremely narrow set of circumstances, the driver is allowed to take their eyes off the road.


    -A level 2 system can be as simple as a vehicle that can adjust its speed and stay in its lane.
    -A level 2 system can be as complex as a vehicle that can do all the operations of driving with no driver input. It can be of such high quality that an intervention is only required once in a million miles. As long as the driver is required to keep their eyes on the road and take over operation in the event of an error the system is classified as level 2.
    -A level 3 system can be as simple as a vehicle that can change its speed and stay in its lane, with the caveats that it do so in good weather, with good lane lines, in heavy traffic, at speeds under 10 mph, on pre-mapped routes, as long as under those circumstances the driver is allowed to stop paying attention.
    -A level 4 system can be as simple as a van that drives a 2 mile circuit on a one way road in a retirement community, making the same periodic stops at the same locations, making the same turns, and never exceeding 15 mph. Never interacting with a stop light, never interacting with a yield sign, never completing a left hand turn across traffic. The Derek Zoolander of autonomous vehicles. The characteristic that makes it a level 4 system is that the passengers never have the responsibility to take over operation of the vehicle.
    -A level 5 system operates without conditions in all circumstances with no expectation of driver intervention.


    The levels are not indicative of what tasks the system can complete without the human driver providing input. They are indicative of who is responsible for the operation of the vehicle while the software completes those tasks. At level 2 the human driver is always responsible; at level 3 the driver is sometimes responsible; at level 4 the "human driver" (if they can even be considered that anymore) is never responsible.
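    One toy way to encode that point in code (my framing, not text from any standard):

    ```python
    # Toy encoding of the point above: the levels describe who is
    # responsible while the software drives, not what the software can do.
    from enum import Enum

    class Responsibility(Enum):
        DRIVER_ALWAYS = "driver always responsible, eyes on road"
        DRIVER_SOMETIMES = "driver responsible only after a handover warning"
        DRIVER_NEVER = "system responsible, at least within its domain"

    RESPONSIBILITY_BY_LEVEL = {
        2: Responsibility.DRIVER_ALWAYS,     # however capable the system is
        3: Responsibility.DRIVER_SOMETIMES,  # may look away inside the ODD
        4: Responsibility.DRIVER_NEVER,      # within a limited domain
        5: Responsibility.DRIVER_NEVER,      # everywhere, all conditions
    }

    print(RESPONSIBILITY_BY_LEVEL[2].value)
    # driver always responsible, eyes on road
    ```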
    Medaggie
    How long do you want to ignore this user?
    I have driven the Mercedes' supposed Level 3. Highway only; it's essentially lane assist.

    I had Tesla FSD when it first came out. I had high hopes when it was first turned on via the beta version. It was quite a disappointment, and I rarely used it even when no one was around me on the highway. Phantom stops, odd lane changes.

    Fast forward 2 years, and FSD on the highway is close to, if not better than, the average driver. I drive on the highway 90% of the time with FSD on.

    City driving still needs some work, but it is much improved.

    Anyone who thinks that Mercedes' Level 3 is better than Tesla's Level 2 is either uninformed or just lying. That makes these autonomy level categories invalid.
    hph6203
    How long do you want to ignore this user?
    AG
    These are NHTSA's definitions of Level 2, Level 3, and Level 4 autonomy:

    Quote:

    Level 2 - Combined Function Automation: This level involves automation of at least two primary control functions (my emphasis; at least, not at most, meaning it can do more than this) designed to work in unison to relieve the driver of control of those functions. Vehicles at this level of automation can utilize shared authority when the driver cedes active primary control in certain limited driving situations. The driver is still responsible for monitoring the roadway and safe operation and is expected to be available for control at all times and on short notice. The system can relinquish control with no advance warning and the driver must be ready to control the vehicle safely. An example of combined functions enabling a Level 2 system is adaptive cruise control in combination with lane centering. The major distinction between level 1 and level 2 is that, at level 2 in the specific operating conditions for which the system is designed, an automated operating mode is enabled such that the driver is disengaged from physically operating the vehicle by having his or her hands off the steering wheel AND foot off pedal at the same time.

    Level 3 - Limited Self-Driving Automation: Vehicles at this level of automation enable the driver to cede full control of all safety-critical functions under certain traffic or environmental conditions and in those conditions to rely heavily on the vehicle to monitor for changes in those conditions requiring transition back to driver control. The driver is expected to be available for occasional control, but with sufficiently comfortable transition time. The vehicle is designed to ensure safe operation during the automated driving mode. An example would be an automated or self-driving car that can determine when the system is no longer able to support automation, such as from an oncoming construction area, and then signals to the driver to reengage in the driving task, providing the driver with an appropriate amount of transition time to safely regain manual control. The major distinction between level 2 and level 3 is that at level 3, the vehicle is designed so that the driver is not expected to constantly monitor the roadway while driving.

    Level 4 - Full Self-Driving Automation (Level 4): The vehicle is designed to perform all safety-critical driving functions and monitor roadway conditions for an entire trip. Such a design anticipates that the driver will provide destination or navigation input, but is not expected to be available for control at any time during the trip. This includes both occupied and unoccupied vehicles.


    Level 2 has no limit on the amount of functionality and capability (again, you may be using a different definition than I am, but I think most people would define that as "tasks the system can complete") that it can provide. It has no requirement for persistent driver input while it's active. The driver can take their feet off the pedals and hands off the wheel. The only things that make it level 2 are that the driver must maintain attention on the road and must be able to take control immediately. That attention can be enforced by (what most companies do) an in-cabin camera that monitors the driver's eyes to make sure they don't deviate from the road, or by what Tesla does: requiring that the driver periodically touch the wheel to indicate they are still actively aware of what the car is doing. Tesla does that because early models capable of FSD do not have an in-cabin driver monitoring camera; the vehicles that do have the in-cabin camera do both. The driver is always responsible for what the car does. The liability remains with the driver. That is both in the standard (based both on the fact that the driver must maintain attention and that control can be transitioned to the driver immediately) and by legal precedent.
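    A minimal sketch of those two attention-enforcement strategies, with made-up timeouts (nothing here is Tesla's or Mercedes' actual parameters):

    ```python
    # Sketch of the two Level 2 attention-enforcement strategies described
    # above: camera-based gaze monitoring and a steering-wheel-touch "nag."
    # Illustrative only; the timeouts are invented, not any vendor's values.

    GAZE_TIMEOUT_S = 3.0     # how long eyes may be off the road (camera cars)
    TORQUE_TIMEOUT_S = 30.0  # how often the wheel must be touched (nag cars)

    def attention_ok(now: float, last_gaze_on_road: float,
                     last_wheel_touch: float, has_cabin_camera: bool) -> bool:
        """Return False when the system should escalate (visual warning,
        then chime, then disengagement) because attention is unconfirmed."""
        if has_cabin_camera and (now - last_gaze_on_road) > GAZE_TIMEOUT_S:
            return False
        if (now - last_wheel_touch) > TORQUE_TIMEOUT_S:
            return False
        return True
    ```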

    What do you think happens during the period in which the driver is not expected to keep their eyes on the road during level 3 operation? An object is in the road, obstructed from view by other vehicles until the last minute, and the vehicle hits the object. Who is responsible for the repairs? Better yet, the vehicle swerves to avoid the object and hits another vehicle in the lane next to it. The vehicle didn't provide a transition period for the driver to take over. Who is liable for the repairs to the driver's vehicle and to the vehicle it hit? The answer to that question can be one of three things: the driver, the software provider, or "we don't know yet." Broadly, the answer is that we do not know. The answer for Mercedes is the software provider, which sets a prior standard of practice, which means that in the future it is not unlikely that any operator running at level 3 will have to assume liability for any damages.

    That is why Tesla operates at level 2. It is not about how capable the system is or how many functions it performs; those are not the defining differentiators between level 2 and level 3. It is about the risk of adopting liability for the actions of the system when operating at level 3, because at level 3 full control and attention must be able to be ceded to the system at certain times. And while liability is not explicitly laid out in the standards, it is a consideration for the operator when deciding what they require of the driver.

    Tesla does not want to allow the driver to cede full control to the system, because in the event they do, it is likely that Tesla becomes liable for damages. That's a much bigger issue for a company that has sold ~5 million vehicles with FSD capability than for a company that has sold fewer than fifty thousand vehicles capable of utilizing Drive Pilot. Scale matters. Not in the written standard, but in what level of responsibility the company is willing to absorb, because the costs can be exorbitant. A company with an equivalently capable (rate of errors) system can operate at level 3, 4 or 5 if the number of vehicles on the road operating at those levels is small relative to the funds available to pay for damages. The larger the fleet, the higher the standard of operation, and the lower the error rate, you have to achieve before letting the driver cede control.
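    A back-of-the-envelope illustration of why fleet size dominates that calculus (every number below is invented for the example):

    ```python
    # Back-of-the-envelope: expected annual liability cost scales linearly
    # with fleet size at a fixed error rate. All numbers are invented.
    FLEETS = {"Drive Pilot-scale": 50_000, "FSD-scale": 5_000_000}
    MILES_PER_VEHICLE_PER_YEAR = 10_000          # assumed automated miles
    AT_FAULT_INCIDENTS_PER_MILE = 1 / 1_000_000  # assumed error rate
    COST_PER_INCIDENT_USD = 50_000               # assumed average claim

    for name, vehicles in FLEETS.items():
        expected = (vehicles * MILES_PER_VEHICLE_PER_YEAR
                    * AT_FAULT_INCIDENTS_PER_MILE * COST_PER_INCIDENT_USD)
        print(f"{name}: ~${expected:,.0f}/year in expected claims")
    # Drive Pilot-scale: ~$25,000,000/year in expected claims
    # FSD-scale: ~$2,500,000,000/year in expected claims
    ```

    Same per-mile error rate, two orders of magnitude more exposure.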


    Cruise, as an example, operated at level 4 autonomy. Their system performs the same actions that FSD does, except Cruise's system operates in narrow geographic areas while FSD can be activated anywhere. They had remote operators that intervened every 2-5 miles, similar to or worse than the intervention rate of FSD, but because the person intervening was not a driver inside the vehicle, it was permitted to operate as a level 4 system. Because the person who intervenes for FSD is the driver inside the vehicle, required to be able to take over at any moment, it is considered a level 2 system. Comparing intervention rates between the two systems is a totally reasonable thing to do.

    Cruise comically not only had intervention rates comparable to Tesla's, but they also had 1.5 remote operators per vehicle rather than 1.
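    For a rough sense of what an intervention every 2-5 miles implies per vehicle, assuming an invented urban average speed (purely illustrative):

    ```python
    # Rough arithmetic on the remote-intervention cadence cited above.
    # The average speed is an assumption for illustration only.
    AVG_SPEED_MPH = 15  # assumed urban average speed

    for miles_between in (2, 5):
        minutes = miles_between / AVG_SPEED_MPH * 60
        print(f"~1 remote intervention every {minutes:.0f} min per vehicle")
    # ~1 remote intervention every 8 min per vehicle
    # ~1 remote intervention every 20 min per vehicle
    ```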
    Teslag
    How long do you want to ignore this user?
    AG
    Medaggie said:

    I have driven the Mercedes' supposed Level 3. Highway only; it's essentially lane assist.

    I had Tesla FSD when it first came out. I had high hopes when it was first turned on via the beta version. It was quite a disappointment, and I rarely used it even when no one was around me on the highway. Phantom stops, odd lane changes.

    Fast forward 2 years, and FSD on the highway is close to, if not better than, the average driver. I drive on the highway 90% of the time with FSD on.

    City driving still needs some work, but it is much improved.

    Anyone who thinks that Mercedes' Level 3 is better than Tesla's Level 2 is either uninformed or just lying. That makes these autonomy level categories invalid.

    Same. I can't imagine sitting in traffic now without Autopilot.
     