Saw My First Tesla Cyber Truck on the road today

19,770 Views | 267 Replies | Last: 11 mo ago by Stat Monitor Repairman
hph6203
Tesla is developing software for full self-driving. In its current stage you can turn it on and it will drive on highways and city streets. It works decently well, but you still have to pay attention and keep your hands on the wheel.

They're a couple of months from launching a new rewrite that, at least during Elon's demo, looked significantly more decisive. Based on the current pace of improvement, I wouldn't be surprised if the system were able to do most of people's daily driving before the end of the decade, with rare instances of correction, meaning that on most drives people provide zero input. During ~40 minutes of city driving on his livestream, Elon only had to intervene once.

Not sure if they're going to be able to have absolutely no one behind the wheel before the end of the decade, but Elon thinks they will.

Ag with kids
one safe place said:

hph6203 said:


-Unique look (the way it looks has grown on me), I don't particularly think there are many vehicles that are actually aesthetically appealing and I'll take interesting looking over bland.

Almost always, your first impression is correct. Others being "bland" and few being "aesthetically pleasing" are likely just a case of you rationalizing choosing the butt ugly look of the cyber truck. Even if you truly felt that way, there is nothing even close to being as awful looking as this truck. "Interesting looking" I don't think describes it.


It's breathtaking...
Ag with kids
hph6203 said:

Tesla is developing software for full self driving, in its current stage you can turn it on and it will drive on the highway/city streets. It works decently well, but you still have to pay attention and keep your hands on the wheel.

They're a couple of months from launching a new rewrite that at least during Elon's demo it looked significantly more decisive. Based upon the current pace of improvement I wouldn't be surprised if the system were able to do most of people's driving on a daily basis before the end of the decade with rare instances of correction. Meaning most drives people take they do zero input. During ~40 minutes of city driving Elon only had to intervene one time during his livestream.

Not sure if they're going to be able to have absolutely no one behind the wheel before the end of the decade, but Elon thinks they will.


Just remember that under most current laws and regulations, the driver is still at least partially responsible for any incidents.

So, if you're surfing TikTok and run over a pedestrian, you're going to have a big legal issue in front of you.

Rules may change in the future though.
TexasRebel
hph6203 said:


Not sure if they're going to be able to have absolutely no one behind the wheel before the end of the decade, but Elon thinks they will.


Elon is an idiot chasing a pipe dream. You cannot program the kind of problem solving that the human brain can do. Hell. You can't program the kind of problem solving a mouse brain can do. It's not just "Eat cheese; be happy".

Centuries ago we were weeks away from changing lead to gold. It's still so close.
Ag with kids
TexasRebel said:

hph6203 said:


Not sure if they're going to be able to have absolutely no one behind the wheel before the end of the decade, but Elon thinks they will.


Elon is an idiot chasing a pipe dream. You cannot program the kind of problem solving that the human brain can do. Hell. You can't program the kind of problem solving a mouse brain can do. It's not just "Eat cheese; be happy".

Centuries ago we were weeks away from changing lead to gold. It's still so close.
You CAN change lead to gold.

It's just that the nuclear transmutation necessary to accomplish it costs WAY more than any gold you'll get out of it...
TexasRebel
Still so close.
Premium
TexasRebel said:

hph6203 said:


Not sure if they're going to be able to have absolutely no one behind the wheel before the end of the decade, but Elon thinks they will.


Elon is an idiot chasing a pipe dream. You cannot program the kind of problem solving that the human brain can do. Hell. You can't program the kind of problem solving a mouse brain can do. It's not just "Eat cheese; be happy".

Centuries ago we were weeks away from changing lead to gold. It's still so close.


He's not programming it; he's feeding more hours of video into a machine-learning system that learns from video scenarios alone. It doesn't know green means go, it knows cars go because millions of other cars go when the light is green. They have more video learning hours than a human could ever dream of.

Finally, the car can see better than humans, with a full 360-degree view, and it never gets drowsy.

And it's already good and on its way to great. It's already safer than humans. Even if it's not fully self-driving now, while monitored you're much less likely to be in a wreck, and your chances of surviving increase.
TexasRebel
Only if a person disregards their own life so much that a programmed machine has better self-preservation skills than they do.

No matter how much data is thrown at the system as fast as possible, it will only extract what it is programmed to extract. It has exactly zero problem solving ability.
lead
If there were designated self-driving routes, then the tech is probably ready. But integrating with human drivers is going to involve a lot of hurdles. Humans can communicate with each other (think of a traffic cop waving you through).
Premium
TexasRebel said:

Only if a person disregards their own life so much that a programmed machine has better self-preservation skills than they do.

No matter how much data is thrown at the system as fast as possible, it will only extract what it is programmed to extract. It has exactly zero problem solving ability.


Statistically, driving with it (under human monitoring) is already safer than driving without it, by a wide margin. So whoever doesn't use it cares less for their life and others'.
BigRobSA
#YOLO
Ag with kids
Premium said:

TexasRebel said:

Only if a person disregards their own life so much that a programmed machine has better self-preservation skills than they do.

No matter how much data is thrown at the system as fast as possible, it will only extract what it is programmed to extract. It has exactly zero problem solving ability.


It's already statistically the safest way to drive with human monitoring than without it, by a wide margin. So whoever doesn't use it cares less for their life and others.
Tesla?

Or autonomous vehicles in general?
Premium
Tesla is what I'm talking about right now. No one is in the same stratosphere as Tesla on the tech.
GenericAggie
Premium said:

Tesla is what I'm talking about right now. No one is in the same stratosphere as Tesla on the tech.


100%.

He has a 15 year head start on the world.
Ag with kids
Premium said:

Tesla is what I'm talking about right now. No one is in the same stratosphere as Tesla on the tech.
According to Tesla, they're only operating at SAE Level 2, which isn't considered automated driving.


Quote:

Officially, Tesla describes Autopilot as "an SAE Level 2 driving automation system designed to support and assist the driver in performing the driving task," as cited by NHTSA.




They also have "a total of 17 fatalities and 736 crashes since 2019"...
hph6203
TexasRebel said:

hph6203 said:


Not sure if they're going to be able to have absolutely no one behind the wheel before the end of the decade, but Elon thinks they will.


Elon is an idiot chasing a pipe dream. You cannot program the kind of problem solving that the human brain can do. Hell. You can't program the kind of problem solving a mouse brain can do. It's not just "Eat cheese; be happy".

Centuries ago we were weeks away from changing lead to gold. It's still so close.
"A computer can never beat a human at chess, it requires creativity and a computer cannot be creative."

"Turns out chess doesn't require creativity and a computer kicks the absolute dog**** out of even the best human players."


"Go is really the game that proves human superiority as far as intellect. It requires a level of intuitiveness and will never be solved by computers."

"Turns out we were wrong and Alpha Go kicks the absolute dog **** out of human players. It has achieved Go supremacy by learning from the patterns of human players."

"Turns out watching human players play was a terrible strategy, because humans are nowhere near the peak of Go playing and just giving the basic rules to an AI system and having it play millions of games against itself led to a system, Alpha Zero, that beats Alpha Go 100-0."


When you are driving you are not problem solving; you are recognizing patterns. Some of those patterns are so consistent that they become unconscious responses. The rest is just noticing that some things, like a pedestrian standing next to a street, have a probabilistic chance of entering the street in front of you.

There is nothing super impressive about the brainpower needed to drive. It's why pretty dumb people are capable of doing it, and why a lot of accidents occur because people get so utterly bored with the experience that they get distracted and make catastrophic errors. Computers don't get bored.


FSD Beta is now in the AlphaZero stage of its development, in terms of strategy, and going through its training. The system wasn't taught what a stop sign, a stoplight, a yield sign, or a speed limit sign is; it just takes video after video of humans driving their cars and recognizes the patterns that exist.

There are autonomous vehicles on the road right now (Waymo) that are safer than human drivers; they just require hardware that makes them prohibitively expensive to manufacture at a large enough scale to replace a majority of human driving.
Ag with kids
hph6203 said:

TexasRebel said:

hph6203 said:


Not sure if they're going to be able to have absolutely no one behind the wheel before the end of the decade, but Elon thinks they will.


Elon is an idiot chasing a pipe dream. You cannot program the kind of problem solving that the human brain can do. Hell. You can't program the kind of problem solving a mouse brain can do. It's not just "Eat cheese; be happy".

Centuries ago we were weeks away from changing lead to gold. It's still so close.
"A computer can never beat a human at chess, it requires creativity and a computer cannot be creative."

"Turns out chess doesn't require creativity and a computer kicks the absolute dog**** out of even the best human players."


"Go is really the game that proves human superiority as far as intellect. It requires a level of intuitiveness and will never be solved by computers."

"Turns out we were wrong and Alpha Go kicks the absolute dog **** out of human players. It has achieved Go supremacy by learning from the patterns of human players."

"Turns out watching human players play was a terrible strategy, because humans are nowhere near the peak of Go playing and just giving the basic rules to an AI system and having it play millions of games against itself led to a system, Alpha Zero, that beats Alpha Go 100-0."


When you are driving you are not problem solving. You are recognizing patterns. Some of those patterns are so consistent during the experience that they become unconscious responses. The rest is just noticing that in some instances, like a pedestrian standing next to a street, has a probabilistic chance of entering the street in front of you.

There is nothing super impressive about the brainpower needed in order to drive. It's why pretty dumb people are capable of doing it and the reason why a lot of the accidents that occur are because people get so utterly bored with the experience that they get distracted and end up making catastrophic errors. Computers don't get bored.


FSD beta is now in the Alpha Zero stage of its development, in terms of strategy, and going through its training. The system wasn't taught what a stop sign is, what a stoplight is, what a yield sign is, or a speed limit sign is, it just uses video after video of humans driving their cars and recognizes the patterns that exist.

There are autonomous vehicles on the road right now (Waymo) that are safer than human drivers, they just require hardware that makes them prohibitively expensive to manufacture at a large enough scale to replace a majority of human driving.
I assume they show them lots of vehicles NOT stopping at stop signs and stoplights and ignoring yield signs, right?
Premium
Ag with kids said:

Premium said:

Tesla is what I'm talking about right now. No one is in the same stratosphere as Tesla on the tech.
According to Tesla, they're only operating at SAE Level 2 which isn't considered automated driving.


Quote:

Officially, Tesla describes Autopilot as "an SAE Level 2 driving automation system designed to support and assist the driver in performing the driving task," as cited by NHTSA.




They also have "a total of 17 fatalities and 736 crashes since 2019"...


Some don't understand how quickly things change. First, show me that it was Tesla tech that caused those deaths.

Second, they've already scrapped the old hand-coded approach to automated driving and moved to visual machine learning. It isn't even widely released yet from what I've read, but it's here.

They said you couldn't land and reuse rockets, yet here we are: 172 consecutive landing recoveries as of last night.
GenericAggie
I would like to see comparative data; fatalities/injuries vs other auto makers in totality.
hph6203
They're monitoring human drivers to see how they interact with the world and then fine-tuning based upon failure states. It starts broad, and then they narrow the focus to sculpt the system into operating appropriately. In that video they discuss the fact that 99.5% of drivers don't come to a full and complete stop at stop signs (rolling stops), so they query the fleet and show the system examples from the 0.5% of scenarios where vehicles do stop at stop signs, to get it to recognize that as the appropriate behavior.

They have 2.5 million-ish vehicles on the road in their primary development area (North America) and can pull up scenarios like a vehicle pulling a wobbly trailer, or a ladder flying off the back of a truck, and show the system how to respond to those scenarios through actual video and through simulation of those events. They also have ~500,000 beta testers who accumulate around 1,000,000 miles/day, and they get fed instances of disengagements from the fleet so they understand where the system is not performing as it should.

From a regulation perspective they operate at SAE Level 2, which allows them to operate at that large a scale because the liability remains with the driver rather than the system. In practical terms, Autopilot was never designed to be more than an SAE Level 2 system, because it is restricted to highways; FSD Beta (different from Autopilot) is more like an SAE Level 3 system, but operates as a driver-assist system. As it improves, they can extend the duration between interventions until there's an intervention only every 10 or 100,000 miles (whatever regulators expect), and they can then deploy a Level 4 or 5 system without ever officially going through the Level 3 stage.

It has been a pattern of two steps forward, one step back for a while. They're continually improving the system; it's just a matter of how long it takes to go from 5 miles between interventions to 10, to 100, to 100,000 (the actual time between interventions is unknown, but you can watch videos of people going long periods without intervening on YouTube).


Tesla has 3 different driver-assistance tiers.

Autopilot: comes with every vehicle; basically just adaptive cruise control and lane keeping.

Enhanced Autopilot ($6,000): adds lane changes (i.e., it will navigate around slow-moving traffic) and navigation, so it will take exits to follow your plotted course on highways.

FSD Beta ($12,000): adds driving on city streets. You can get in your car, back out of your driveway, and activate the system, and it will stop at stop signs and stoplights, make left and right turns, turn right on red, turn left across traffic, etc. The driver still maintains liability for the actions of the vehicle, in the same way a person using lane keeping/adaptive cruise control is responsible for the vehicle.


Tesla is adding about a million vehicles to their fleet per year right now, with that number growing 30% every year (previously 50%, but they've mostly saturated current models and won't add another mass-adoption vehicle until 2025/2026-ish). So they'll go from 2.5 million vehicles collecting data, to 3.5, to 4.8 million, etc., adding more and more data/edge cases to train on.
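That fleet-growth claim is simple compound growth, which can be sketched in a few lines (numbers taken from the post: a 2.5M-vehicle fleet, ~1M vehicles added in the first year, yearly additions growing 30%; the function name is mine):

```python
# Project the data-collecting fleet size under the post's assumptions:
# start at 2.5M vehicles, add ~1M in year one, and grow the yearly
# additions by 30% each year thereafter. Figures are in millions.
def project_fleet(start_m=2.5, first_year_adds_m=1.0, growth=0.30, years=5):
    fleet = [start_m]
    adds = first_year_adds_m
    for _ in range(years):
        fleet.append(round(fleet[-1] + adds, 2))  # fleet after this year
        adds *= 1 + growth                        # next year's additions
    return fleet

print(project_fleet())  # [2.5, 3.5, 4.8, 6.49, 8.69, 11.55]
```

The first three values (2.5, 3.5, 4.8 million) match the progression given in the post.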
Ag with kids
Premium said:

Ag with kids said:

Premium said:

Tesla is what I'm talking about right now. No one is in the same stratosphere as Tesla on the tech.
According to Tesla, they're only operating at SAE Level 2 which isn't considered automated driving.


Quote:

Officially, Tesla describes Autopilot as "an SAE Level 2 driving automation system designed to support and assist the driver in performing the driving task," as cited by NHTSA.




They also have "a total of 17 fatalities and 736 crashes since 2019"...


Some don't understand how quickly things change. First, show me it was Tesla tech that caused deaths.

Second, they've already scrapped the old coding of automated driving and have gone to visual machine learning. It isn't even widely released yet from what I've read, but it's here.

They said you couldn't land and reuse rockets, yet here we are. 172 consecutive landing recoveries as of last night.
And...there goes the goalpost...

FWIW, I'm not against autonomy (in fact, it's a big part of my job)...

I just get tired head at the deification of some of these companies and technologies...
Ag with kids
GenericAggie said:

I would like to see comparative data; fatalities/injuries vs other auto makers in totality.
Well, other automakers' hours would dwarf the automated-driving vehicles', so they'd have more accidents and fatalities.
Premium
Ag with kids said:

Premium said:

Ag with kids said:

Premium said:

Tesla is what I'm talking about right now. No one is in the same stratosphere as Tesla on the tech.
According to Tesla, they're only operating at SAE Level 2 which isn't considered automated driving.


Quote:

Officially, Tesla describes Autopilot as "an SAE Level 2 driving automation system designed to support and assist the driver in performing the driving task," as cited by NHTSA.




They also have "a total of 17 fatalities and 736 crashes since 2019"...


Some don't understand how quickly things change. First, show me it was Tesla tech that caused deaths.

Second, they've already scrapped the old coding of automated driving and have gone to visual machine learning. It isn't even widely released yet from what I've read, but it's here.

They said you couldn't land and reuse rockets, yet here we are. 172 consecutive landing recoveries as of last night.
And...there goes the goalpost...

FWIW, I'm not against autonomy (in fact, it's a big part of my job)...

I just get tired head at the deification of some of these companies and technologies...


No other company has 500K beta drivers feeding a new vision-based video machine-learning system. All the legacy auto companies will be licensing their tech before long.
GenericAggie
Ag with kids said:

GenericAggie said:

I would like to see comparative data; fatalities/injuries vs other auto makers in totality.
Well, other auto makers hours would dwarf the automated driving vehicles so they'd have more accidents and fatalities.


Comparative data = comparative.

Number of average hours driven per driver?
hph6203
Ag with kids said:

GenericAggie said:

I would like to see comparative data; fatalities/injuries vs other auto makers in totality.
Well, other auto makers hours would dwarf the automated driving vehicles so they'd have more accidents and fatalities.
You control for accidents/fatalities on a per-mile basis. A driver using Autopilot supposedly gets in half as many accidents as the average driver, but the average driver doesn't have the advanced driver-assistance systems a Tesla does, so it's hard to say a Tesla is better than everything else; what you can say is that a Tesla with Autopilot is better than a Tesla without it (because they can query their own fleet for data).


The problem with the FSD software is that you can see whether you're getting closer to your goal of a system that travels x miles without an intervention, but you can't really predict how long it will take to get there, because it goes through phases of rapid improvement, phases of regression, and phases of stagnancy. It has been getting better over time, but the improvement isn't linear or predictable. So you may have a goal of 100,000 miles between interventions, climb to 75,000 miles, and hit a stall, or even a wall where it doesn't improve any more.


The recent rewrite is supposed to make the progression more predictable, but it has not yet been released to the public. The system in that video appears significantly improved over the currently available one, because the biggest issue with the current system isn't that it's too aggressive; it's that it's too cautious. The roundabout section is a good example: the current system would pause at the roundabout and struggle to decide to exit to follow its path, but in Elon's video it enters and exits smoothly.
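The per-mile normalization described above is trivial to make concrete; here's a minimal sketch (the crash and mileage figures below are made up for illustration, not real Tesla or NHTSA numbers):

```python
# Compare crash rates per million miles rather than by raw counts.
# All figures are hypothetical, for illustration only.
def crashes_per_million_miles(crashes, miles):
    return crashes / (miles / 1_000_000)

# A fleet that drives far more miles can have more total crashes
# yet still be safer per mile:
fleet_a = crashes_per_million_miles(crashes=736, miles=3_000_000_000)
fleet_b = crashes_per_million_miles(crashes=200, miles=400_000_000)
print(f"A: {fleet_a:.2f}/M mi, B: {fleet_b:.2f}/M mi")  # A: 0.25/M mi, B: 0.50/M mi
```

This is why raw fatality counts alone (as in the "17 fatalities and 736 crashes" quote) don't settle the safety question without the denominator.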
Ag with kids
hph6203 said:

They're monitoring human drivers to see how they interact with the world and then fine tuning based upon failure states. So it starts broad and then they narrow the focus to sculpt the system into operating appropriately. So in that video they discuss the fact that 99.5% of drivers don't come to a full a complete stop at stop signs (rolling stops) so they query the fleet and show it examples of the .5% of scenarios where the vehicles do stop at stop signs in order to get the system to recognize that as the appropriate behavior.

They have 2.5 millionish vehicles on the road in their primary development area (North America) and can pull up scenarios like a vehicle pulling a wobbly trailer, or a ladder flying off the back of a truck and show the system how to respond to those scenarios through actual video and through simulation of those events. They also have ~500,000 beta testers that accumulate around 1,000,000 miles/day and they get fed instances of disengagements from the fleet so they understand where the system is not performing as it should.

From a regulation perspective they operate at SAE Level 2, which allows them to operate on that large of a scale, because the liability remains with the driver, rather than the system. In practical terms Autopilot is never designed to be any more than a SAE level 2 system, because it is restricted to highways, but FSD Beta (different than autopilot) is more like a SAE level 3 system, but operates as a driver assist system. As it improves they can extend the duration between interventions of the system until it gets to the point that there's an intervention every 10 or 100,000 miles (whatever regulators expect) between interventions and they can then deploy a level 4 or 5 system without ever officially going through the level 3 stage.

It has been a pattern of two steps forward, one step back for awhile, so they're continually improving the system it's just a matter of how long it takes to go from 5 miles between interventions to 10, to 100, to 100,000 (actual time between interventions is unknown, but you can watch videos of people going long periods without intervening on YouTube).


Tesla has 3 different driver assistance tiers.

Autopilot - comes with every vehicle, basically just adaptive cruise control and lane keeping

Enhanced Autopilot ($6000): adds lane changes (I.e. it will navigate around slow moving traffic), and navigation so it will take exits to follow your plotted course on highways.

FSD Beta ($12000): Adds driving on city streets. So you can get in your car, back out of your driveway, and activate the system and it will stop at stop signs, stop lights, make left and right hand turns and right on red/left across traffic etc etc. The driver still maintains liability for the actions of the vehicle in the same way a person using lane keep/adaptive cruise control is responsible for the vehicle.


Tesla is adding about a million vehicles to their fleet right now, with a 30% growth rate every year to that number (previously 50%, but they've mostly saturated current models and won't be adding another mass adoption vehicle until 2025/2026ish). So they'll go from 2.5 million vehicles collecting data, to 3.5, to 4.8 million etc being able to add more and more data/edge cases to train on.
So, they don't incorporate any real data from when the vehicles drive? That is, it doesn't use any of the sensor data to feed back into the algorithms?
Ag with kids
Premium said:

Ag with kids said:

Premium said:

Ag with kids said:

Premium said:

Tesla is what I'm talking about right now. No one is in the same stratosphere as Tesla on the tech.
According to Tesla, they're only operating at SAE Level 2 which isn't considered automated driving.


Quote:

Officially, Tesla describes Autopilot as "an SAE Level 2 driving automation system designed to support and assist the driver in performing the driving task," as cited by NHTSA.




They also have "a total of 17 fatalities and 736 crashes since 2019"...


Some don't understand how quickly things change. First, show me it was Tesla tech that caused deaths.

Second, they've already scrapped the old coding of automated driving and have gone to visual machine learning. It isn't even widely released yet from what I've read, but it's here.

They said you couldn't land and reuse rockets, yet here we are. 172 consecutive landing recoveries as of last night.
And...there goes the goalpost...

FWIW, I'm not against autonomy (in fact, it's a big part of my job)...

I just get tired head at the deification of some of these companies and technologies...


No other company has 500K beta drivers with new video machine learning based on visual. All legacy auto companies will be licensing their tech before long.
Video...radar...they're all electromagnetic energy being recorded and used to feed algorithms.
hph6203
Not 100% sure I'm following the question, but it's not direct feedback from the vehicle into the training; it's not real-time training. The way it works: the entire fleet (using the software or not) collects sensor data to create a data set; the system processes a broad subset of that data to create a basic driving system; that system is run through simulations to determine its effectiveness; fail states are discovered; they then narrow the fleet's sensor data to find successes in those fail-state scenarios; those successes are fed back into the system; the system learns from them and goes back through simulations to test how well it has learned from the videos.

Intermittently the software reaches a publish state, where it is sent out to the fleet to be tested in real-world scenarios. The fleet discovers fail states through instances where the driver intervenes (hits the gas pedal to encourage the system to be more aggressive), disengages (hits the brake to stop what it's doing, or turns the wheel to avoid an error), or doesn't intervene or disengage but indicates to the system that it did not operate optimally (for example, an unnecessary lane change that didn't require disengagement but wasn't what it should have done). Those fail states are fed back into the central system, analyzed, and labeled, and they can then go back to the fleet to find similar scenarios where the driver/system navigated successfully, repeating the process described in the first paragraph.


The software runs locally on two independent computers which, if I remember right, check against each other and can operate independently in the event of a failure (one computer fails, the system doesn't), but the training/learning is done remotely and the system gets updated in batches.
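The loop described above (train on a broad set, simulate, find fail states, mine the fleet for successes in those scenarios, retrain, publish) can be sketched as a toy; the structure and every name here are my own guesses at the shape of the process, not Tesla's actual pipeline:

```python
# Toy version of the fleet feedback loop described above. Entirely
# schematic: "model" is a dict, and "training" just counts examples.
def data_engine_cycle(model, fleet_clips, disengagements, train, simulate):
    """One iteration: gather failures, mine fleet for successes, retrain."""
    failures = simulate(model) + disengagements      # simulated + real-world
    for scenario in failures:
        # Query fleet video for clips where drivers handled this scenario well
        successes = [c for c in fleet_clips
                     if c["scenario"] == scenario and c["handled_well"]]
        model = train(model, successes)
    return model

# Minimal stand-ins to exercise the loop once:
clips = [{"scenario": "stop_sign", "handled_well": True},
         {"scenario": "stop_sign", "handled_well": False}]
model = data_engine_cycle(
    {"trained_on": 0}, clips,
    disengagements=["stop_sign"],                    # reported by beta fleet
    train=lambda m, s: {"trained_on": m["trained_on"] + len(s)},
    simulate=lambda m: [],                           # no simulated failures here
)
print(model)  # {'trained_on': 1}
```

The point of the sketch is the shape of the cycle: failures come from both simulation and the real fleet, and retraining draws on fleet examples of the same scenario handled well.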
Ag with kids
hph6203 said:

It's not a direct feedback from the vehicle. Meaning that it's not a real time training. The way it operates is the entire fleet (using the software or not) collects sensor data to create a data set, the system processes that data to create a basic system for driving, that system is run through simulations to determine its effectiveness, fail states are discovered and then they narrow the data set of sensor data from the fleet to find successes in those fail state scenarios, they feed those successes back into the system, the system learns from those successes and it goes back through simulations to test how well it has learned from the videos.

Intermittently the software hits a publish state where it is sent out to the fleet to be tested in real world scenarios and the fleet discovers fail states by instances where the driver intervenes (hits the gas pedal to encourage the system to be more aggressive) or disengages (hits the brake pedal to get it to stop doing what it's doing, turns the wheel to avoid an error), or the driver does not intervene or disengage, but indicates to the system that it did not operate optimally (if it, for example changed lanes unnecessarily that didn't require disengagement but wasn't what it should have done). Those fail states are fed back into the central system, analyzed and labeled for the scenarios and they can then go back to the fleet to find similar scenarios where the driver/system successfully navigated the scenario, and repeat the process described in the first paragraph.


The software is run locally on two independent systems and, if I remember right, they internally check against each other and can operate independently in the instance of a failure (one computer fails, the system doesn't fail), but the training/learning is done remotely and the system gets updated in batches.
The aerospace world requires triple redundancy. I'm surprised a system operating in such close proximity to other vehicles doesn't do that. I know there's no requirement, but there is precedent they could look to.

It would be interesting to see how they do their offline feedback testing.
hph6203
I'll add that there's an additional step I forgot to mention: the system is run in shadow mode, meaning it doesn't actually take control of the vehicle; it just runs as an analyzer to see what it predicts should be done versus what is actually done, and the deviation between the two is analyzed to determine how well aligned with reality it is.
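Shadow mode as described (predict but never act, then measure deviation from the human's actual input) might look something like this; the field names and threshold are purely illustrative:

```python
# Shadow-mode evaluation sketch: the model predicts a control action but
# never acts; we flag frames where its prediction deviates from the
# human's actual action. Threshold and field names are illustrative only.
def shadow_deviations(frames, predict, threshold=0.2):
    flagged = []
    for frame in frames:
        predicted = predict(frame)                 # what the system would do
        actual = frame["human_steering"]           # what the driver did
        if abs(predicted - actual) > threshold:    # large disagreement
            flagged.append(frame["id"])            # log for later analysis
    return flagged

frames = [{"id": 1, "human_steering": 0.0},
          {"id": 2, "human_steering": 0.5}]
print(shadow_deviations(frames, predict=lambda f: 0.1))  # [2]
```

Flagged frames would then feed the same retraining loop described earlier, without the model ever having driven the car.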
hph6203
These are the last 3 major presentations that Tesla has done with respect to their FSD software. Most of it is over my head, but I'm fairly sure I have the basics of how it operates down.

https://www.youtube.com/live/j0z4FweCy4M?si=kxycvN9e8ppMWRNG

https://www.youtube.com/live/ODSJsviD_SU?si=5evU0N9fLBzyTiDm

The timestamp link isn't working since it was a livestream, but the FSD section starts 1h20m18s into the video below. The rest is about Tesla's overall roadmap, material constraints, how they're driving down costs, and how they're rapidly deploying charging infrastructure. The talk with respect to FSD/real-world AI is only about 15 minutes long.

https://m.youtube.com/live/Hl1zEzVUV7w?si=fjdlFSyCSLjPeI_t%3Ft%3D4818s
BigRobSA
hph6203 said:

These are the last 3 major presentations that Tesla has done with respect to their FSD software. Most of it is over my head, but I'm fairly sure I have the basics of how it operates down.

https://www.youtube.com/live/j0z4FweCy4M?si=kxycvN9e8ppMWRNG

https://www.youtube.com/live/ODSJsviD_SU?si=5evU0N9fLBzyTiDm

https://m.youtube.com/live/Hl1zEzVUV7w?si=fjdlFSyCSLjPeI_t%3Ft%3D4818s



Here...let me help:


"It makes the car go "Vroooom!" ".
Ag with kids
hph6203 said:

I'll add that there's an additional step that I forgot to mention where the system is run in shadow mode, meaning that it doesn't actually take control of the vehicle, it just runs as a analyzer to see what it predicts should be done versus what is actually done and the deviation between the two is analyzed to determine how well aligned with reality it is.
So, open loop...
TexasRebel
Who said rockets couldn't land and be reused? With 1960s tech, of course not.

Problem solving comes in when you see something in the road you've never seen before. You, as a human, can decide on a course of action that you deem gives you the best chance of survival. Software cannot.

Mechanical failures are another spot. When you apply the service brakes, you expect certain feedback. Software can do this, too. But if you get unexpected feedback, what do you do? Stop dead in I-35 traffic with the e-brake? Hit the softest object nearby? Find a runaway ramp? Maybe a wheel falls off. A human can learn the new control characteristics in a heartbeat. Software has no heart, nor the ability to learn how a suddenly three-wheeled car steers.

Are some humans better at problem solving than others? Yes. Are those humans better drivers? Also, yes.

Also: do you really want to bet your life, willingly or unwillingly, against a kernel panic?

Chess and Go are deep-thought games. Computers are very good at creating and searching game trees really fast.

How does AlphaZero do if you suddenly change a fundamental rule, creating game states it knows to be impossible according to its database?
 