Mr President Elect said:
VitruvianAg said:
Deputy Travis Junior said:
Waiting for it to start right now.
I don't expect much - Tesla's weird insistence that its self-driving shouldn't use LIDAR hamstrings their tech and has put them far behind Waymo. But, I'm still interested to see where they are and what they've done. Also interested to see what they're doing with their Optimus robot. The worldwide demographic decline has created a huge surge of interest in robots (they will supplement the shrinking workforce), so Tesla could be moving into a goldmine market.
Do you use LIDAR when you're driving?
Waymo has a geofence problem; their system requires frequent, accurate mapping - don't change anything... They have limited heuristic capabilities, hence the geofencing... and each vehicle requires remote human intervention, AND they play with the definition of same.
FSD is AI... the same AI that lets the robots get around. They have excellent hand-eye coordination and dexterity, and clearly the robots can hear; not sure the cars do. Elon might just be the real Eldon Tyrell.
Yep, I was going to post about the same. It drives me crazy when people think Waymo is ahead of Tesla. Completely different approaches and IMO Waymo is the one Way behind. They have to limit it to the roads it has mapped to every detail and when something goes awry it can't handle it. It will be a long time before they are able to handle "real world" scenarios as they have purposely hamstrung themselves.
Also, the LIDAR thing... I thought it was commonly accepted that ditching LIDAR was the right move - haven't others followed suit? There is such a thing as "too many sensors": not knowing which one to trust the most and having to do a total bail-out. The cameras are more capable than human eyes already, so why do you need some technology that doesn't work in all conditions and is only for up close? I think the initial pushback was in part due to how hard a computer-vision problem it would be to do FSD from vision alone. That was true, but it looks like they are nearing that finish line.
You don't necessarily need to know which sensor to trust the most. You could mesh the sensor data and let the algorithm training learn how to interpret it under different conditions. Our brains do this. When we see people talk we sync what we hear with what we see, but even when they don't match, like when you have a bad dub in a movie, you're not confused and still default to the sound. Your brain doesn't like it, but you can still hear what's said. Same with seeing loud things like clapping from far away. Your brain knows to stop syncing and delay sound beyond a certain distance. You have tons of different sensory inputs all the time. Your brain takes them all together to build your reality and mental picture. There's no reason a computer can't do the same.
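The "mesh the sensor data and let the training sort it out" idea has a classic textbook form: inverse-variance weighting, where each sensor's reading is weighted by how much you trust it under current conditions. This is just an illustrative sketch with made-up numbers (the `fuse` function and the variance values are mine, not anything Tesla or Waymo actually uses); real stacks typically learn the weighting end-to-end rather than hand-coding it:

```python
# Toy sensor fusion: combine a camera-based and a LIDAR-based range
# estimate of the same object. Each sensor reports a distance plus a
# noise variance that depends on conditions (e.g. LIDAR degrades in
# heavy rain, cameras at night). Hypothetical numbers throughout.

def fuse(est_a, var_a, est_b, var_b):
    """Inverse-variance weighted fusion of two independent estimates."""
    w_a = 1.0 / var_a
    w_b = 1.0 / var_b
    fused = (w_a * est_a + w_b * est_b) / (w_a + w_b)
    fused_var = 1.0 / (w_a + w_b)  # always smaller than either input variance
    return fused, fused_var

# Camera says 50.0 m but it's dark (variance 4.0);
# LIDAR says 48.0 m and is sharp up close (variance 1.0).
dist, var = fuse(50.0, 4.0, 48.0, 1.0)
print(dist, var)  # 48.4 0.8 - pulled toward the more trusted sensor
```

The point of the sketch: the fused estimate isn't "pick one sensor", it leans toward whichever is more reliable right now, and the combined uncertainty is lower than either sensor alone - which is the counterargument to the "too many sensors, don't know which to trust" worry.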
The cameras are more capable than human eyes, but LIDAR and cameras have different capabilities and strengths. Neither works in all conditions. It's like the difference between sight and sound. You can get away without having one, but life is much easier and better when you have both.
FSD from vision alone is a pretty hard problem. It's a pretty hard problem in general because it's pretty much a 95/5 problem: 5% of the work gets you 95% of the way there. Getting the last mile is a ***** though. IMO, it's been plagued by optimism bias from the start and probably still is. Reliable FSD from vision alone probably requires something closer to general object recognition or generalized AI, because driving is a task that goes beyond staying in a lane and maintaining distance. I used to think FSD would be much easier than it has turned out to be, but the more I've thought about it, the more I've come to realize how much decision making and input goes into driving that we take for granted. I think everyone, including Musk, clearly underestimated the scope and complexity of the problem and the amount of work necessary.