
Tesla Full Self-Driving Capability (FSD) To Make Quantum Leap In Its Evolution

A recent series of tweets and replies from the ever-driven Elon Musk points to big upcoming changes to Autopilot and its smarter sibling, FSD (Full Self-Driving).


A large number of acronyms are being thrown around. Even as an IT professional, I'm struggling to keep up with the jargon, constructs, and theories. That makes me feel for anyone else in the same boat who's trying to decipher this mumbo jumbo.


I won't use GPT-3 to extrapolate all that text, as patronizing as that might sound, and have it explain it to you as if you were a 2nd grader. Believe it or not, that has been done: as a proof of concept, GPT-3 was used to simplify user license agreements.


So, what does all that gibberish mean to us mortals?
As complex as it may sound, it's not. It's about a process you, I, and the rest of humankind have practiced since the beginning of time.


We've learned to work through our problems (or most of us have, so let's leave politics out of it), and in turn we use what we've learned to make our lives easier or to avoid repeating mistakes.


The same concept is at work here, except that instead of teaching ourselves or our kids, we're teaching computers to do the same.


GPT-3 is considered an AGI, or Artificial General Intelligence, in its infancy. I won't bother with the metrics or where exactly it stands, since that would need more digging than I, or for that matter you, might care to do. Let's just say that it has a steady diet of data it can consume, and with a very large number of parameters at its disposal (ca. 175 billion), it's able to translate, write programs, and even anticipate your requests from very little input on your part.


Elon has voiced his concerns about AGI and how dangerous it might be for humankind (to make it quick, think of the Terminator movie plot). With that in mind, enter Narrow AI: an AI that, unlike its smarter and potentially homicidal sibling AGI, can comprehend only the specific tasks needed for its function.


This is elemental for FSD innovation, legislation, and implementation.


So, how do you make such a thing?


Elon's approach: instead of spending decades in a test environment where someone would have to spoon-feed the system, he deployed a predecessor system called Autopilot (Beta). Equipped with ultrasonic sensors, radar, and camera(s), the system is always passively on while driving.


Autopilot has been available in all Tesla vehicles since late 2014, and its abilities have grown steadily over time: from driving on highways for you, reducing micro-adjustments and the fatigue they cause, through automatic parking, to self-parking without you even being inside the vehicle.


In the meantime, the system was gathering information, even if the owner hadn't purchased the feature or ever engaged it. No, it's not listening to your farts or the things you've said about your wife or boss while driving.
Whether Autopilot was engaged and you had to take over for whatever reason, or it was simply monitoring your driving and comparing it to its own virtual driving, the system compared the two and reported those details back to the main system: the neural network, or NN for short.


This is where all that data is compiled, processed, and compared, and the resulting product is finally pushed back to the car.
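To make the idea concrete, here is a minimal sketch of that "compare and report" loop. Everything here is hypothetical, the frame fields, the steering-angle threshold, and the helper names are my own illustration, not Tesla's actual telemetry format: the car keeps only the moments where the human driver and the model disagreed, since those are the frames worth learning from.

```python
import json
from dataclasses import dataclass, asdict

@dataclass
class Frame:
    speed_mph: float
    steering_deg: float        # what the human driver actually did
    model_steering_deg: float  # what the NN would have done ("virtual driving")

def collect_disagreements(frames, threshold_deg=5.0):
    """Keep only frames where human and model differ enough to learn from."""
    return [f for f in frames
            if abs(f.steering_deg - f.model_steering_deg) > threshold_deg]

frames = [
    Frame(62.0, 1.0, 1.5),    # close agreement: nothing to report
    Frame(60.0, 12.0, 2.0),   # human swerved, model didn't: report it
]
report = [asdict(f) for f in collect_disagreements(frames)]
print(json.dumps(report))  # only the disagreement frame is uploaded
```

The design point is bandwidth: the fleet generates far too much video to upload wholesale, so filtering to disagreements (however they are actually defined) is what makes fleet-scale learning practical.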


This allows for a quick succession of improvements in a very short time. Even so, it hasn't been quick enough to bring FSD to completion within Elon's previously announced timeframe; it was supposed to be out already. It's fair to say that, since the future of Tesla depends on it, nobody understands better than Elon that this isn't something to rush to consumers.

Consider for a moment the implications of turning the wheel over to what some consider a box that only understands the meaning of on and off. Questionable incidents that drivers have blamed on Autopilot aside, the system has shown a statistical improvement over driving without the assist. The natural progression, if that can be attributed to electronic systems, is to ask for even more improvement, which Elon has already announced as 4D processing. Currently, he's the only person on the planet able to test it.


As previously stated, the system uses video camera(s) along with other sensors, but to process only what it needs, it works from freeze frames of the task at hand.


While the system is currently doing very well, one component is still missing to complete the whole 'picture': time.


So, the new system will analyze video instead of just a single image to evaluate and react while driving. With video, from which computers can now extrapolate 3D objects (a process called labeling), plus the 4th dimension (time), the system can predict situations more accurately.
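The shift is easier to see in terms of input shapes. The toy dimensions and function names below are my own assumptions for illustration: a freeze frame is a 3D array (height, width, color channels), while a short clip stacks several frames along a fourth axis, time, which is what lets a model infer motion rather than just position.

```python
import numpy as np

H, W, C = 96, 128, 3   # toy frame size; real camera resolution differs

def single_frame_input(frame):
    # 3D input: the model sees one instant, with no motion information
    return frame  # shape (H, W, C)

def clip_input(frames):
    # 4D input: stacking T consecutive frames lets the model estimate
    # velocity and predict where objects are heading
    return np.stack(frames, axis=-1)  # shape (H, W, C, T)

clip = [np.zeros((H, W, C), dtype=np.uint8) for _ in range(8)]
print(single_frame_input(clip[0]).shape)
print(clip_input(clip).shape)
```

A car crossing an intersection looks identical in any single frame whether it is moving at 5 mph or 50; only the stacked time axis carries that difference.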

In Elon's own words, "The FSD improvement will come as a quantum leap…", with a limited public release within the next 10 weeks.


Hopefully, I've made some sense of all the Twitter soundbites. With that, I hope you stay safe and use your Autopilot!

About the Author

Tom Simic

