
Understanding the Use of LiDAR in Smart and Autonomous Vehicles: It’s More Than Just a Technical Issue
In the realm of autonomous driving, a sharp viewpoint has emerged: LiDAR is seen as the “Marxism” of the industry—replacing emergent understanding with centralized measurement and using data as a substitute for comprehension. While this statement is compelling, it only tells part of the story.
On the other side of the world, the reality observed at the 2026 Beijing Auto Show reflects significant advancements: the Leap A10 has brought LiDAR into vehicles priced around 80,000 yuan, models like the BYD Seagull now offer optional LiDAR, Changan has introduced mass-produced LiDAR vehicles at the 100,000 yuan mark, and Huawei’s latest ADS solution uses up to six LiDAR units for enhanced perception. Notably, two Chinese companies, Suoteng Juchuang and Hesai Technology, have captured over 80% of the global market share for automotive LiDAR.
Interestingly, a recent technology open day held by Hesai shifted the narrative of the LiDAR industry from “automotive sensors” to “physical AI infrastructure,” showcasing groundbreaking products like Picasso, the world’s first 6D full-color LiDAR chip, and the 4,320-line ETX LiDAR. This signals that LiDAR’s second act has begun, with robots, rather than cars, in the lead role.
While the Elon Musk and Xiaopeng camps proclaim that “pure vision is the only correct path,” the Huawei, BYD, and Waymo factions are opting for a hardware-heavy approach, and players like Hesai are betting on humanoid robots and world models. So, who is right? This article dissects the issue across five dimensions—technical principles, engineering implementation, commercial viability, geographical differentiation (the real situation in Europe, the U.S., and Japan), and physical AI (the necessity of LiDAR for robots)—along with three temporal slices (current, 2028, and beyond) to predict whether LiDAR should be utilized in vehicles or robots. The answer lies in the half of the question that has not been asked.
Conclusion First: LiDAR is the “Crutch” of Autonomous Driving
A person with a broken leg needs crutches; once healed, they discard them. However, a severely ill patient, like a Robotaxi, may need crutches for life. This does not imply that LiDAR is “useless” or that “pure vision will prevail.” Rather, the value of a sensor is determined by the product of algorithmic capability, scene tolerance, and cost structure. These three variables differ significantly across time, vehicle models, and scenarios.
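To put the “product of three variables” into something concrete, here is a deliberately toy sketch in Python. The scoring function and every number in it are hypothetical, purely to illustrate why the same sensor can be a rational choice in one setting and a waste in another; nothing here is a real valuation model.

```python
def sensor_value(algorithmic_capability: float,
                 scene_tolerance: float,
                 cost_efficiency: float) -> float:
    """Toy model mirroring the article's framing: sensor value as the
    product of algorithmic capability, scene tolerance, and cost structure
    (expressed here as cost efficiency). All inputs are hypothetical
    scores in [0, 1]."""
    return algorithmic_capability * scene_tolerance * cost_efficiency

# Hypothetical scores for the same LiDAR on a consumer car,
# before and after the price collapse described below.
print(sensor_value(0.6, 0.5, 0.02))  # 2016-style $80,000 unit: the cost term kills the product
print(sensor_value(0.6, 0.5, 0.90))  # 2026-style sub-$200 unit: an easy yes
```

The point of the multiplication is that any single term near zero collapses the whole product, which is exactly what a five-figure price tag did a decade ago.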
Keep this in mind as we work through the five dimensions below.
Dimension One: Technical Principles—Musk is Right, but Only 70%
The friend quoted at the beginning of this article was referencing Rich Sutton’s “Bitter Lesson”: a general-purpose approach that relies on scaling compute and data will ultimately outperform handcrafted architectures. This principle has been validated across 30 years of AI development, from Deep Blue to AlphaGo to GPT.
In the context of autonomous driving, this logic suggests that no matter how many sensors you pile on, you cannot surpass a sufficiently large end-to-end neural network combined with ample real-world data. The first half of this judgment is correct: LiDAR fundamentally provides geometric measurement. It can tell you “there is something there,” but not whether it is a plastic bag or a rock, nor how it will move next. That last stretch of reliability is a cognitive problem, not a perceptual one, and Musk is right to point this out.
However, there’s a catch: Sutton’s bitter lesson assumes that we can wait for algorithms to surpass hardware. In the autonomous driving industry, those who cannot wait may face deadly consequences. An intriguing physical fact arises here: LiDAR and cameras have different failure modes. Cameras may go “blind” in strong backlighting, at tunnel entrances, in complete darkness, or amid reflections during heavy rain, while LiDAR’s point cloud quality plummets in fog, heavy rain, or snowfall (Xiaopeng’s tests show effective detection distance dropping from 200 meters to 30 meters in heavy rain). Yet their failure scenarios overlap very little. This is the value of “redundancy.” It’s not merely “one isn’t enough, so add another”; it’s that “A’s weaknesses are precisely B’s strengths.” This is why Waymo’s sixth-generation system retains sixteen 17-megapixel cameras, short-range LiDAR, imaging radar, and external microphones—not to solve perceptual accuracy issues but to mitigate single-point failure risks.
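The redundancy argument can be put into rough numbers. Below is a minimal sketch using hypothetical per-scenario degradation probabilities (illustrative values, not measurements from any test) to show why what matters is not how reliable each sensor is on its own, but how little their failure scenarios overlap.

```python
# Hypothetical, illustrative probabilities that each sensor is badly degraded
# in a given scenario. The camera and LiDAR columns are deliberately
# anti-correlated across scenarios, mirroring the failure modes described above.
scenarios = {
    # scenario: (P(camera degraded), P(LiDAR degraded))
    "strong backlight / tunnel exit": (0.30, 0.02),
    "total darkness":                 (0.40, 0.01),
    "heavy rain":                     (0.15, 0.20),
    "dense fog":                      (0.10, 0.35),
    "clear daytime":                  (0.01, 0.01),
}

for name, (p_cam, p_lidar) in scenarios.items():
    # Optimistically assume the two degradations are independent within a scenario.
    p_both = p_cam * p_lidar
    print(f"{name:32s} camera={p_cam:.2f}  lidar={p_lidar:.2f}  both={p_both:.4f}")
```

Because the weak spots barely overlap, the “both degraded at once” probability stays well below either sensor alone in every row; if the two failed in the same scenarios, adding the second one would buy almost nothing.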
This isn’t to say that Tesla’s pure vision approach is flawed. Tesla is likely to achieve success in the long run, but it is betting on a future that may take 5 to 8 years to materialize. In the interim, should others not provide users with safety measures?
Dimension Two: Engineering Implementation—The “Verifiability” Dilemma of End-to-End Black Boxes
The biggest headache for engineers is not whether a solution will work, but how to explain it when something goes wrong. End-to-end large models have raised the ceiling of autonomous driving: Tesla’s FSD V12, V13, and V14 keep getting smarter, and Xiaopeng is targeting “less than one takeover per 100 kilometers.” But this has created a harder problem: verifiability. When an end-to-end neural network hesitates for 2 seconds at an intersection, or fails to brake in time when a child suddenly appears, engineers cannot debug it. They cannot print intermediate variables, draw a decision tree, or write a unit test to guarantee the same mistake won’t happen again. All they can do is keep feeding it data and retraining, hoping the next version will be better.
In this context, LiDAR serves as a “safety net.” It does not participate in the large model’s intelligent decision-making; it performs exactly one task: if there is an object 3 meters ahead, whatever it is, brake. This is why Li Xiang of Li Auto made a statement that resonated across the industry: “If Musk came to China, Tesla would also keep LiDAR.” Given China’s complex road conditions, with electric two-wheelers, erratic drivers, construction barriers, and pedestrians suddenly crossing the street, the value of a safety net in extreme long-tail scenarios is at its maximum, and LiDAR happens to be the most reliable and direct sensor for the job.
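To make “safety net” concrete, here is a minimal sketch of what such an independent last-resort check over a LiDAR point cloud can look like. It assumes points are already in the vehicle frame (x forward, y left, z up, in meters); the corridor dimensions, thresholds, and the `lidar.get_point_cloud()` / `brake()` calls are all hypothetical, and a production AEB path is of course far more involved.

```python
import numpy as np

def emergency_stop_needed(points: np.ndarray,
                          corridor_half_width: float = 1.0,
                          max_height: float = 2.0,
                          brake_distance: float = 3.0,
                          min_points: int = 5) -> bool:
    """points: (N, 3) LiDAR returns in the vehicle frame.
    Returns True if enough returns fall inside a forward corridor
    closer than brake_distance, regardless of what the object is."""
    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    in_corridor = (
        (x > 0.0) & (x < brake_distance) &
        (np.abs(y) < corridor_half_width) &
        (z > 0.1) & (z < max_height)   # ignore ground returns and overhead structure
    )
    return int(in_corridor.sum()) >= min_points

# cloud = lidar.get_point_cloud()        # hypothetical driver call
# if emergency_stop_needed(cloud):
#     brake()                            # hypothetical actuator call
```

The check is dumb on purpose: there is no learned model in the loop, so it can be unit-tested, its failure cases can be enumerated, and it does not care whether the obstacle is a plastic bag or a child.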
After the Xiaomi SU7 high-speed smart driving incident in March 2025, which sparked widespread discussion, Chinese automakers collectively shifted towards a more conservative approach: Xiaomi’s YU7 now comes standard with LiDAR and 4D millimeter-wave radar, while Li Auto’s L series has also brought LiDAR into its Pro versions. This doesn’t imply that Chinese automakers pile on hardware because they lack technology. On the contrary, it reflects that, while algorithms cannot yet independently guarantee safety, hardware is being used to protect users’ lives first. Engineering decisions are not about selecting the most elegant solution; they are about choosing the least regrettable one.
Dimension Three: Commercial Viability—The $200 Revolution and Underlying Factors in China’s Industry
In 2016, a 64-line mechanical LiDAR was priced at $80,000. By 2025, the mass-produced prices of Hesai’s ATX and Suoteng’s MX had dropped below $200. This represents a staggering 400-fold price decrease over a decade. Even more surprising, Hesai’s main ATX product is projected to further drop to $150 in 2026. This is an economic variable that must be taken seriously. When a product’s price decreases by two orders of magnitude, your evaluation must be rewritten.
Musk’s famous quote in 2019, “Anyone relying on LiDAR is doomed,” was predicated on the exorbitant cost of LiDAR. During that era, he was correct. However, by 2026, the cost of a single LiDAR unit had dwindled to 1-2% of the total vehicle BOM—similar to a good central display. Once the cost structure changes, decision-making will follow suit.
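The cost claims above are easy to sanity-check with two lines of arithmetic; the vehicle BOM figure below is an assumed round number for a mid-range EV, not something taken from the article.

```python
price_2016, price_2025 = 80_000, 200        # USD, the figures cited above
print(price_2016 / price_2025)              # 400.0 -> the "400-fold" drop

assumed_vehicle_bom = 15_000                # USD, hypothetical mid-range EV BOM
print(price_2025 / assumed_vehicle_bom)     # ~0.013 -> roughly the quoted 1-2% of BOM
```

Once the sensor is a rounding error in the bill of materials, the decision calculus changes with it.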
This is why BYD is investing in Suoteng and has made the “Heavenly Eye” feature available across the entire Dynasty and Ocean series, down to models priced at 70,000 yuan; Changan aims to put LiDAR into vehicles priced at 100,000 yuan; and Huawei’s ADS has stuck with a fusion approach, with Jin Yuzhi, CEO of its intelligent automotive business, publicly stating that “to achieve L3/L4, LiDAR is essential,” while adding ever more units: as noted earlier, the latest solution uses up to six LiDAR units.
Leap, Geely, and Chery are following suit by integrating LiDAR into lower-end models, while Toyota’s joint venture in China plans to use Hesai’s ATX LiDAR in new models, marking a commitment from traditional Japanese automakers to the Chinese LiDAR supply chain. An underlying narrative that is often overlooked is that China holds a dominant position in the LiDAR supply chain (with an 84% global market share) while being relatively behind in areas such as automotive AI chip manufacturing.
Choosing a pure vision route effectively surrenders one’s defensive moat to the computational advantages of companies like Nvidia and Tesla. Conversely, opting for a fusion approach transforms one’s supply chain advantage into product differentiation.
Some numbers illustrate the scale of the industry:
- As the leading global LiDAR company, Hesai Technology’s financial data indicates that “this business is already thriving”:
- In 2025, over 1.6 million units were delivered, marking a fifth consecutive year of doubled volume, with peak monthly shipments surpassing 200,000 units.
- By 2026, production capacity is set to double to 4 million units, with fully automated production lines averaging one unit every 10 seconds.
- Hesai has cumulatively delivered over 2.4 million units, making it the world’s first LiDAR manufacturer to surpass one million annual deliveries.
- Hesai’s ATX model has already received 4 million orders, with mass production starting in April 2026.
- Nine out of the top ten global Robotaxi companies have chosen Hesai—this includes Baidu’s Apollo, Didi Autonomous Driving, WeRide, Pony.ai, and Motional.
- Hesai has been selected as a LiDAR partner for Nvidia’s DRIVE AGX Hyperion 10 L4 platform.
- The “Galileo” factory in Bangkok, Thailand, is set to commence production in early 2027, marking a global expansion.
Suoteng Juchuang’s data for Q1 2026 is equally impressive: in a single quarter, they shipped 330,000 units and secured agreements with 22 automotive companies for 80 models. These two leading Chinese manufacturers are pushing the entire industry into a new phase with their production capacity and cost structure.
Thus, the collective decision of Chinese automakers to choose fusion is not just an engineering judgment; it is also an industrial judgment. This doesn’t imply that Chinese companies are selecting LiDAR out of protectionism. Rather, it shows that the choice of technology route has never been solely a technical issue; it combines industrial structure, geopolitical considerations, and supply chain security. What is optimal for Musk in the U.S. may not be optimal for Wang Chuanfu in China.
Dimension Four: Geographical Differentiation—Recent Developments in Europe, the U.S., and Japan Expose the Fallacy of a Simple “Route War”
If you only compare Tesla to Chinese automakers, you might think it’s a binary battle of “pure vision vs. fusion.” However, looking at Europe, Japan, and North America reveals a more complex reality with four distinct camps, each offering different solutions.
Europe: once the staunchest L3 + LiDAR camp, it had beaten a full retreat by 2026. This is the biggest news of the past six months, and many Chinese media outlets have yet to catch on:
- Mercedes-Benz Drive Pilot: the world’s first commercially approved L3 system in 2021, featuring 35 sensors + LiDAR and allowing hands-free driving on German autobahns. In January 2026, however, Mercedes announced the system’s suspension; the new S-Class instead uses an L2++ solution without LiDAR.
- BMW Personal Pilot L3: launched in 2024, but the 2026 7 Series has dropped it altogether, reverting to L2.
- Volvo: previously committed to equipping every EX90/ES90 with Luminar LiDAR, it has removed LiDAR from all 2026 models and terminated a five-year contract with Luminar, pushing that supplier to the brink of bankruptcy.
- Polestar 3: following Volvo’s lead, the 2026 version has dropped LiDAR as standard equipment.
European luxury brands have collectively provided an answer over five years: “L3 + LiDAR” as a consumer product is commercially unfeasible. Why? The Mercedes Drive Pilot costs between 6,000 and 9,000 euros, but can only be used under specific conditions—on German highways, during daylight, in good weather, with a preceding vehicle, and at speeds under 95 km/h. After experiencing this limitation, consumers realized that they spent 7,000 euros on a feature with extremely limited functionality, while Tesla’s FSD (L2++) is applicable in nearly all scenarios. Mercedes CEO Ola Källenius remarked after a test drive at CES that the car felt as if it were on rails—referring to the Nvidia-based L2++, not their own L3 Drive Pilot.
A harsh truth has emerged: the concept of L3 as a consumer product may have been a fallacy from the start.
Japan: A conservative approach, betting on LiDAR while relying on external AI. The three major Japanese automakers have adopted a similar strategy—integrating hardware while outsourcing AI development:
- Toyota: Collaborating with Waymo to develop an L4 platform while using Hesai ATX LiDAR in joint venture models set for 2026 mass production.
- Honda: Partnering with the California AI company Helm.ai for a next-generation autonomous driving system using LiDAR + AI, and also investing in FMCW LiDAR company SiLC.
- Nissan: The Ariya test vehicle is equipped with 11 cameras, 5 millimeter-wave radars, and the next-gen LiDAR, in collaboration with UK’s Wayve, aiming for L4 production in 2027.
Note Nissan’s approach—it’s a fusion perception solution, explicitly targeting L4 (skipping the awkward L3). The Japanese have learned from the costs incurred by the Europeans.
The U.S.: A complex landscape with three concurrent paths. The situation in the U.S. is intricate because it features both Tesla and Waymo, each a global benchmark for their respective routes:
- Tesla: The full rollout of FSD V14 is underway, with the Cybercab slated for mass production in April 2026—pure vision is their endpoint.
- Waymo: With $16 billion in financing for 2026 (valued at $126 billion), they are expanding into over 20 cities, using a sixth-generation system with 16 cameras and short-range LiDAR—fusion is their endpoint.
- General Motors Super Cruise: announced an eyes-off, hands-free system with LiDAR for 2028, built on Nvidia’s platform—a clear shift toward LiDAR fusion.
- Zoox (Amazon) and Aurora: Both are utilizing LiDAR.
- Luminar: once a star of the U.S. LiDAR industry, it is facing bankruptcy after losing the Volvo contract—a sign of how hollowed-out the American LiDAR supply chain has become.
The narrative in the U.S. is clear: in the personal vehicle market, Tesla has defined the answer (pure vision); for Robotaxi and commercial fleets, LiDAR remains the standard solution.
When you piece together these four regions, an overlooked key conclusion emerges: the regions really do differ, and the differences sort themselves by level of autonomy. For L2++ (the mass market), vision is primary, and whether to add LiDAR depends on the local supply chain and market preferences: Europe is still catching up and its NOA battle has only just begun, while China bundles LiDAR in, and both choices are reasonable. For L3 (a consumer product confined to limited scenarios), its very existence is being questioned: Mercedes and BMW have backed out, Huawei in China is still pushing hard, and Tesla in the U.S. is skipping it outright.
For L4 (Robotaxi/commercial), LiDAR is a global consensus; Waymo, GM’s Cruise, Nissan, Zoox, Baidu’s Apollo, and Xiaopeng’s Robotaxi all retain LiDAR. The automotive industry is grappling with whether to develop L3 or evolve directly from L2++ to L4. The answer to this question is not merely technical; it may be a composite function of technology, business, and policy.
Moreover, an interesting detail: Mercedes-Benz’s MB.Drive Assist Pro, developed for the Chinese market, utilizes a solution from Chinese company Momenta. Even European luxury brands acknowledge that China’s intelligent driving capabilities have become part of the global benchmark.
Dimension Five: The New Variable of Physical AI—Do Humanoid Robots Really Need LiDAR?
If you thought the story of LiDAR ended here, you are mistaken. What truly excites the industry in 2026 is not L3 or Robotaxi but Physical AI—specifically, embodied intelligence represented by humanoid robots. The debate over “whether to use LiDAR” within this arena is even more intense than in the automotive sector.
Turning Point: Hesai’s “Second Act” Declaration
On April 17, 2026, Hesai Technology held a technology open day considered by insiders as a “watershed” event. This was not an ordinary press conference; it shifted the narrative of the LiDAR industry from “automotive sensors” to “physical AI infrastructure.” They introduced three groundbreaking products, each redefining the industry:
- Picasso SPAD-SoC—The world’s first 6D full-color LiDAR chip. Traditional LiDAR only perceives three-dimensional space (XYZ), but Picasso fuses RGB color information directly into the point cloud at the chip level: each point carries color data with pixel-level spatial and temporal alignment, eliminating the post-processing normally required to fuse camera and LiDAR data (a sketch of that conventional step follows this list). This directly addresses the inherent contradiction of “LiDAR sees geometry but doesn’t understand semantics.”
- ETX 4320-Line LiDAR—The world’s first 6D full-color high-line count platform. The production version will support 1080/2160/4320 line configurations, measuring distances of up to 400 meters at a 10% reflectivity rate, capable of identifying water barriers from 300 meters away and small animals from 280 meters away. Mass production is expected in the latter half of 2026, to be featured in flagship models from 2027 to 2028.
- Kosmo—Spatial intelligent AI hardware designed to transform real-world spatial data from a “luxury” into a “standard resource.” This is the most significant revelation. Kosmo is not intended for vehicles; it is for training embodied intelligence. Its goal is to collect high-fidelity 3D spatial data on a large scale to feed into the world models of robots—essentially creating a “data engine” for the Physical AI era.
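For context on what “post-processing between cameras and LiDAR” usually means in practice (the step Picasso is said to remove), here is a minimal sketch of the conventional late-fusion approach: project each LiDAR point into a camera image through extrinsic and intrinsic calibration, then sample the pixel color. The calibration matrices here are placeholders, and in a real system the two sensors must also be time-synchronized, which is where much of the engineering pain lives.

```python
import numpy as np

def colorize_point_cloud(points_lidar: np.ndarray,    # (N, 3) XYZ in the LiDAR frame
                         image: np.ndarray,            # (H, W, 3) RGB camera image
                         T_cam_from_lidar: np.ndarray, # (4, 4) extrinsic calibration
                         K: np.ndarray                 # (3, 3) camera intrinsics
                         ) -> np.ndarray:
    """Conventional late fusion: returns (M, 6) rows [x, y, z, r, g, b]
    for the subset of points that project inside the image."""
    n = points_lidar.shape[0]
    pts_h = np.hstack([points_lidar, np.ones((n, 1))])      # homogeneous coordinates
    pts_cam = (T_cam_from_lidar @ pts_h.T).T[:, :3]         # into the camera frame
    in_front = pts_cam[:, 2] > 0.1                          # keep points ahead of the lens
    pts_cam = pts_cam[in_front]
    uvw = (K @ pts_cam.T).T
    uv = (uvw[:, :2] / uvw[:, 2:3]).astype(int)             # perspective projection to pixels
    h, w = image.shape[:2]
    ok = (uv[:, 0] >= 0) & (uv[:, 0] < w) & (uv[:, 1] >= 0) & (uv[:, 1] < h)
    rgb = image[uv[ok, 1], uv[ok, 0]]                       # sample pixel colors
    return np.hstack([pts_cam[ok], rgb.astype(float)])
```

A “6D point” in the Picasso sense is essentially the [x, y, z, r, g, b] row above, except produced and aligned on-chip rather than stitched together afterwards from two imperfectly synchronized devices.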
Hesai CEO Li Yifan remarked, “The most critical opportunity of this era is AI, particularly the construction of physical world AI infrastructure and the digitization of the physical world.” In other words, LiDAR is no longer just a vehicle component; it serves as the “eyes” of Physical AI.
The Robot Faction Debate: Pure Vision vs. Fusion
The perception routes in the humanoid robot industry are far more complicated than in the automotive sector. First, let’s examine the two main factions:
- Pure Vision Faction (emulating Tesla’s automotive approach):
- Tesla Optimus: 8 cameras + FSD-derived neural network, entirely without LiDAR. The Fremont factory’s Gen 3 is set for production in 2026.
- Xiaopeng IRON: Equipped with an “AI Eagle Eye” 720° vision system + 3 Turing AI chips (2250 TOPS), completely devoid of LiDAR. He Xiaopeng’s logic is that IRON must share 70% of the AI code with Xiaopeng vehicles, necessitating a pure vision route.
- Early Figure 02: Also relied primarily on vision. The shared philosophy of this faction: humans do not have laser eyes, so robots shouldn’t need them either; moreover, a pure vision route allows the same end-to-end large model to be reused for both vehicles and robots.
- Fusion Faction (LiDAR + cameras + tactile sensing):
- Figure 03 (released in October 2025): Three pairs of stereo RGB cameras + solid-state LiDAR + ToF depth sensors + 6-axis torque sensors + fingertip tactile sensing. This represents a reversal from the pure vision route taken in Figure 02.
- Yushu G1: 3D LiDAR + depth camera coverage, retail price of 99,000 yuan.
- Yushu Go2 robotic dog: Equipped with a standard 4D ultra-wide-angle LiDAR.
- Boston Dynamics Atlas: LiDAR + stereo vision.
- The first humanoid robot from Honor: Features the Hesai JT128 LiDAR, winning a championship in the Beijing Robot Marathon.
- Suoteng Juchuang: Introduced an Active Camera plus the 192-line Airy hemispherical LiDAR, designed specifically for robots.
A significant signal of change: Figure has added solid-state LiDAR to Figure 03. This company, once deeply partnered with OpenAI and now valued at $39 billion, has chosen a path opposite to Tesla’s. The reason is simple: home scenarios (glass, mirrors, dim lighting, cluttered spaces) present greater perceptual challenges than road scenarios, which pure vision cannot handle effectively.
Why Do Robots Need LiDAR More Than Cars? Three Physical Reasons:
You might ask: if even Tesla’s Optimus doesn’t use LiDAR, why would robots need it? The answer lies in the physical essence of their use cases:
- 360° Perception vs. “Forward Perception”: Cars focus 95% of their attention forward. Robots need to walk, grasp objects, avoid pets, and navigate furniture—requiring omnidirectional perception. This is a natural shortcoming of cameras, which must be multiplied and stitched to create a complete view (see the sketch after this list), whereas a spinning or hemispherical LiDAR inherently provides 360° coverage.
- Millimeter-Level Precision vs. Meter-Level Precision: Cars measure braking distances in meters, while robots require millimeter precision for tasks like handling fragile eggs. Visual depth estimation declines significantly at close range, whereas LiDAR can achieve centimeter or even millimeter-level accuracy.
- Transparent/Reflective/Dark Environments are “Camera Hell”: How much glass is in a home? How many mirrors and dim corners exist? For pure vision robots, these are high-risk zones. LiDAR does not rely on ambient light; it actively emits laser beams, making these environments its stronghold.
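On the first of those points, the coverage arithmetic is simple enough to write down. The lens fields of view and overlap below are hypothetical, chosen only to show why “360° with cameras” quickly becomes a multi-camera rig plus stitching and cross-calibration, while a spinning or hemispherical LiDAR delivers the same horizontal coverage as a single device.

```python
import math

def cameras_needed(horizontal_fov_deg: float, overlap_deg: float = 10.0) -> int:
    """Minimum number of identical cameras to cover 360 degrees horizontally,
    keeping some overlap between neighbors for stitching and calibration."""
    effective = horizontal_fov_deg - overlap_deg
    return math.ceil(360.0 / effective)

for fov in (70, 90, 120):                  # hypothetical lens choices
    print(f"{fov} deg lens -> {cameras_needed(fov)} cameras")
```

Each added camera also brings its own exposure control, calibration drift, and compute load, which is the practical weight behind the word “shortcoming” above.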
This is why Hesai’s JT series mini LiDAR—designed specifically for robots and offering 360° coverage—had delivered over 200,000 units by the end of 2025, with exclusive supply agreements signed with leading brands like Dreame and Mova and more than 10 million units in cumulative orders. The total addressable market (TAM) for LiDAR in the robotics field is estimated to be twice that of the automotive sector.
In the Physical AI era, the role of LiDAR has transformed. An intriguing insight is that even if the final product deployed in robots does not include LiDAR, the training process for these robots will still rely on LiDAR. Why? Because the world model requires high-fidelity 3D spatial data for training. What will you use to collect this data? The answer is the fusion of LiDAR and camera point clouds. Hesai’s Kosmo and Suoteng’s Active Camera are fundamentally focused on this goal—digitizing the physical world to become fuel for AI training.
This is why Hesai has repositioned LiDAR as “physical AI infrastructure.” Even if Tesla’s Optimus ultimately does not carry LiDAR, the data used to train it will likely come from vehicles or robots equipped with LiDAR, or specialized devices like Kosmo.
This isn’t to suggest that “humanoid robots must have LiDAR.” Rather, it indicates that LiDAR has evolved from “a component of vehicles” to “a foundational data production tool for Physical AI.” Its market space and strategic importance have expanded an order of magnitude beyond that of the L2/L3 era.
Temporal Dimension: Projections Across Three Time Slices
By layering the first four dimensions onto a timeline, the picture becomes clearer.
Present (2026): Four Camps Following Their Own Paths
- Pure Vision Consumer Vehicle Camp: Tesla FSD V14 is rolling out globally, with the Cybercab in mass production as of April, followed by Xiaopeng’s GX.
- Fusion Robotaxi Camp: Waymo is expanding into over 20 cities (aiming for 1 million rides per week by 2026), with Nissan targeting L4 production in 2027, all betting on LiDAR.
- Chinese Smart Driving Equalization Camp: BYD, Changan, Huawei, Li Auto, and Xiaomi are fiercely pursuing a fusion route, with LiDAR aggressively integrated into vehicles priced as low as 100,000 yuan.
- European Luxury Retreat Camp: Mercedes, BMW, Volvo, and Polestar have collectively abandoned the L3 + LiDAR route, reverting to Nvidia-based L2++ stacks, some of which still keep a LiDAR in the sensor suite.
- Emerging Physical AI Arena: Tesla’s Optimus and Xiaopeng’s IRON are on a pure vision path; Figure 03, Yushu G1, and Boston Dynamics are adopting a fusion approach; Hesai and Suoteng are shifting focus from vehicles to “physical AI infrastructure.”
At this stage, it’s not about “who wins or loses,” but rather each camp establishing roots in its optimal ecological niche.
Mid-Term (2027-2028): Global Confirmation of the Failure of L3 Consumer Products, with Humanoid Robot Mass Production Commencing
This judgment may seem counterintuitive: by 2027-2028, L3 as a consumer product will begin to be declared a failure worldwide. The rationale: Europe has provided answers in advance (Mercedes and BMW’s withdrawal), the U.S. is skipping directly (Tesla pursuing L2++, Waymo aiming for L4), and while China claims to be moving toward L3, true mass production still revolves around L2++ variants (with the need for a driver to take over, “L3” is not genuine L3).
Simultaneously, the period from 2026 to 2028 will be crucial for humanoid robots transitioning from lab experiments to mass production:
- Tesla’s Optimus Gen 3 is already in pilot production in Fremont, targeting consumer launch in 2027.
- Figure 03’s BotQ factory has a production capacity of 12,000 units per year, aiming for 100,000 units within four years.
- Xiaopeng IRON is set for mass production in 2026, supported by advanced factories and retail outlets.
- Hesai’s ETX 6D full-color LiDAR is slated for mass production in the second half of 2026, while the Kosmo spatial intelligent hardware will launch in the latter half of 2026.
The real race during this phase involves three tracks:
- L2++ Extreme Evolution: End-to-end large models approach driving assistance to a point where “intervention is rarely required”—Tesla, Xiaopeng, Horizon, Huawei, and Momenta are collaborating on this.
- L4 Robotaxi Scale Expansion: Waymo aims for 1 million rides per week by the end of 2026; Xiaopeng, Baidu Apollo, and Nissan (with Wayve) are expanding in their respective markets.
- Physical AI Data Race: world models need vast amounts of 3D spatial data for training, which unlocks the “second curve” for LiDAR companies—Hesai’s and Suoteng’s robot LiDAR shipments are growing faster than their automotive businesses.
Prediction 1: By 2028, the penetration rate of LiDAR in vehicles priced above 150,000 yuan in the Chinese passenger car market will reach 60-70%.
Prediction 2: Consumer-grade L3 “hands-free” products will likely be a minority worldwide. The industry consensus will be—either L2++, or L4; the intermediate tier lacks commercial value.
Prediction 3: By around 2028, the installation volume of LiDAR on humanoid robots will surpass that in vehicles. The claim that “the robotics LiDAR TAM is twice that of automotive” will begin to materialize at this point.
Long-Term (2030+): Divergence of Endgames, but Not a “Who Defeats Whom” Scenario
When the end-to-end large model, sufficient real-world data, and vehicle-side computing power reach a critical threshold (e.g., Xiaopeng’s Turing chip at 2200 TOPS, Li Auto’s “Schumacher” with 40 billion transistors, Tesla’s HW5/6), the day will arrive when pure vision genuinely catches up to fusion solutions. However, by then, the market will likely divide into six forms rather than a binary opposition.
Prediction 4: LiDAR will not “disappear”; instead, it will undergo a role transformation—degrading from “the heart of intelligent driving” to “a safety belt for specific scenarios” (in the automotive sector), while upgrading to “a foundational data production tool for Physical AI” (in robotics and world model contexts). The total industry landscape will not shrink; it will expand and strengthen.
Final Judgment: Do We Need LiDAR or Not?
Returning to the initial question, my answer is layered, regional, and species-specific:
If you are asking about the ultimate technical form—Musk is correct. The true endpoint of universal autonomous driving will undoubtedly be end-to-end, centered on vision. This philosophical victory is unshakeable.
However, if machines are ultimately meant to surpass humans, then some extreme scenarios call for superhuman sensors, not merely human-like ones. If you ask whether smart vehicles in China should carry LiDAR over the next five years—yes, they should. Consumers are willing to pay a little more for hardware, the industry chain has pushed LiDAR costs down to $150-200, and before end-to-end large models truly mature, that spend buys a reasonable safety net and a key anchor in consumer perception.
If you inquire whether Robotaxis should utilize LiDAR—the answer is yes, they must. Among the top ten global Robotaxi companies, nine have opted for LiDAR. When 100% of the responsibility lies with the operator, redundancy is not a cost; it is a lifeline, a form of insurance.
If you ask whether humanoid robots need LiDAR—it depends on the application. The consumer-grade home segment may follow the pure vision (Tesla) path, while industrial and precision-operation scenarios will require LiDAR (the Figure 03 / Yushu / Boston Dynamics path). Yet the training of every robot’s world model will rely on high-fidelity 3D data collected with LiDAR—this is the industry’s underlying logic.
If you ask whether the LiDAR industry will vanish after 2030—no; it will grow larger and stronger. It will shift from being seen as “an automotive component” to “physical AI infrastructure,” which is how companies like Hesai, with the Picasso chip and Kosmo, and Suoteng, with its Active Camera, position themselves at the entrance of embodied intelligence for the next 20 years.
In summary: LiDAR is not the wrong answer; it may be a transitional answer and also a ticket into the next era. Musk has won the philosophical debate, Waymo has triumphed in the Robotaxi sector, Chinese automakers have succeeded in the consumer market’s five-year transition, and European luxury brands have exhibited the wisdom of timely loss mitigation. Companies like Hesai and Suoteng might win the longest game: data infrastructure for the Physical AI era. No one has lost; each has succeeded in their respective scenes.
Original article by NenPower. If reposted, please credit the source: https://nenpower.com/blog/the-role-of-lidar-in-autonomous-vehicles-a-complex-decision-beyond-technology/
