
1. Executive Summary
Automatic Emergency Braking (AEB) is transitioning from a premium feature to a global regulatory requirement. As vehicle safety expectations rise, the limitations of traditional sensor fusion architectures are becoming increasingly apparent under new regulations and real-world testing conditions. This white paper explores the regulatory landscape for AEB systems, details the sensor-level limitations of current radar- and camera-based solutions, and presents the case for integrating high-resolution solid-state LiDAR based on Newsight Imaging’s NSI9000 sensor. It provides an in-depth analysis of how the NSI9000 addresses current regulations and is positioned to meet future mandates, including nighttime pedestrian detection, adverse-weather robustness, and the trend toward active evasive maneuvers.
2. The Global Regulatory Landscape
The regulatory requirements for AEB are rapidly evolving. In the United States, the National Highway Traffic Safety Administration (NHTSA) finalized FMVSS No. 127 in 2024, mandating AEB on all new light vehicles (up to 10,000 lbs / 4,535 kg) by September 1, 2029. Key performance metrics include:
– Automatic braking for an imminent collision with a lead vehicle at speeds up to 90 mph, with full collision avoidance (no contact) required at speeds up to 62 mph (see the stopping-distance estimate after this list).
– Pedestrian detection and braking up to 45 mph, including in complete darkness.
– Forward Collision Warning (FCW) with visual and auditory alerts.
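A back-of-the-envelope stopping-distance estimate helps illustrate how demanding the no-contact requirement is. The Python sketch below is illustrative only: the 0.9 g braking deceleration and the 250 ms detection-to-brake latency are assumptions, not figures taken from FMVSS No. 127, and the function name is hypothetical.

```python
# Rough stopping-distance estimate for the "no contact up to 62 mph" requirement.
# The 0.9 g deceleration and 250 ms latency are illustrative assumptions, not
# values taken from FMVSS No. 127.

G = 9.81              # gravitational acceleration, m/s^2
DECEL = 0.9 * G       # assumed peak braking deceleration, m/s^2
LATENCY_S = 0.250     # assumed detection-to-brake latency, s

def stopping_distance_m(speed_mph: float) -> float:
    """Distance covered during the latency window plus a constant-deceleration stop."""
    v = speed_mph * 0.44704                      # mph -> m/s
    return v * LATENCY_S + v ** 2 / (2 * DECEL)

for mph in (45, 62, 90):
    print(f"{mph} mph -> ~{stopping_distance_m(mph):.0f} m to a full stop")
```

Under these assumptions, avoiding contact from 62 mph consumes on the order of 50 m, which frames how much reliable detection range the sensing suite must deliver.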
Other global regions are following:
– Europe (Euro NCAP): Awards points for pedestrian and cyclist detection, and for systems that include emergency steering assist.
– Japan/Korea: Mandating vehicle and vulnerable road user (VRU) detection with a growing focus on nighttime performance.
Future trends point toward mandatory autonomous steering interventions, cross-junction awareness, and more accurate classification of non-conventional obstacles. As regulators push for more robust capabilities, OEMs must incorporate sensors that can handle complex environments, variable lighting, and diverse object types.
While traditional radar and camera-based systems are widely used today, regulatory expectations and real-world conditions are pushing for the inclusion of complementary technologies such as LiDAR. High-resolution LiDAR sensors provide consistent performance in low-light and adverse weather conditions and are becoming increasingly cost-effective. LiDAR integration is encouraged by several advanced testing protocols, such as those evaluated by the European New Car Assessment Programme (Euro NCAP). Several industry publications, including SAE International (https://www.sae.org/news/2023/07/ncap-lidar-future), suggest LiDAR may soon be critical to achieving full AEB compliance under evolving standards.
3. What is an AEB System?
Automatic Emergency Braking (AEB) is an advanced safety feature that continuously monitors the road ahead using radar, cameras, and other sensors. When a potential collision is detected and the driver fails to respond, the system automatically applies the brakes to prevent or mitigate impact.
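The decision logic can be made concrete with a minimal time-to-collision (TTC) sketch. The thresholds, the Track structure, and the decide function below are illustrative assumptions only; production AEB controllers use far more elaborate tracking, prediction, and arbitration.

```python
# Minimal illustration of an AEB decision step based on time-to-collision (TTC).
# Thresholds are assumed, illustrative values, not figures from any standard.

from dataclasses import dataclass

TTC_WARN_S = 2.0    # issue a forward collision warning below this TTC (assumed)
TTC_BRAKE_S = 1.2   # trigger emergency braking below this TTC (assumed)

@dataclass
class Track:
    range_m: float             # distance to the tracked object
    closing_speed_mps: float   # positive when the gap is shrinking

def decide(track: Track, driver_braking: bool) -> str:
    """Return the AEB action for one tracked object."""
    if track.closing_speed_mps <= 0:
        return "no_action"                          # not on a collision course
    ttc = track.range_m / track.closing_speed_mps
    if ttc < TTC_BRAKE_S and not driver_braking:
        return "emergency_brake"
    if ttc < TTC_WARN_S:
        return "forward_collision_warning"
    return "no_action"

# An object 25 m ahead while closing at 15 m/s gives a TTC of about 1.7 s.
print(decide(Track(range_m=25.0, closing_speed_mps=15.0), driver_braking=False))
```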
4. Object Recognition and Error Metrics in AEB Systems
Automatic Emergency Braking (AEB) systems rely on advanced object recognition algorithms to detect vehicles, pedestrians, bicycles, and obstacles in real time. Object recognition is achieved using a combination of sensors such as cameras, LiDAR, and radar, together with AI-based software models trained on large datasets. Accurate object classification and distance estimation are critical for triggering warnings or emergency braking in time to avoid collisions; misclassification or delay can lead to safety risks or unnecessary activations.
AEB systems must differentiate between various objects to make accurate decisions:
– Vehicles: moving and stationary
– Pedestrians: walking or running
– Cyclists
– Non-hazardous objects (e.g., plastic bags, road signs)
Failing to distinguish these correctly can lead to false positives (unnecessary braking) or false negatives (failure to brake).
To be considered safe and effective, AEB object recognition must meet specific thresholds (a simple rate-check sketch follows this list):
– Pedestrian detection false negative rate: <10% (ideally <5%)
– False positive rate: <5% to avoid nuisance braking
– Detection range for pedestrians: >20 meters with accurate classification
– Reaction time: typically <250 ms from detection to decision
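The sketch below shows how such error rates are typically computed from validation counts and checked against the thresholds above. The counts are hypothetical and the helper functions are named here only for illustration.

```python
# Compute false negative / false positive rates from validation counts and check
# them against the thresholds listed above. All counts here are hypothetical.

def false_negative_rate(missed: int, actual_positives: int) -> float:
    """Share of real objects (e.g., pedestrians) the system failed to detect."""
    return missed / actual_positives

def false_positive_rate(false_alarms: int, negatives: int) -> float:
    """Share of non-threatening scenes that nonetheless triggered a detection."""
    return false_alarms / negatives

fnr = false_negative_rate(missed=42, actual_positives=1000)   # 4.2 %
fpr = false_positive_rate(false_alarms=31, negatives=1000)    # 3.1 %

assert fnr < 0.10, "pedestrian false negative rate exceeds the 10% threshold"
assert fpr < 0.05, "false positive rate exceeds the 5% nuisance-braking threshold"
print(f"FNR = {fnr:.1%}, FPR = {fpr:.1%}")
```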
5. Sensor Technology Evaluation
Current AEB systems typically rely on sensor fusion involving radar and cameras.
Radar sensors provide robust velocity and range data across a variety of environmental conditions, including fog and rain. However, their angular resolution is limited, making it difficult to differentiate between closely spaced objects such as a pedestrian and a signpost.
Cameras offer high-resolution image data and strong object classification capabilities, especially when paired with machine learning algorithms. However, their performance significantly deteriorates in poor lighting, glare, heavy rain, or fog. Additionally, cameras can misinterpret shadows as physical objects—or fail to distinguish actual obstacles—because they lack inherent depth perception and cannot reliably assess the 3D characteristics of a scene.
These weaknesses can result in false negatives (missed detections of real threats such as a child partially occluded by a vehicle) and false positives (e.g., plastic bags or shadows triggering unnecessary braking). These limitations highlight the critical need for depth-based sensing via LiDAR.
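The arbitration problem can be sketched very simply. In the example below, the per-sensor weights and the treatment of a LiDAR depth return as confirmation are assumptions chosen to show the principle, not a description of any production fusion pipeline.

```python
# Simplified illustration of how a depth sensor helps arbitrate radar/camera
# disagreement. Weights and thresholds are assumed, illustrative values.

from typing import Optional

def threat_confidence(radar_hit: bool,
                      camera_class: Optional[str],
                      lidar_depth_m: Optional[float]) -> float:
    """Combine sensor evidence into a 0..1 confidence that a real obstacle is present."""
    score = 0.0
    if radar_hit:
        score += 0.4                      # radar: reliable range/velocity, poor shape
    if camera_class in ("pedestrian", "cyclist", "vehicle"):
        score += 0.3                      # camera: strong classification, no true depth
    if lidar_depth_m is not None and lidar_depth_m < 60.0:
        score += 0.3                      # LiDAR: confirms a physical object in range
    return round(score, 2)

# A shadow on the road: the camera reports a pedestrian, but radar and LiDAR see nothing.
print(threat_confidence(radar_hit=False, camera_class="pedestrian", lidar_depth_m=None))  # 0.3
# A dark-clothed pedestrian at night: the camera misses, radar and LiDAR still confirm.
print(threat_confidence(radar_hit=True, camera_class=None, lidar_depth_m=22.0))           # 0.7
```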
6. The Case for High-Resolution LiDAR
LiDAR complements radar and camera systems by providing accurate depth perception under all lighting conditions. It enables precise spatial mapping of the environment, including the contour and reflectivity of objects, regardless of ambient light. Unlike cameras, LiDAR does not rely on texture or contrast for detection, and it works even in total darkness.
High-resolution LiDAR systems can distinguish small objects and define their position and movement with accuracy unmatched by radar or camera alone. This becomes essential in complex driving environments involving pedestrians, bicycles, animals, and road debris.
Leading analysts from McKinsey & Company and Yole Développement forecast that LiDAR adoption in ADAS will accelerate between 2025–2030 due to improving performance-to-cost ratios and growing regulatory pressure. See Yole’s automotive LiDAR report (https://www.yolegroup.com/product/report/status-of-the-lidar-industry-for-automotive-2023/).
7. NSI9000: A Disruptive LiDAR Platform
The NSI9000 by Newsight Imaging is a compact, solid-state enhanced Time-of-Flight (eTOF) LiDAR sensor platform that uniquely combines high resolution, high frame rate, and long-range depth sensing – a combination that traditional iTOF and dTOF LiDAR systems typically cannot achieve simultaneously. Key features include:
· High-resolution depth imaging at 1024 × 480 pixels, enabling detailed 3D mapping
· Multi-slice gated imaging with a low duty cycle, segmenting depth data by range to effectively filter fog, rain, and multipath reflections (a conceptual sketch of range-gated merging appears at the end of this section)
· High frame rates and low power consumption, ideal for detecting objects at distances of tens of meters
· Reliable operation in zero ambient light, with outstanding night-time sensitivity
· High reflectivity sensitivity, capable of detecting low-reflectance objects like dark-clothed pedestrians at night
· Proprietary signal processing for enhanced edge detection and depth accuracy
· True per-pixel distance measurement, enabled by eTOF’s simplified, low-cost architecture—a major advantage over complex iTOF/dTOF systems
This makes the NSI9000 an ideal solution for automotive, robotics, and smart infrastructure applications where high performance and affordability are critical.
Unlike traditional mechanical or MEMS-based LiDARs, the NSI9000 is built for integration in cost-sensitive applications without sacrificing resolution or robustness. Its performance characteristics make it a viable candidate for OEMs seeking scalable solutions for advanced driver-assistance systems.
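To illustrate the multi-slice gated imaging concept referenced in the feature list, the sketch below merges per-slice depth maps while skipping slices dominated by diffuse backscatter. The data layout, the backscatter heuristic, and the example values are assumptions made for illustration; they do not describe the NSI9000’s actual signal-processing pipeline.

```python
# Conceptual illustration of multi-slice (range-gated) depth processing. The data
# layout and the backscatter heuristic are assumptions made for illustration; they
# do not describe the NSI9000's actual signal processing.

import numpy as np

def merge_slices(slice_images: list[np.ndarray]) -> np.ndarray:
    """Merge per-slice depth maps (ordered near to far), skipping scatter-like slices.

    slice_images[i] holds per-pixel range (m) for returns inside slice i,
    with NaN where that slice produced no return.
    """
    merged = np.full(slice_images[0].shape, np.nan)
    for img in slice_images:
        valid = np.isfinite(img)
        # Fog or heavy rain tends to fill a near slice with weak, near-uniform returns:
        # if nearly every pixel fires with little depth variation, treat the slice as
        # backscatter rather than a solid object.
        if valid.mean() > 0.95 and np.nanstd(img) < 0.5:
            continue
        take = valid & np.isnan(merged)   # keep the nearest surviving return per pixel
        merged[take] = img[take]
    return merged

# Example: a near slice saturated by fog-like returns and a far slice with one real return.
fog = np.full((4, 4), 6.0)
far = np.full((4, 4), np.nan)
far[1, 2] = 35.0
print(merge_slices([fog, far]))
```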
8. System-Level Advantages
– Nighttime safety: NSI9000 provides high signal-to-noise performance even in total darkness.
– False positive rejection: By measuring true depth and reflectivity contours, the system reduces unnecessary braking for non-threatening objects (see the sketch following this list).
– Environmental resistance: Gated slices minimize interference from rain or fog, avoiding common challenges in camera-based systems.
– Fusion-ready architecture: Seamlessly integrates with existing radar and camera pipelines to enhance AEB decision confidence.
– Autonomous steering potential: High-resolution 3D data supports predictive path planning and obstacle avoidance, paving the way for future Level 3+ ADAS systems.
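As a concrete illustration of the false positive rejection point above, the sketch below filters candidate detections by their measured 3D extent and return strength. The Detection fields and thresholds are illustrative assumptions, not NSI9000 specifications.

```python
# Illustrative filter that rejects detections lacking real 3D extent. Thresholds
# are assumed values for illustration, not NSI9000 specifications.

from dataclasses import dataclass

@dataclass
class Detection:
    height_m: float         # vertical extent of the point cluster
    depth_spread_m: float   # front-to-back thickness of the cluster
    reflectivity: float     # mean return strength, 0..1

def is_braking_relevant(d: Detection) -> bool:
    """Keep objects with solid physical extent; drop flat, wispy, or faint returns."""
    has_body = d.height_m > 0.5 and d.depth_spread_m > 0.05
    has_signal = d.reflectivity > 0.02      # discard speckle-level returns
    return has_body and has_signal

# A pedestrian-like cluster is kept; a fluttering plastic bag is rejected.
print(is_braking_relevant(Detection(height_m=1.6, depth_spread_m=0.30, reflectivity=0.40)))  # True
print(is_braking_relevant(Detection(height_m=0.3, depth_spread_m=0.02, reflectivity=0.15)))  # False
```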
9. Looking Forward: Evolving Standards and Opportunities
Emerging AEB regulations are expanding in scope beyond simple collision avoidance. Future systems are expected to interpret complex urban scenarios, including crosswalks, partially occluded children, construction zones, and erratic traffic behavior. Standards will increasingly require systems to distinguish object types—such as plastic debris, animals, and humans—and respond accordingly.
An example of a future challenge is the detection of a child partially hidden behind a parked SUV—a scenario that demands fine-grained spatial resolution and rapid edge recognition to ensure timely and accurate response.
NSI9000’s ability to generate dense 3D point cloud maps and gated depth data makes it adaptable for next-generation ADAS and autonomous functions.
10. Conclusion
To meet current and future regulatory challenges in automotive safety, OEMs and Tier 1s must evolve their sensor architecture. The NSI9000 LiDAR platform offers a compelling alternative to legacy LiDAR systems, providing performance, scalability, and affordability in a compact and power-efficient solution. By enabling reliable 3D perception and working seamlessly with radar and vision, NSI9000 helps close the performance gap in today’s AEB systems while preparing vehicles for the autonomous features of tomorrow.


