Smarter Crowd Mapping: New LiDAR Reconstruction Method Boosts Real-Time Depth Accuracy for Emergency Evacuations
Doshisha University researchers develop an RGB-guided LiDAR method to improve crowd mapping during disasters. Discover how sparse data becomes dense 3D maps.
By: AXL Media
Published: Feb 27, 2026, 4:07 AM EST
Source: The information in this article was sourced from Doshisha University

The Critical Need for Real-Time Crowd Dynamics
Effective emergency evacuations during natural disasters rely on the ability to identify congestion points and movement patterns in real time. While remote sensing technology has long been part of disaster preparedness, the field has struggled to balance cost with measurement quality. High-density LiDAR systems provide excellent detail but are often too expensive for large-scale urban deployment. Conversely, affordable non-repetitive scanning LiDAR offers a wide field of view but produces sparse point clouds that lack the density and continuity required for reliable situational awareness. To bridge this gap, a research team at Doshisha University has developed a method to reconstruct these sparse measurements into dense, actionable depth maps.
Harnessing RGB-Guided Depth Completion
Lead author Zixuan Zhang and colleagues developed an innovative framework that uses standard color (RGB) images to guide the recovery of missing depth data. By combining computational techniques like confidence-aware bilateral filtering and masked reconstruction, the system can distinguish between actual human silhouettes and background noise. This method prevents "over-smoothing"—a common error where sharp edges like human shoulders are blurred into the background—ensuring that the structural depth of individuals remains distinct. This allows disaster response teams to see a fuller, more accurate picture of crowd density even under realistic sensing constraints.
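The paper itself does not publish code, but the core idea of RGB-guided, confidence-aware bilateral filling can be sketched roughly as follows. In this illustrative example (function names, parameters, and the simple zero-means-missing convention are assumptions, not the authors' implementation), each missing depth pixel is filled by a weighted average of valid neighbours, where the weight combines spatial closeness, RGB similarity, and a per-pixel confidence. Because the colour term collapses to near zero across strong RGB edges, depth from the background does not bleed into a person's silhouette, which is the "over-smoothing" failure mode described above.

```python
import numpy as np

def rgb_guided_depth_fill(depth, rgb, conf, radius=2,
                          sigma_s=2.0, sigma_c=20.0):
    """Fill missing depth (encoded as 0) with a confidence-aware
    bilateral average of valid neighbours, guided by an RGB image.

    depth: (H, W) float array, 0 where the LiDAR gave no return
    rgb:   (H, W, 3) float array, colour guidance image
    conf:  (H, W) float array in [0, 1], per-pixel confidence
    """
    H, W = depth.shape
    out = depth.copy()
    for y in range(H):
        for x in range(W):
            if depth[y, x] > 0:
                continue  # keep measured depth untouched
            num = den = 0.0
            for dy in range(-radius, radius + 1):
                for dx in range(-radius, radius + 1):
                    ny, nx = y + dy, x + dx
                    if not (0 <= ny < H and 0 <= nx < W):
                        continue
                    d = depth[ny, nx]
                    if d <= 0:
                        continue  # neighbour is also missing
                    # spatial closeness term
                    w_s = np.exp(-(dy * dy + dx * dx) / (2 * sigma_s ** 2))
                    # colour similarity term: suppresses depth
                    # bleeding across RGB edges (e.g. a shoulder
                    # against the background)
                    dc = rgb[y, x] - rgb[ny, nx]
                    w_c = np.exp(-np.dot(dc, dc) / (2 * sigma_c ** 2))
                    w = w_s * w_c * conf[ny, nx]
                    num += w * d
                    den += w
            if den > 0:
                out[y, x] = num / den
    return out
```

On a toy scene with a near person (depth 1 m, dark pixels) next to a far background (depth 5 m, bright pixels), a hole on the person's side is filled close to 1 m rather than an average of the two regions, because the colour term down-weights cross-edge neighbours.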
Simulating Reality: Overcoming Data Scarcity
A major obstacle in developing LiDAR technology has been the lack of public datasets that replicate the irregular sampling patterns of affordable sensors. To solve this, the Doshisha team created a custom simulated dataset using 3D human motion data paired with color images. They mimicked the specific rotation and scanning mechanics of non-repetitive LiDAR to generate 30,000 samples. This allowed the researchers to optimize their parameters without needing massive amounts of "real-world" ground-truth data, which is difficult and dangerous to collect during actual emergency events.
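To make the simulation step concrete, here is a minimal sketch of how a non-repetitive (Risley-prism-style "rosette") scan pattern could be imitated to sparsify dense ground-truth depth. The function names, the two rotation rates, and the pattern itself are illustrative assumptions; the Doshisha team's actual scanning model is not reproduced here. Two incommensurate rotation frequencies trace a petal pattern that never exactly repeats, so successive simulated frames sample different pixels, much like the affordable sensors discussed above.

```python
import numpy as np

def rosette_mask(height, width, n_points=3000,
                 f1=7.0, f2=11.0, seed=0):
    """Boolean sampling mask imitating a non-repetitive
    (rosette) LiDAR scan pattern over one frame.

    Superposing two circular motions with incommensurate
    rates f1 and f2 yields a petal-shaped trajectory that
    drifts between frames instead of retracing itself.
    """
    rng = np.random.default_rng(seed)
    # random phase offset so each seeded frame starts elsewhere
    t = np.linspace(0, 2 * np.pi, n_points) + rng.uniform(0, 2 * np.pi)
    # sum of the two prism rotations; values lie in [-2, 2]
    u = np.cos(f1 * t) + np.cos(f2 * t)
    v = np.sin(f1 * t) + np.sin(f2 * t)
    # map trajectory coordinates onto the pixel grid
    xs = ((u + 2) / 4 * (width - 1)).round().astype(int)
    ys = ((v + 2) / 4 * (height - 1)).round().astype(int)
    mask = np.zeros((height, width), dtype=bool)
    mask[ys, xs] = True
    return mask

def sparsify(dense_depth, mask):
    """Keep depth only where the scan pattern sampled it."""
    return np.where(mask, dense_depth, 0.0)
```

Applying `sparsify` to rendered dense depth from 3D human motion data, frame by frame with varying phase, would produce training pairs of (sparse input, dense target) in the spirit of the 30,000-sample dataset described above.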