Artificial Intelligence
25 Mar 2025

Optical Flow-Based Egomotion Estimation for Lunar Landing

Autonomous lunar landing is a challenging problem, especially when no prior information about the landing site, such as a Digital Elevation Model (DEM) or depth map, is available. Traditional approaches rely on extensive pre-mapped terrain data or computationally expensive depth estimation techniques. Our research aims to develop a robust, low-computation method for estimating egomotion using only optical flow, making it highly suitable for space missions with constrained resources.

Methodology

The approach leverages optical flow to estimate egomotion in real time, providing reliable motion cues for autonomous landing. Optical flow describes the apparent motion of surface features across sequential images and can be computed efficiently using onboard vision systems. By analyzing this flow, the lander can estimate its rotational velocity fully and its translational velocity up to an overall scale, the well-known ambiguity of monocular vision.
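The relationship between egomotion and optical flow is commonly written via the instantaneous motion-field equations for a calibrated camera with unit focal length. The sketch below is a standard textbook formulation, not necessarily the exact model used in this work; the sign conventions and unit focal length are assumptions.

```python
import numpy as np

def motion_field(x, y, Z, t, w):
    """Instantaneous image velocity (u, v) of a scene point at image
    coordinates (x, y) and depth Z, given camera translational velocity
    t = (tx, ty, tz) and angular velocity w = (wx, wy, wz).
    Standard formulation for a calibrated camera with focal length 1."""
    tx, ty, tz = t
    wx, wy, wz = w
    u = (tz * x - tx) / Z + wx * x * y - wy * (1 + x**2) + wz * y
    v = (tz * y - ty) / Z + wx * (1 + y**2) - wy * x * y - wz * x
    return u, v

# Pure sideways translation: the flow opposes the motion and is scaled
# by inverse depth -- the origin of the monocular scale ambiguity.
print(motion_field(0.0, 0.0, 10.0, (1.0, 0.0, 0.0), (0.0, 0.0, 0.0)))
# -> (-0.1, 0.0)
```

Note that the translational terms appear only divided by the depth Z, which is why some surface model is needed to resolve the scale, as discussed below.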

Instead of relying on a depth map, we assume either a planar or spherical surface model. The surface model constrains the depth at every image location, allowing us to derive an overdetermined system of equations relating the observed flow vectors to the lander's motion; solving this system, for instance in a least-squares sense, yields an accurate egomotion estimate. This reduces computational complexity while maintaining robustness in unknown environments.
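As an illustration of how a surface model turns flow measurements into an overdetermined system, the sketch below assumes a known planar scene expressed as an inverse-depth model linear in the image coordinates. With the plane fixed, each flow vector contributes two equations that are linear in the six egomotion parameters, and the stacked system can be solved by least squares. The plane coefficients, motion values, and equation form are illustrative assumptions, not the paper's actual pipeline.

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumed planar scene: inverse depth rho(x, y) = a + b*x + c*y
# (coefficients are hypothetical, for illustration only).
a, b, c = 0.5, 0.1, 0.05

def flow(x, y, t, w):
    """Motion field induced by egomotion (t, w) on the planar scene."""
    rho = a + b * x + c * y
    tx, ty, tz = t
    wx, wy, wz = w
    u = rho * (tz * x - tx) + wx * x * y - wy * (1 + x**2) + wz * y
    v = rho * (tz * y - ty) + wx * (1 + y**2) - wy * x * y - wz * x
    return u, v

# Synthetic ground-truth egomotion (illustrative values).
t_true = np.array([0.2, -0.1, 0.5])
w_true = np.array([0.01, -0.02, 0.03])

xs = rng.uniform(-0.5, 0.5, 20)
ys = rng.uniform(-0.5, 0.5, 20)

rows, rhs = [], []
for x, y in zip(xs, ys):
    u, v = flow(x, y, t_true, w_true)
    rho = a + b * x + c * y
    # Two equations per flow vector, linear in (tx, ty, tz, wx, wy, wz).
    rows.append([-rho, 0.0, rho * x, x * y, -(1 + x**2), y])
    rows.append([0.0, -rho, rho * y, 1 + y**2, -x * y, -x])
    rhs += [u, v]

# 40 equations, 6 unknowns: solve the overdetermined system.
est, *_ = np.linalg.lstsq(np.array(rows), np.array(rhs), rcond=None)
print(est)  # recovers [0.2, -0.1, 0.5, 0.01, -0.02, 0.03] on noiseless data
```

With noisy real flow, the same least-squares solve averages out measurement error across all tracked features, which is what makes the overdetermined formulation robust at low computational cost.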

We will validate our approach using:

  • Simulated datasets generated with PANGU, a high-fidelity planetary surface simulation tool.
  • Real-world datasets from past lunar missions to test robustness and adaptability.

Relevance for Private Lunar Missions

Our research holds significant value for private companies aiming to develop autonomous lunar landing capabilities. The proposed method:

  • Eliminates the need for pre-mapped terrain data, reducing mission complexity.
  • Minimizes computational cost, making it ideal for embedded space systems.
  • Ensures robust performance in varying lighting conditions and unknown terrain.

Implementation details can be found in our repository, which includes code, datasets, and documentation for replicating the results.
