This paper presents the design and preliminary validation of the Lunar Intelligent Navigation via Neural Architecture (LINNA) payload, a technology demonstration integrated into the SelenITA mission, Brazil's first lunar CubeSat. The experiment is motivated by future needs for autonomous state estimation in the Very Low Lunar Orbit (VLLO) regime, where mascon-induced gravitational anomalies challenge traditional inertial-only navigation. The primary goal of LINNA is to evaluate the feasibility of onboard, vision-based positioning as a future auxiliary system for autonomous Guidance, Navigation, and Control (GNC).
LINNA operates as a standalone technological experiment, using a nadir-pointing optical sensor to capture lunar surface data at altitudes between 30 km and 100 km. The methodology centers on an onboard processing pipeline in which a lightweight Neural Network (NN) identifies surface landmarks to provide stable references for a Visual SLAM (Simultaneous Localization and Mapping) algorithm. This architecture will be tested during nominal flight phases, aiming to characterize edge-computing performance in a representative deep-space environment. High-fidelity simulations using synthetic lunar imagery are employed to assess the neural network's reliability under varying lighting conditions without compromising the mission's primary scientific objectives.
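To illustrate the geometric core of such a landmark-to-navigation pipeline, the sketch below shows one standard way (not necessarily LINNA's flight implementation) that matched surface landmarks can yield a spacecraft position fix: each landmark detected by the NN and matched to a lunar map defines a line-of-sight ray, and the position that best intersects all rays is found by linear least squares. All function names, landmark coordinates, and the assumption of a known attitude (e.g., from a star tracker) are illustrative.

```python
import numpy as np

def position_from_landmarks(landmarks, los_dirs):
    """Least-squares spacecraft position from matched lunar landmarks.

    landmarks: (N, 3) map-frame coordinates of matched surface landmarks [km]
    los_dirs:  (N, 3) unit line-of-sight vectors from the spacecraft to each
               landmark, rotated into the map frame (attitude assumed known).

    Each ray constrains the position p via (I - u u^T)(p - l) = 0; summing
    these projector equations gives a 3x3 linear system A p = b.
    """
    A = np.zeros((3, 3))
    b = np.zeros(3)
    for l, u in zip(landmarks, los_dirs):
        P = np.eye(3) - np.outer(u, u)  # projects onto the plane normal to the ray
        A += P
        b += P @ l
    return np.linalg.solve(A, b)

# Hypothetical noiseless check: spacecraft 30 km above a 1737.4 km lunar radius.
p_true = np.array([0.0, 0.0, 1767.4])
landmarks = np.array([[50.0, 0.0, 1736.7],
                      [0.0, 60.0, 1736.4],
                      [-40.0, -30.0, 1736.9]])
los = landmarks - p_true
los /= np.linalg.norm(los, axis=1, keepdims=True)
p_est = position_from_landmarks(landmarks, los)
```

In flight, each such fix would serve as an absolute position update to the Visual SLAM front end, whose frame-to-frame tracking otherwise accumulates drift.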
The experiment focuses on benchmarking the vision-based state estimates against the spacecraft's reference orbital truth data. By quantitatively measuring the vision-based system's ability to provide independent position and altitude updates, the project seeks to establish foundational flight heritage for AI-driven payloads. The expected outcome is a comprehensive characterization of the system's potential to bound inertial drift in perturbed environments. These insights are essential for the future development of scalable and autonomous small satellite missions, contributing to the long-term roadmap of lunar exploration.
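The benchmarking step described above reduces to comparing two time-aligned position series. A minimal sketch of the error statistics such a comparison might report (metric names and the choice of RMSE, maximum error, and per-axis bias are assumptions, not the mission's published figures of merit):

```python
import numpy as np

def navigation_error_stats(est, truth):
    """Position-error statistics for vision-vs-truth benchmarking.

    est, truth: (N, 3) arrays of vision-based position estimates and the
    reference orbital truth, time-aligned and in the same frame [km].
    """
    err = est - truth
    norms = np.linalg.norm(err, axis=1)      # 3D error magnitude per epoch
    return {
        "rmse_km": float(np.sqrt(np.mean(norms ** 2))),
        "max_km": float(norms.max()),
        "bias_km": err.mean(axis=0),          # mean per-axis offset
    }

# Toy example with unit-magnitude errors at every epoch.
truth = np.zeros((4, 3))
est = np.array([[1.0, 0.0, 0.0],
                [0.0, 1.0, 0.0],
                [0.0, 0.0, 1.0],
                [1.0, 0.0, 0.0]])
stats = navigation_error_stats(est, truth)
```

Tracking these statistics over an orbit is one way to quantify how tightly the vision-based updates can bound inertial drift.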