Atmospheric attenuation and cloud cover remain a primary bottleneck for optical Earth Observation (EO) mission operations, often rendering downlinked data unusable and straining limited bandwidth budgets. This work presents a novel framework that improves the yield of actionable intelligence by integrating real-time atmospheric assessment and image restoration directly into the onboard image processing chain. To overcome the scarcity of representative space-based training data, a custom simulation pipeline was developed to synthesize optical imagery that accounts for sensor noise, geometric distortions, and physically based atmospheric effects. The generated dataset was evaluated using the Structural Similarity Index Measure (SSIM) and used to train lightweight convolutional neural networks for near real-time scene classification, along with an attention-based network for onboard image dehazing. Experimental benchmarking was performed on flight-representative computing architectures, including the Intel Atom E3900 processor and the NVIDIA Jetson Orin Nano. The classification network attained an accuracy of 98.7%, while the dehazing algorithm exhibited high reconstruction fidelity, with an SSIM of 0.91 and a Peak Signal-to-Noise Ratio (PSNR) of 19.69 dB, a 5.48 dB improvement over the raw atmospheric inputs. With a computational complexity of only 5.47 GMACs, the proposed model remains suitable for power-constrained edge processing environments. The framework improves data utility and bandwidth efficiency by filtering out degraded images and enhancing scene interpretability prior to downlink, directly supporting autonomous decision-making in future EO missions. The architecture is planned for on-orbit verification on the upcoming Atmos Space Cargo Phoenix 2.2 technology demonstration mission.
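
The SSIM and PSNR figures quoted above are standard full-reference image-quality metrics. As a point of reference only, the sketch below shows one way such scores might be computed with scikit-image, assuming 8-bit imagery; the function and variable names (score_restoration, dehazed, reference) are illustrative assumptions, not part of the mission software described here.

```python
# Minimal sketch: scoring a dehazed frame against a clear-sky reference.
# Assumes 8-bit arrays of identical shape; names here are hypothetical.
import numpy as np
from skimage.metrics import structural_similarity, peak_signal_noise_ratio

def score_restoration(dehazed: np.ndarray, reference: np.ndarray) -> tuple[float, float]:
    """Return (SSIM, PSNR in dB) for a restored image versus its reference."""
    # SSIM compares local luminance, contrast, and structure (1.0 = identical).
    ssim = structural_similarity(reference, dehazed, channel_axis=-1, data_range=255)
    # PSNR is a log-scale measure of pixel-wise error; higher dB = closer match.
    psnr = peak_signal_noise_ratio(reference, dehazed, data_range=255)
    return ssim, psnr
```

Under this convention, the reported 5.48 dB gain would correspond to the PSNR of the dehazed output minus the PSNR of the unprocessed hazy input, both measured against the same reference scene.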