The rapid proliferation of SmallSat constellations is fundamentally reshaping space missions across Earth observation, communications, navigation augmentation, and defense. However, current architectures remain largely constrained by ground-centric data processing models that impose severe penalties in latency and bandwidth consumption and limit operational resilience and autonomy. As onboard sensors generate increasingly large and complex data streams, these limitations have become a primary bottleneck to mission effectiveness.
This paper introduces DCiS (Data Center in Space), a novel architectural paradigm that elevates computation from a satellite payload function to persistent, shared orbital infrastructure. DCiS deploys modular, scalable compute and storage nodes directly in orbit, forming a distributed, virtualized edge-datacenter fabric optimized for SmallSat-class platforms. By hosting compute, storage, and networking resources in space, DCiS enables low-latency on-orbit data processing, artificial intelligence (AI) inference, distributed data fusion, and autonomous decision-making without continuous reliance on ground infrastructure.
The paper presents the DCiS system architecture, building blocks, and concept of operations (CONOPS), emphasizing incremental deployment and graceful scaling from single-node demonstrators to constellation-level fabrics. Quantitative compute, storage, power, and bandwidth envelopes are discussed, demonstrating orders-of-magnitude improvements over traditional SmallSat payload processors and data-handling units while remaining compatible with realistic spacecraft constraints. Particular attention is given to resilience and survivability, showing how spatial redundancy, functional replication, and distributed state synchronization enable fault tolerance and robustness in contested or degraded environments.
DCiS represents a fundamental shift in how space systems are designed and operated: from mission-specific, hardware-defined assets to reprogrammable orbital infrastructure that adopts cloud-level software stacks and usage models. This approach reduces downlink dependence, lowers lifecycle costs, and unlocks new classes of latency-sensitive and autonomy-driven missions. The architecture is directly applicable to both commercial and government constellations, offering a scalable foundation for next-generation space systems.