Researchers at Stony Brook University have created an artificial intelligence system designed to generate realistic, three-dimensional videos of the surface of Mars. The development aims to improve how space agencies simulate Martian exploration and plan for future missions.
Chenyu You, assistant professor in the Department of Applied Mathematics and Statistics and the Department of Computer Science at Stony Brook University, led the project, called Martian World Models. According to You, existing AI models trained on Earth imagery struggle to interpret data from Mars because of differences in lighting, textures, and geometry. “Mars data is messy,” said You. “The lighting is harsh, textures repeat, and rover images are often noisy. We had to clean and align every frame to make sure the geometry was accurate.”
To address these challenges, the team developed a specialized data engine named M3arsSynth. This tool reconstructs physically accurate 3D models of Martian terrain by processing pairs of photographs captured by NASA rovers. By calculating depth and scale from these images, M3arsSynth builds digital landscapes that closely match Mars’s actual structure.
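The article does not describe M3arsSynth's internals, but recovering depth from pairs of rover photographs is classically done with stereo triangulation. As a rough illustration only, the sketch below converts a stereo disparity map to metric depth using the standard relation Z = f·B/d; the camera parameters and the function name are hypothetical, not taken from the project.

```python
import numpy as np

def disparity_to_depth(disparity, focal_length_px, baseline_m):
    """Convert a stereo disparity map (pixels) to metric depth via Z = f * B / d.

    disparity       -- per-pixel horizontal offset between the two views
    focal_length_px -- camera focal length expressed in pixels
    baseline_m      -- distance between the two camera centers, in meters
    Pixels with zero (invalid) disparity are mapped to infinite depth.
    """
    depth = np.full_like(disparity, np.inf, dtype=float)
    valid = disparity > 0
    depth[valid] = focal_length_px * baseline_m / disparity[valid]
    return depth

# Toy 2x2 disparity map with made-up camera parameters (illustrative only).
disp = np.array([[10.0, 20.0],
                 [0.0, 40.0]])
depth = disparity_to_depth(disp, focal_length_px=800.0, baseline_m=0.42)
# Nearer surfaces produce larger disparities, hence smaller depth values.
```

In a real pipeline, the disparity map itself would come from a stereo-matching step over the rectified image pair, and the resulting depth maps would then be fused into a 3D terrain model.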
These reconstructions form the basis for MarsGen, an AI model that generates new videos of Mars from a single frame, a text prompt, or a specified camera path. The resulting video sequences exhibit both visual detail and physical realism. As You explained: “We’re not just making something that looks like Mars. We’re recreating a living Martian world on Earth — an environment that thinks, breathes, and behaves like the real thing.”
Further details about this research can be found in a story by Ankita Nagpal on the AI Innovation Institute website.