Mathematical Sciences and Systems Engineering
An Implementation of 3D Gaussian Splatting for Characterizing Satellite Geometries
Team Leader(s)
Emma Sandidge
Team Member(s)
Emma Sandidge
Faculty Advisor
Dr. Ryan T. White
Project Summary
As the number of cooperative and non-cooperative spacecraft in orbit grows, so does interest in developing autonomous chaser satellites for on-orbit servicing, active debris removal, and satellite inspection. Performing these operations requires accurate estimation and identification of satellite geometry. This project presents an implementation of 3D Gaussian Splatting for mapping satellite geometries. We share training methods and the 3D rendering capabilities of the model using a realistic satellite mock-up tested across several realistic lighting conditions. We present training and rendering metrics, along with comparisons to past 3D reconstruction methods. Our model is capable of training on board and produces high-quality renders of novel views of an unknown satellite. We achieve a rendering speed nearly two orders of magnitude faster than previous neural radiance field (NeRF) based methods. These capabilities are crucial for subsequent machine intelligence tasks involving autonomous navigation and control.
Project Objective
Our goal is to identify and reconstruct the geometry of an unknown satellite from a single video feed, using a low-compute algorithm that can be implemented onboard a spacecraft.
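As a rough illustration of this kind of input pipeline (not the project's actual preprocessing code), the sketch below samples frames from a single video feed with OpenCV to form a training image set; the video path, sampling stride, and output directory are placeholder assumptions.

# Minimal sketch: sample training frames from a single video feed with OpenCV.
# The video path, stride, and output directory are illustrative placeholders.
import cv2
from pathlib import Path

def extract_frames(video_path: str, out_dir: str, stride: int = 10) -> int:
    """Save every `stride`-th frame of the video as a PNG and return the count."""
    out = Path(out_dir)
    out.mkdir(parents=True, exist_ok=True)
    cap = cv2.VideoCapture(video_path)
    saved, idx = 0, 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if idx % stride == 0:
            cv2.imwrite(str(out / f"frame_{saved:05d}.png"), frame)
            saved += 1
        idx += 1
    cap.release()
    return saved

if __name__ == "__main__":
    n = extract_frames("satellite_inspection.mp4", "train_images", stride=10)
    print(f"Saved {n} frames")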
Analysis
We analyze the performance of the model using standard metrics for generative modeling: Structural Similarity Index (SSIM), Peak Signal-to-Noise Ratio (PSNR), and Learned Perceptual Image Patch Similarity (LPIPS), each evaluated on images of the satellite mock-up not used during training. SSIM measures the perceived difference between two images in qualities such as luminance and contrast; higher SSIM indicates better performance. PSNR measures image quality at the pixel level; higher PSNR indicates better performance. LPIPS is a more sophisticated metric that aims to capture human-perceived similarity between two images: it uses a VGG neural network to compute the distance between a real and a synthetic image patch, and lower LPIPS indicates the two images are more similar. For rendering performance and computational requirements, we also analyze training time, rendering frame rates, and VRAM usage for both training and rendering. All metrics are measured on a single NVIDIA RTX 3080 Ti GPU.
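As a hedged sketch of how these image metrics could be computed (not the project's actual evaluation script), the example below uses scikit-image for SSIM and PSNR and the standard lpips package with a VGG backbone for LPIPS on one rendered/ground-truth pair; the file names and image value ranges are assumptions.

# Minimal sketch: SSIM, PSNR, and LPIPS on one held-out view
# (assumed file names and image sizes; illustrative only).
import numpy as np
import torch
import lpips                      # pip install lpips (VGG-based perceptual metric)
from skimage import io
from skimage.metrics import structural_similarity, peak_signal_noise_ratio

gt = io.imread("gt_view.png").astype(np.float32) / 255.0          # ground-truth test image
render = io.imread("render_view.png").astype(np.float32) / 255.0  # Gaussian-splat render

# SSIM and PSNR on [0, 1] images (higher is better for both).
ssim = structural_similarity(gt, render, channel_axis=-1, data_range=1.0)
psnr = peak_signal_noise_ratio(gt, render, data_range=1.0)

# LPIPS with a VGG backbone expects NCHW tensors scaled to [-1, 1] (lower is better).
to_tensor = lambda im: torch.from_numpy(im).permute(2, 0, 1).unsqueeze(0) * 2.0 - 1.0
lpips_vgg = lpips.LPIPS(net="vgg")
lpips_score = lpips_vgg(to_tensor(gt), to_tensor(render)).item()

print(f"SSIM={ssim:.3f}  PSNR={psnr:.2f} dB  LPIPS={lpips_score:.3f}")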
Future Works
Future plans for this 3D reconstruction model involve incorporating novel rendered views into a YOLOv5 object detector for more accurate, reliable, and precise detection of satellite components.
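As a sketch of what this downstream step might look like (assumed weights and image paths, not an existing component of the project), rendered novel views could be passed to a YOLOv5 model loaded through torch.hub; a detector for satellite components would of course require custom-trained weights rather than the COCO model used here.

# Sketch: run a YOLOv5 detector on rendered novel views (placeholder paths/weights).
import torch

# Load a pretrained YOLOv5 model from the Ultralytics hub (COCO weights here).
model = torch.hub.load("ultralytics/yolov5", "yolov5s", pretrained=True)

# Rendered novel views produced by the Gaussian Splatting model (placeholder paths).
rendered_views = ["render_000.png", "render_001.png", "render_002.png"]

results = model(rendered_views)        # batched inference
results.print()                        # per-image detection summary
detections = results.pandas().xyxy[0]  # bounding boxes for the first view as a DataFrame
print(detections[["name", "confidence"]])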
UN Sustainable Development Goals Dependence on Inflation
Team Member(s)
Annika Leiseth
Faculty Advisor
Ryan White