Generative AI for Space Applications and Technologies: Excerpts
On September 12, 2024, Space in Africa, in partnership with Amazon Web Services (AWS), hosted the eighth edition of its webinar series, “Generative AI for Space Applications and Technologies.” The training workshop was tailored for space professionals eager to harness the power of AI tools to enhance their projects in the space sector.
The webinar provided an in-depth overview of generative AI, explaining its foundational principles and the underlying models that power the technology. It highlighted the AWS services and frameworks available to facilitate the efficient development of generative AI applications and their integration into various domains. Specific focus was given to space mission applications, showcasing use cases where generative AI adds tangible value, such as mission planning and autonomous operations. The session also featured practical demonstrations and best practices, illustrating how generative AI can be implemented effectively to enhance capabilities and outcomes in space-related projects.
The webinar speakers included:
Joy Fasnacht, Principal Solution Architect at AWS Aerospace & Satellite
Ayooluwa Adetola, Programmes & Corporate Communication Associate (Moderator)
Key Takeaways from the Event
Joy opened the discussion by sharing key figures, highlighting that analysts estimate generative AI will increase global GDP by 7% over the next two years and that the generative AI market will reach at least USD 100 billion by 2030. She then introduced generative AI, noting that the massive proliferation of data, the availability of highly scalable computing capacity, and advances in machine learning over time finally enabled generative AI to take shape.
She noted that generative AI leverages the latest advances in machine learning and is primarily powered by foundation models. Joy added that the advantage of foundation models is that they can perform a much wider range of tasks because they contain vast numbers of parameters, which enable them to learn complex concepts. Foundation models can also be applied across many contexts and can be customised with one’s own data to perform domain-specific tasks.
The global space industry is forecast to grow to USD 1 trillion by 2040, and generative AI is expected to accelerate this growth. Joy highlighted key use cases in areas that will drive the growth of the space economy, such as the geospatial and remote sensing sectors. One scenario mentioned was the enhancement of satellite imagery. Joy shared that foundation models are trained on far more data than earlier models, much of it in an unlabelled manner. She stated that foundation models trained on years of satellite imagery, integrated with other data sources, could detect previously undiscovered patterns over longer time scales and generate predictive scenarios, enabling critical applications such as meteorological forecasting and environmental planning.
Joy added that further use cases include improving object identification by augmenting training data sets with synthetically generated data, delivering better query results using natural language over satellite imagery, and keeping imagery secure by detecting doctored images. She highlighted that computer vision models have low fidelity when asked to identify uncommon objects, making manual searches time-consuming. As a solution, she proposed using a text-to-image foundation model to generate simulated satellite images of rare objects with more diversity than previously achievable with image manipulation techniques, resulting in better, testable model performance.
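The augmentation idea can be sketched as a small prompt-building step. This is a minimal illustration, not code shown in the webinar: the object classes, conditions, and prompt template are invented for the example, and the request body follows the general shape of Amazon Titan Image Generator on Bedrock (the exact schema should be checked against current AWS documentation before sending any request).

```python
import json

# Hypothetical rare-object classes and imaging conditions; crossing them
# diversifies the synthetic training set beyond simple image manipulation.
RARE_OBJECTS = ["mobile radar unit", "temporary floating bridge"]
CONDITIONS = ["low sun angle", "light cloud cover", "snow-covered terrain"]

def build_prompts(objects, conditions):
    """Cross every object class with every imaging condition."""
    return [
        f"Overhead satellite image, 0.5 m resolution, of a {obj} under {cond}"
        for obj in objects
        for cond in conditions
    ]

def to_titan_request(prompt, n_images=4):
    """Shape a request body in the style of Amazon Titan Image Generator
    (schema assumed for illustration; verify against the Bedrock docs)."""
    return {
        "taskType": "TEXT_IMAGE",
        "textToImageParams": {"text": prompt},
        "imageGenerationConfig": {
            "numberOfImages": n_images,
            "width": 512,
            "height": 512,
        },
    }

if __name__ == "__main__":
    prompts = build_prompts(RARE_OBJECTS, CONDITIONS)
    bodies = [json.dumps(to_titan_request(p)) for p in prompts]
    print(len(bodies))  # 2 objects x 3 conditions = 6 request bodies
```

Each generated image batch would then be labelled with its prompt's object class and folded into the detection model's training set.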
Turning to natural language queries on images, Joy noted that a challenge for satellite imagery consumers is quickly identifying images relevant to their missions. She proposed a solution using a multi-modal foundation model: users can pose natural language queries to identify points of interest within images, analyse what is occurring in an image, or even track changes across images taken over time, all without manually analysing the entire data set. She added that users can fine-tune existing foundation models with their own tagged data sets for custom mission needs. As an example, she shared two satellite images captured of the flood that struck Yellowstone National Park in the United States two years earlier. Her team loaded the images into a multimodal model, which understands both vision and text, and asked the AI to describe the two images and offer an opinion on the cause of the differences between them. It did so successfully, providing key insights and plausible explanations.
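A query like the Yellowstone example can be assembled as a single user turn containing both images and the question. This sketch follows the message format of the AWS Bedrock Converse API; the model ID and image bytes are placeholders, and the demo in the webinar may have used a different interface.

```python
def build_change_query(image_before, image_after, question):
    """Assemble one user turn holding both satellite images plus the
    natural-language question, in Bedrock Converse message format."""
    return [
        {
            "role": "user",
            "content": [
                {"image": {"format": "png", "source": {"bytes": image_before}}},
                {"image": {"format": "png", "source": {"bytes": image_after}}},
                {"text": question},
            ],
        }
    ]

QUESTION = (
    "Describe each image, then explain the most likely cause of the "
    "differences between them."
)

# Placeholder bytes stand in for the actual PNG files.
messages = build_change_query(b"<png before>", b"<png after>", QUESTION)

# Sending the request would look roughly like this (requires AWS
# credentials and access to a vision-capable model):
# import boto3
# client = boto3.client("bedrock-runtime")
# reply = client.converse(modelId="<vision-capable-model-id>", messages=messages)
```

The model receives both frames in one context, so its answer can reason jointly about what changed between them rather than describing each image in isolation.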
As a customer success story, she shared the example of Degas Limited, a Japanese agri-fintech startup operating in Ghana. To support its large base of smallholder farmers, the field agents who reach local farmers needed to know how to apply regenerative agricultural practices. Degas Limited therefore developed a chatbot based on a large language model that takes natural language prompts and analyses regenerative agriculture manuals, satellite imagery, weather data, and proprietary ground data that Degas collects. In this way, its 130 field agents advise smallholder farmers on optimal regenerative agricultural practices. She added that they have been able to influence and suggest technologies and practices for over 35,000 farmers in Ghana.
Moving on to other use cases, Joy highlighted the impactful role of generative AI in design and engineering: revolutionising spacecraft design with generative design tools for hardware components, identifying innovative and optimal testing scenarios for digital twins, and reducing software development risk and technical debt. For example, BT Group, a multinational telecommunications company, has used Amazon Q Developer, a large language model-based tool, to improve the productivity of its software engineers, automating about 12% of their time-consuming, repetitive tasks. BT Group made the solution available to 1,200 software engineers across the business. It provides 15 to 20 code suggestions per active user per day, and users accept the suggested code at a rate of about 37%. Within the first four months of deployment at BT Group, Q Developer generated over 100,000 lines of code for the software developers there.
Joy then shared a third application area: launch, flight exploration, and operations, noting that these areas can be automated and improved by generative AI technologies. The mission use cases included increasing automated mission planning through optimal flight path generation. She noted that the current generation of flight paths through human expertise can be slow, costly, and labour-intensive. Generative AI could instead define multiple candidate scenarios quickly and at low cost, which could then be tested manually or automatically to find optimal flight paths. Other areas of interest include accelerating operator training with natural language queries against one’s documents and supporting missions with enhanced scenario generation.
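The generate-then-evaluate loop described above can be sketched in a few lines: a generative model proposes many candidate flight paths cheaply, and a deterministic cost function filters them. In this illustration the generator is a random stub standing in for a model call, and the cost term (total path length between 2D waypoints) is invented for the example; a real mission planner would use far richer constraints.

```python
import random

def propose_paths(n_candidates, n_waypoints=5, seed=0):
    """Stub generator: yields candidate paths as lists of (x, y) waypoints.
    In practice this step would be a call to a generative model."""
    rng = random.Random(seed)
    return [
        [(rng.uniform(0, 100), rng.uniform(0, 100)) for _ in range(n_waypoints)]
        for _ in range(n_candidates)
    ]

def path_cost(path):
    """Illustrative cost: total Euclidean distance along the waypoints."""
    return sum(
        ((x2 - x1) ** 2 + (y2 - y1) ** 2) ** 0.5
        for (x1, y1), (x2, y2) in zip(path, path[1:])
    )

def best_path(candidates):
    """Automated 'testing' step: keep the cheapest candidate."""
    return min(candidates, key=path_cost)

candidates = propose_paths(n_candidates=50)
winner = best_path(candidates)
print(f"best cost: {path_cost(winner):.1f} over {len(candidates)} candidates")
```

The point of the pattern is the division of labour: generation is fast and broad, while evaluation is deterministic and auditable, which is what makes the candidates testable "manually or automatically" as described above.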
Joy concluded by emphasising that data is the key differentiator for many companies and organisations, and that generative AI is powerful, with many key applications in the ever-growing space industry.
Please tune in to our YouTube page to watch the entire session and learn more about the applications of generative AI technologies in the space industry and in general use cases.