Theresa Maienschein, M.Sc.
Publications
2025
Rüter, J; Davydov, P; Maienschein, T; Durak, U; Dauer, J
Are Vision-Language Foundation Models Able to Fly? Proceedings Article
In: 2025 AIAA DATC/IEEE 44th Digital Avionics Systems Conference (DASC), 2025.
@inproceedings{rueter2025vlm,
title = {Are Vision-Language Foundation Models Able to Fly?},
author = {J Rüter and P Davydov and T Maienschein and U Durak and J Dauer},
doi = {10.1109/DASC66011.2025.11257290},
year = {2025},
date = {2025-12-02},
booktitle = {2025 AIAA DATC/IEEE 44th Digital Avionics Systems Conference (DASC)},
abstract = {Safe autonomous aircraft require accurate environment perception, which can be achieved through semantic segmentation of camera images. However, training neural networks relies on large, diverse datasets that are often unavailable in aviation. Vision-language foundation models offer a promising alternative, but their accuracy for aviation tasks is an open question as the aerial perspective might not be adequately represented in the original training data. Against this background, this paper investigates the performance of two vision-language foundation models, CLIPSeg and CAT-Seg, on an aerial image dataset. Our experiments show that the models can achieve competitive semantic segmentation performance without aviation-specific training. This paper further examines prompt engineering and discusses challenges of deploying these models in aviation. While certification and runtime constraints pose significant hurdles, our findings suggest that vision-language foundation models have potential for improving environment perception in aviation and may reduce the need for extensive training data in the future.},
keywords = {},
pubstate = {published},
tppubtype = {inproceedings}
}
2024
Rüter, J; Maienschein, T; Schirmer, S; Schopferer, S; Torens, C
Filling the Gaps: Using Synthetic Low-Altitude Aerial Images to Increase Operational Design Domain Coverage Journal Article
In: Sensors, vol. 24, no. 4, 1144, 2024.
@article{rueter2024synthetic,
title = {Filling the Gaps: Using Synthetic Low-Altitude Aerial Images to Increase Operational Design Domain Coverage},
author = {J Rüter and T Maienschein and S Schirmer and S Schopferer and C Torens},
doi = {10.3390/s24041144},
year = {2024},
date = {2024-02-09},
urldate = {2024-02-09},
issuetitle = {AI-Based Object Detection and Tracking in UAVs: Challenges and Research Directions},
journal = {Sensors},
volume = {24},
number = {4},
abstract = {A key necessity for the safe and autonomous flight of Unmanned Aircraft Systems (UAS) is their reliable perception of the environment, for example, to assess the safety of a landing site. For visual perception, Machine Learning (ML) provides state-of-the-art results in terms of performance, but the path to aviation certification has yet to be determined as current regulation and standard documents are not applicable to ML-based components due to their data-defined properties. However, the European Union Aviation Safety Agency (EASA) published the first usable guidance documents that take ML-specific challenges, such as data management and learning assurance, into account. In this paper, an important concept in this context is addressed, namely the Operational Design Domain (ODD) that defines the limitations under which a given ML-based system is designed to operate and function correctly. We investigated whether synthetic data can be used to complement a real-world training dataset which does not cover the whole ODD of an ML-based system component for visual object detection. The use-case in focus is the detection of humans on the ground to assess the safety of landing sites. Synthetic data are generated using the methods proposed in the EASA documents, namely augmentations, stitching and simulation environments. These data are used to augment a real-world dataset to increase ODD coverage during the training of Faster R-CNN object detection models. Our results give insights into the generation techniques and usefulness of synthetic data in the context of increasing ODD coverage. They indicate that the different types of synthetic images vary in their suitability but that augmentations seem to be particularly promising when there is not enough real-world data to cover the whole ODD. By doing so, our results contribute towards the adoption of ML technology in aviation and the reduction of data requirements for ML perception systems.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
2021
Torens, C; Jünger, F; Schirmer, S; Schopferer, S; Maienschein, T; Dauer, J
Machine Learning Verification and Safety for Unmanned Aircraft - A Literature Study Proceedings Article
In: AIAA SCITECH 2022 Forum, 2021.
@inproceedings{torens2021mlverification,
title = {Machine Learning Verification and Safety for Unmanned Aircraft - A Literature Study},
author = {C Torens and F Jünger and S Schirmer and S Schopferer and T Maienschein and J Dauer},
doi = {10.2514/6.2022-1133},
year = {2021},
date = {2021-12-29},
urldate = {2021-12-29},
booktitle = {AIAA SCITECH 2022 Forum},
abstract = {Machine learning (ML) has proven to be the tool of choice for achieving human-like or even super-human performance with automation on specific tasks. As a result, this data-driven approach is currently experiencing massive interest in all industry domains. This increased use also applies to the safety-critical aviation domain. With no human pilot on board, the potential use cases of ML for unmanned aircraft are particularly promising. Even upcoming Urban Air Mobility (UAM) concepts are planning to remove the onboard pilot and instead use ML to support a remote pilot, possibly supervising a fleet of vehicles. However, the verification of ML algorithms is a challenging problem, since established safety standards and assurance methods are not applicable. Thus, this work comprises a literature study on the topic of ML verification and safety. This research paper uses a systematic approach to map and categorize the research and focuses on specific subtopics that are of particular interest in the context of existing guidance documents.},
keywords = {},
pubstate = {published},
tppubtype = {inproceedings}
}