Theses
Bachelor
ifgicopter: UAV / UAS (drone) remote sensing/GIS: Vegetation-specific geodata analysis/workflows
Topic: Within the joint IFGIcopter and ILÖK UAV initiative, vegetation-specific remote sensing data from a wide variety of UAV sensors (drones) are continuously acquired and analysed. Particular focus is placed on capturing and analysing vegetation patterns, vitality parameters and invasive species using multispectral UAS data. In this context, data processing and visualisation (including 3D) with a wide range of geoinformatics tools (GIS, commercial software, web tools, custom programming, etc.) play a major role. Anyone interested in an interdisciplinary research question in this area should get in touch with the two contact persons [2017].
Contact persons: Torsten Prinz / Jan Lehmann
Contact: Torsten Prinz
SIL: Visualisation of spatio-temporal sensor data on the openSenseMap
Contact: Thomas Bartoschek
SITCOM: QGIS plugin for automating the documentation of map making
Reproducibility is a core element of the scientific method.
In the Geosciences, the insights derived from geodata are frequently communicated through maps, and the computational methods to create these maps vary in their ease of reproduction.
While GIS desktop applications (e.g., QGIS, ArcGIS) are widely used by professionals and researchers in the Geosciences for map production, they may hinder map reproducibility as the details of the map making process become more challenging to document.
For this thesis, a QGIS plugin will be developed that automates the documentation of the datasets, spatial operations and other metadata used to produce a map.
The plugin should output a structured JSON file that ties together all the necessary components, steps and information for the (re)production of a map within the environment of QGIS, in order to facilitate the reproducibility of maps that were not created programmatically.
The plugin will be developed in Python (unless the student feels comfortable with C++).
QGIS plugins for reference: https://plugins.qgis.org/plugins/MetadataDbLinker/, https://plugins.qgis.org/plugins/mapexport/, https://plugins.qgis.org/plugins/project_report/
Structured schemas for reference: https://schema.org/Map, https://schema.org/SoftwareApplication, https://www.researchobject.org/ro-crate/
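To make the idea concrete, here is a minimal sketch of how such a plugin could collect layer metadata with the PyQGIS API and serialise it to JSON; the output schema and file name are assumptions for illustration, not a specification:

```python
# Minimal sketch (PyQGIS, run inside QGIS 3): collect basic layer metadata
# and write it to a JSON file. The output schema is an assumption, not a
# finished design for the plugin.
import json
from qgis.core import QgsProject

def export_map_documentation(path):
    project = QgsProject.instance()
    doc = {
        "project": project.fileName(),
        "crs": project.crs().authid(),
        "layers": [],
    }
    for layer in project.mapLayers().values():
        doc["layers"].append({
            "name": layer.name(),
            "source": layer.source(),        # dataset path / connection string
            "crs": layer.crs().authid(),
            "type": str(layer.type()),       # layer type enum as string
        })
    with open(path, "w") as f:
        json.dump(doc, f, indent=2)

export_map_documentation("map_documentation.json")
```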
Contact: Eftychia Koukouraki
SITCOM: Exploring forms of interaction in the Immersive Video Environment (IVE)
The IVE displays panoramic video footage on large screens in a cave-like environment, which creates a sense of physical presence and enables people to better interact with and intervene in the image of their surroundings. However, the forms of interaction are very limited - yet needed - when users want to create objects on the screen as video overlays and interact with them (e.g., adding and scaling a tree, modifying a building façade, etc.).
This study aims to explore forms of interaction in the IVE for effectively creating and interacting with such overlays. The student will work with the IVE system in the SITCOM Lab at ifgi. The study will cover the following steps: creating video footage and overlays for the IVE, exploring input tools (e.g., smartphone, HTC Vive controllers, touch pad), and designing the forms of interaction (e.g., adding, removing, scaling, rotating, placing overlays) and the UI components.
Contact: Simge Özdal Oktay
SIL: Spatial movement behaviour
Contact: Angela Schwering
SPARC: 3D SketchMaps: A VR tool for understanding spatial knowledge in the vertical
Sketch maps are traditionally drawn on a flat sheet of paper. However, with accessible, consumer-grade Virtual Reality devices it is now very easy to put on a VR headset and "draw in the air". This might be particularly useful in situations where you want to communicate how the environment looks in the vertical. For instance, when you want to draw the path of a drone flying above some landscape, describe to another person how to navigate a multi-level shopping mall, or explain to a friend how to get out of a metro station near your home.
The goal of this thesis is to understand the potential of 3D sketch maps in VR as a tool for communicating spatial knowledge.
The thesis can be approached from two perspectives:
From the technological perspective, it is possible to design new ways of drawing 3D sketch maps in Virtual Reality and compare them to traditional paper sketch maps. It is also interesting to explore how complex 3D sketch maps can be analysed systematically and what tools we can design to support the analysis of such (sometimes very complex and very messy) 3D drawings.
From the research perspective, it is necessary to conduct user experiments to understand whether people can make good use of this new possibility offered by VR, or whether they always find it more intuitive to draw on paper. It is also important to identify contexts in which 3D sketch maps are necessary - perhaps for most navigational scenarios a sheet of paper is sufficient? If so, what are the specific cases (aviation, complex buildings) where 3D sketch maps are necessary?
One straightforward way to study this is to ask participants to play a VR game where vertical navigation is important (virtual scuba diving, a submarine simulator, flying a Star Wars battleship, a drone, or a passenger plane, downhill skiing) and ask them to draw sketch maps based on this experience.
Contact: Jakub Krukar
SPARC: Controlling the optic flow of Virtual Reality systems
Contact: Jakub Krukar
CVMLS: Current machine learning methods in image analysis
Hardly any other field of computer science is developing as rapidly as machine learning. With over 100 publications per day, it is becoming increasingly difficult to keep track of the image analysis methods relevant to geoinformatics. For this reason, selected machine learning methods will be worked through in this project and applied to various image analysis tasks. The goal is to get to know a specific class of machine learning algorithms and to place the method in the context of existing algorithms.
Contact: Benjamin Risse
SPARC: Navigating computer games
Contact: Jakub Krukar
Earth Observation (EO) data cubes are multidimensional arrays of spatial and temporal data, crucial for monitoring and analyzing environmental changes. Machine learning (ML) models applied to EO data cubes enable advanced data analysis and predictive capabilities. However, the diversity of programming languages used in the spatial data science and geoinformatics community, particularly R and Python, poses challenges for interoperability and reproducibility of these ML models.
The outcomes of this research are expected to facilitate smoother integration and collaboration among spatial data scientists and geoinformatics professionals who rely on different programming environments, promoting the reproducibility and interoperability of EO data analysis projects. This work will contribute to the broader goal of advancing geospatial data science by bridging the gap between diverse computational ecosystems.
Use Case:
Carrying out spatial-temporal analysis, such as time-series crop classification in Germany, leveraging the ONNX interoperability format: https://onnx.ai/
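As a hedged starting point for the model-portability question, the Python side could look roughly like the sketch below, which trains a scikit-learn SVM on placeholder features and exports it with skl2onnx; the feature dimensionality, class count and file name are assumptions, not part of the topic description:

```python
# Minimal sketch: train an SVM on tabular features extracted from an EO data
# cube (placeholder random data here) and export it to ONNX so it can be
# loaded from R or other ONNX runtimes.
import numpy as np
from sklearn.svm import SVC
from skl2onnx import convert_sklearn
from skl2onnx.common.data_types import FloatTensorType

n_bands = 10                      # e.g. per-pixel time-series features
X = np.random.rand(500, n_bands).astype(np.float32)
y = np.random.randint(0, 3, 500)  # three hypothetical crop classes

model = SVC(kernel="rbf", probability=True).fit(X, y)

onnx_model = convert_sklearn(
    model, initial_types=[("input", FloatTensorType([None, n_bands]))]
)
with open("svm_crops.onnx", "wb") as f:
    f.write(onnx_model.SerializeToString())
```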
- Model Portability: How can a deterministic machine learning model, such as Support Vector Machine (SVM), be trained on Earth Observation data cubes in Python and then ported to R using the ONNX format?
- Performance Evaluation: What are the differences in performance and accuracy of the SVM model when ported from Python to R for time-series crop classification in Germany?
- Interoperability Challenges: What are the challenges and potential solutions in ensuring interoperability and reproducibility of machine learning models between Python and R programming environments using ONNX?
- What is the feasibility of implementing identical deep learning models for Earth Observation data cubes in R, Python, and Julia, and ensuring their interoperability?
- How do the available tools and libraries for machine learning in R, Python, and Julia compare in terms of ease of use, performance, and integration with EO data cubes?
- What are the differences in command structure and interface among R, Python, and Julia for machine learning tasks related to EO data cubes, and how do these differences impact the reproducibility and interoperability of the models?
Please contact:
Brian Pondi brian.pondi@uni-muenster.de
and
Edzer Pebesma edzer.pebesma@uni-muenster.de
Contact: Brian Pondi
ifgicopter: Redesign of the StudMap14 geodata portal
The ZDM/IVV, in cooperation with ifgi, is looking for a BSc candidate in Geoinformatics for an innovative "redesign" of the StudMap14 geodata portal (http://gdione4all.uni-muenster.de/joomla/index.php/studmap14).
Knowledge of, or willingness to familiarise yourself with, the GeoServer environment and an interest in modern SDI solutions are prerequisites for this BSc project (funding with 5 student assistant (SHK) hours for 6 months may be possible).
If you are interested, please contact Dr. Torsten Prinz directly!
Contact: Torsten Prinz
SITCOM: Location-based fact checking (topic cluster)
Contact: Christian Kray
The openSenseMap is a platform for environmental sensor data from measuring stations of all kinds. Currently, only raw data from senseBoxes are stored, and the data can only be displayed per individual senseBox. In addition, there is an option to display the collected data as an interpolation for a single point in time.
The goal of this thesis is to develop a portal for the openSenseMap in which users can compare multiple senseBoxes and sensors using statistical methods and integrate external data sources, e.g. from the DWD.
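A hedged sketch of how such a comparison could start, using the public openSenseMap REST API from Python; the endpoint layout and response fields are assumptions based on the API documentation and should be verified:

```python
# Minimal sketch: fetch measurements of two senseBox sensors from the
# openSenseMap API and compare them with basic statistics.
# Endpoint layout is an assumption; check https://docs.opensensemap.org/.
import requests
import statistics

API = "https://api.opensensemap.org"

def measurements(box_id, sensor_id):
    url = f"{API}/boxes/{box_id}/data/{sensor_id}"
    values = requests.get(url, timeout=10).json()
    return [float(m["value"]) for m in values]

def compare(box_a, sensor_a, box_b, sensor_b):
    a = measurements(box_a, sensor_a)
    b = measurements(box_b, sensor_b)
    return {
        "mean_a": statistics.mean(a),
        "mean_b": statistics.mean(b),
        "difference": statistics.mean(a) - statistics.mean(b),
    }
```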
Contact: Thomas Bartoschek
SPARC: Measuring spatial knowledge in VR compared to real-world environments
Virtual Reality is known to cause distortions in our perception of distance. Things inside VR seem closer than they would be in reality. This is an interesting problem because many wayfinding studies use VR as a substitute for real-life environments. Yet, if we know that distance estimations are consistently distorted in VR, can we trust VR results obtained with other methods for measuring spatial knowledge?
The goal of this thesis is to compare different methods of measuring spatial knowledge, in particular:
- sketch maps (qualitative and metric-based analyses)
- distance estimation tasks
- pointing tasks
- perspective taking tasks
across the Virtual Reality environment and corresponding real life environment.
The hypothesis is that some of these methods work equally well in VR and in real life environments, while others should not be used in VR. For example, our analysis from the paper below suggests that people who explore the same building in VR and in the real world draw equally good sketch maps, even though their distance estimation is distorted in VR.
Contact: Jakub Krukar
SIL: Geovisualisation of open environmental data on the web
The openSenseMap provides live data on a wide variety of environmental phenomena. However, it is currently difficult to explore these data. The goal of this Bachelor thesis would be to create new ways of presenting the data interactively. Interesting examples would be live interpolations of particulate matter values or the temperature development in city centres at the height of summer. To make these data available to the widest possible audience, this thesis should investigate what possibilities the latest web technologies offer. Several visualisations should be generated and evaluated in a user study.
Contact: Thomas Bartoschek
SIL: Quality assurance of crowd-sourced sensor data
This thesis will investigate to what extent quality assurance of crowd-sourced sensor data in a sensor network can be automated. This is a new and highly relevant research field: large amounts of data allow the application of statistical or machine learning methods. Traditional methods are often not usable because the data must be available in real time. In addition, crowd-sourced data pose a special challenge, since it cannot be assumed that all data were collected with correct or consistent measurement procedures. Finally, low-cost sensors themselves exhibit measurement errors that deviate strongly from those of professional sensors, or measuring stations are poorly mounted by citizen scientists. The goal is to investigate the factors influencing data quality and the measurement accuracy of the sensors, to develop methods for the automated identification of erroneous data and possible sources of error, and to make automated decisions about whether data can be corrected (e.g., by recalibrating the sensors) or whether certain data should be excluded.
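One simple entry point for automated checks is a rolling z-score test over each sensor's stream, sketched below; the window size and threshold are arbitrary assumptions, and real data would require per-phenomenon tuning and cross-checks with neighbouring stations:

```python
# Minimal sketch: flag measurements whose rolling z-score exceeds a threshold.
# Window size and threshold are arbitrary placeholders.
from collections import deque
import statistics

def rolling_outliers(values, window=20, threshold=3.0):
    buffer = deque(maxlen=window)
    flags = []
    for v in values:
        if len(buffer) >= 5 and statistics.stdev(buffer) > 0:
            z = abs(v - statistics.mean(buffer)) / statistics.stdev(buffer)
            flags.append(z > threshold)
        else:
            flags.append(False)   # not enough history yet
        buffer.append(v)
    return flags

# Example: a sudden spike in an otherwise stable temperature series
print(rolling_outliers([20.1, 20.3, 20.2, 20.4, 20.2, 20.3, 55.0, 20.2]))
```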
Contact: Thomas Bartoschek
SIL: Hybrid participation in urban planning projects
Contact: Christian Kray
SITCOM: AR evaluation toolkit
Augmented Reality applications are widely available now and expected to increase further in the future. They are, however, difficult to evaluate effectively as they strongly depend on interacting with their environment. For example, to assess the effectiveness and usability of a particular user interface or overlay, it is important to consider how it interacts with what people see around them. Are users able to connect the overlay to the corresponding real-world object? Can they interact with the UI elements while being in the actual environment?
The goal of this thesis is to develop and evaluate an evaluation toolkit for AR applications, which allows for systematic, repeatable and low-effort evaluation. The approach to investigate here is to use a virtual environment (such as the Immersive Video Environment at ifgi) and to “trick” an app into believing it is located at the site shown by the virtual environment. This ensures a controlled environment and thus allows for a systematic evaluation of AR applications.
Contact: Christian Kray
SIL: Development and evaluation of dynamic learning tutorials with a gamification approach
Contact: Thomas Bartoschek
SITCOM: SITCOM topics (general information)
Research in SITCOM generally focusses on enabling all kinds of users to solve real-world problems using spatial information. Have a look at the group's web page for more details and example projects.
If you are generally interested in this area or have an idea of a thesis topic that falls into that area, feel free to get in touch with one of the current members of SITCOM.
Contact: Christian Kray
SIL: User Centered Design for educational WebGIS
With WebGIS NRW (webgis.nrw), a prototypical WebGIS for the educational context exists that is based on modern open-source technologies (Mapbox GL). The goal of this thesis is the further development of the WebGIS according to User Centered Design principles and an evaluation of its usability.
Contact: Thomas Bartoschek
SIL: Explorative analysis of open-source hardware sensors
In this Bachelor thesis, new sensor components for environmental phenomena (e.g., wind, water, radioactivity) are to be identified for the senseBox, integrated into the senseBox ecosystem of open-source hardware, the openSenseMap spatial data infrastructure and the Blockly programming environment, and evaluated.
Contact: Thomas Bartoschek
SPARC: Generalisation in Sketch Maps
When people draw sketch maps, they generalise information compared to the ground-truth information they perceived in the world. For example, many buildings belonging to a university campus are drawn as a single polygon labelled "campus".
This is a challenge for analysing sketch maps because this information is not wrong, yet a computer system for automated analysis would interpret it as such.
In the paper linked below we presented a classification of generalisation types in sketch maps. We also have a working software prototype for analysing generalisation in sketch maps.
In this thesis you will test the impact of one (chosen) variable on the level of generalisation. Sample research questions:
- If we ask people to draw an area of a different size, do they start to generalise more?
- Do people generalise important streets more/less compared to less important ones (e.g., accounting for the "integration" Space Syntax metric)?
- If we give people less time to draw, do they generalise or omit information?
- If we ask people to draw the sketch map with a different task in mind (e.g., walking through a campus vs. walking near a campus vs. walking away from a campus), are the generalisations different?
Contact: Jakub Krukar and Angela Schwering
SII: UX analysis of spatial data infrastructures
The development of spatial data infrastructures aims to noticeably improve the availability and usability of geodata for a wide range of applications. Although every strategy paper states that user requirements should be at the centre of this development, in practice it is strongly supply-driven, without systematic consideration of user requirements and corresponding success monitoring.
Using the GDI NRW as an example, this thesis will show how a lightweight, focused UX analysis can produce a clear picture of the strengths and weaknesses of the SDI with respect to the requirements of a specific user group (wind farm planners), and that concrete development goals can be derived and prioritised from this picture.
Scope of the thesis: review of the fundamentals and assessment of related work, preparation and execution of the UX analysis including expert interviews and the student's own technical tests, interpretation of the results, discussion of the method, recommendations.
Contact: Albert Remke
SPARC: Sketch Maps as a tool for learning new environments
For decades, sketch maps have been used as a tool for measuring spatial knowledge - i.e., for estimating how well participants know and understand some area. However, evidence from psychological memory studies demonstrates that drawing something can also be a good strategy for memorising a set of objects. For instance, if you need to memorise the layout of a room, drawing the room as you see it is a better memorisation strategy than repeating the names of the objects verbally or in your head. This thesis will test whether drawing a sketch map is a good memorisation strategy for spatial environments and how this approach can be implemented in a gamified app. The problem is relevant for situations in which people must learn new spatial environments, e.g. to become taxi/delivery drivers, or when they move to a new city.
The thesis can be completed with focus on one of two aspects:
**Computational focus:** You will design a teaching app that (a) records the user's trajectory together with a list of landmarks that were visible along the route, and (b) after a delay, asks users to draw the area that they have travelled. Here the key problem may be to select the routes and landmarks that the user should be asked to draw, based on the recorded trajectories (see the sketch after these two options).
**Evaluation focus:** You will design and conduct an experiment to evaluate the following research question: does drawing a sketch map help people memorise the environment better, compared to alternative strategies? This does not require creating an app, and can be conducted as an in-situ experiment, or inside our Virtual Reality lab.
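For the computational focus, selecting landmarks along a recorded trajectory could start from something as simple as a buffer query, as in this minimal sketch (coordinates, buffer distance and landmark names are made-up placeholders; a real app would use a proper visibility analysis):

```python
# Minimal sketch: given a recorded trajectory and a set of candidate landmarks,
# select the landmarks lying within a (naive) visibility buffer of the route.
from shapely.geometry import LineString, Point

trajectory = LineString([(0, 0), (50, 0), (50, 80), (120, 80)])  # recorded track (metres)
landmarks = {
    "fountain": Point(48, 20),
    "bakery": Point(90, 85),
    "tower": Point(200, 200),
}

visible_zone = trajectory.buffer(15)  # crude stand-in for a visibility analysis
along_route = [name for name, pt in landmarks.items() if visible_zone.contains(pt)]
print(along_route)  # ['fountain', 'bakery']
```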
Contact: Jakub Krukar
SPARC: In-and-Out: What happens when we enter and exit buildings?
Navigation is usually studied either completely outdoors or completely indoors. But our real-life wayfinding is different - we continuously enter and exit buildings without feeling that this is now a completely different experience.
The goal of this thesis is to understand what happens when people exit/enter buildings - how their navigation changes, and how technology can embrace the difference between these two contexts.
This can be approached from two perspectives.
From the technological perspective, it is possible to design a prototype navigation system that (assuming indoor localisation is possible) works indoors and outdoors, and changes its behaviour depending on this context.
From the research perspective, it can be investigated how human navigation changes when people enter or exit buildings. This can be studied, for instance, with mobile eye-tracking, by asking experiment participants to move indoors/outdoors and analysing how eye-tracking measures vary between these two contexts.
Contact: Jakub Krukar
SIL: Collaborative GeoGames in virtual reality
As the complexity and diversity of spatial-temporal machine learning (ML) and deep learning (DL) models continue to grow, there is a critical need for an effective system to catalog, search, and apply these models in geospatial analysis. This thesis proposes the creation of a web-based interface designed to enable the accessibility and usability of these models for practitioners and researchers in spatial data science.
Spatial-temporal data, characterized by its multi-dimensional nature, requires sophisticated analytical approaches that can model and predict patterns over time and space. Given the rapid proliferation of ML and DL techniques, there is a pressing need for a system that can organize these models in an accessible manner. The proposed web interface will serve not only as a repository of varied models but also as an exploratory tool that enables users to identify and implement the most suitable models for specific geographical and temporal datasets.
The design focus will be on creating a user-friendly, intuitive interface that supports extensive search capabilities, model comparisons, and real-time implementation feedback. This database of models will include metadata on each model's performance metrics, use case compatibility, computational requirements, etc. By integrating these elements, the interface will provide invaluable guidance for researchers and practitioners in selecting and applying appropriate ML and DL models to solve real-world spatial-temporal problems.
Guiding Questions:
Bachelor's Level:
- How can the MLM STAC model extension [1] be utilized to create a catalog of spatial-temporal ML models within a web-based interface?
- What are the key features and functionalities needed in a web-based interface to effectively catalog and search ML models for spatial-temporal data?
- How can an ML Catalog backend [2] be integrated into a web-based interface to facilitate the organization and accessibility of ML models for geospatial analysis?
Master's Level:
- How can deep learning models and components, such as encoders, be cataloged and searched within a web-based interface for spatial-temporal data analysis?
- What methods can be implemented in a web-based interface to evaluate the suitability and applicability of DL models for specific spatial-temporal datasets?
- How can the web-based interface provide insights into the performance and compatibility of DL models by analyzing metadata, use case compatibility, and computational requirements?
[1] https://github.com/crim-ca/mlm-extension
[2] https://github.com/PondiB/openearth-ml-server
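To illustrate the catalog idea, a model could be described as a STAC Item roughly as sketched below with pystac; the mlm:* property names and the extension schema URI are assumptions and must be checked against the MLM extension [1]:

```python
# Minimal sketch: describe an ML model as a STAC Item so it can be catalogued
# and searched. The mlm:* property names and the extension schema URI are
# assumptions, not confirmed parts of the MLM extension.
from datetime import datetime, timezone
import json
import pystac

item = pystac.Item(
    id="svm-crop-classifier-germany",
    geometry={"type": "Polygon", "coordinates": [[[5.9, 47.3], [15.0, 47.3],
                                                  [15.0, 55.1], [5.9, 55.1],
                                                  [5.9, 47.3]]]},
    bbox=[5.9, 47.3, 15.0, 55.1],
    datetime=datetime(2024, 1, 1, tzinfo=timezone.utc),
    properties={
        "mlm:name": "SVM crop classifier",   # assumed field names
        "mlm:framework": "scikit-learn",
        "mlm:tasks": ["classification"],
    },
)
item.stac_extensions.append(
    "https://stac-extensions.github.io/mlm/v1.0.0/schema.json"  # assumed URI
)
print(json.dumps(item.to_dict(), indent=2))
```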
Please contact:
Brian Pondi brian.pondi@uni-muenster.de
and
Edzer Pebesma edzer.pebesma@uni-muenster.de
Contact: Brian Pondi
SPARC: The "wow" effect of buildings: Computational modelling of perceived spaciousness in VR
Perceived spaciousness is the subjective feeling of how large a given space is. Just think about the difference between seminar room 242 and the atrium in the centre of the GEO1 building. Does the atrium feel 5x larger? 10x larger? 20x larger? Or imagine walking through a small entrance into a large temple. How much volume does it need to have to create the feeling of "wow, this is huge!"?
Research has shown that people judge this in a very distorted way and that the volume of space alone (the amount of "empty air" surrounding you) is not the only important factor. It matters what shape this empty space has (is it taller or longer?), where you are currently standing (above or below the open space? close to the wall or in its centre?), and where you entered it from (entering a large room from a small one vs. a small one from a large one).
The goal of this thesis is to understand what affects our perceived spaciousness. The thesis can be approached from two perspectives.
From the technological perspective, it is necessary to create new ways of studying perceived spaciousness in Virtual Reality. Researchers have used various ways of asking people "how spacious does this space feel right now?" - they asked them verbally to rank it from 1 to 7, or gave them a circular dial that participants rotated when the space felt more spacious. VR opens up the chance to create better ways to gather this kind of data.
From the research perspective VR offers us the chance to design architectural experiments impossible in real life. For instance, we can move a person from a very small space to an extremely spacious one in a matter of seconds, and test their reactions. We can change their pathway (e.g., reverse it so that they move from a large space to a small one) and see if the reaction is symmetric. We can also systematically modify the Virtual Reality rooms by changing their shape, size, or lighting, and test how these changes affect perceived spaciousness.
Contact: Jakub Krukar
CVMLS: Insect detection in visual terrestrial remote sensing data
In recent years, insect numbers have declined dramatically, yet methods for the non-invasive monitoring of insect populations are still lacking. Using a new insect dataset that provides various visual cues, such as colour information and motion cues, this project pursues the following goals: (1) testing new imaging modalities and insect datasets; (2) identifying the relevant machine learning techniques; and (3) developing adapted machine learning models to automatically detect the tiny animals in cluttered environments. In particular, current object detection algorithms are to be used that are almost universally applicable in geoinformatics and are therefore not limited to the terrestrial remote sensing data presented in this work.
Contact: Benjamin Risse
SPARC: Extracting spatial complexity of interior spaces from YouTube videos
Contact: Jakub Krukar
SITCOM: Privacy-preserving location-based services (adaptive algorithms, infrastructures, visualisations)
In order to benefit from location-based services (LBS) such as navigation support, local recommender systems or delivery services, users need to share their location with the service provider. This can have negative implications for their privacy as the service provider might learn a lot about users, e.g. movement patterns, places they frequent and inferred knowledge such as health conditions.
At the same time, service provision would also be possible if users did not share their precise location: a weather forecast app, for example, might work well enough with very coarse-grained location information. For some LBS, however, having access to lower-quality location information might be problematic. For example, providing turn-by-turn instructions might be impossible if users only share coarse-grained location information.
The goal of this topic is thus to develop and evaluate approaches for different types of LBS to adapt to different levels of quality of location information. The topic can be tackled from different perspectives and can therefore serve as a starting point for several different thesis projects:
- on the algorithmic level, an analysis of common algorithms used in LBS to provide certain services (such as routing) can be carried out to develop and evaluate new/improved algorithms that can better cope with different levels of location quality (see the sketch after this list)
- on the infrastructure level, different frameworks and libraries for the development of LBS can be analysed regarding how well they support coping with different levels of location quality; this can then inform the design and evaluation of an improved solution
- on the visualisation/user interface level, an analysis of existing solutions for conveying instructions/information to users in LBS can be carried out with respect to how well they work when provided with location information of lower quality; this can then inform the design and evaluation of improved, adaptive visualisations and user interfaces
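To make "different levels of location quality" concrete, here is a minimal sketch of two simple degradation strategies (grid snapping and random perturbation) that an adaptive LBS algorithm would have to cope with; cell size and noise radius are arbitrary assumptions:

```python
# Minimal sketch: degrade a precise location either by snapping it to a coarse
# grid or by adding random noise. An adaptive LBS algorithm would then have to
# cope with coordinates produced this way.
import math
import random

def snap_to_grid(lat, lon, cell_deg=0.01):
    """Return the centre of the grid cell (~1 km at mid latitudes) containing the point."""
    return (math.floor(lat / cell_deg) * cell_deg + cell_deg / 2,
            math.floor(lon / cell_deg) * cell_deg + cell_deg / 2)

def perturb(lat, lon, radius_deg=0.005):
    """Shift the point by a random offset within roughly radius_deg."""
    angle = random.uniform(0, 2 * math.pi)
    r = random.uniform(0, radius_deg)
    return (lat + r * math.sin(angle), lon + r * math.cos(angle))

precise = (51.9695, 7.5954)   # somewhere in Münster
print(snap_to_grid(*precise))
print(perturb(*precise))
```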
Students interested in this topic area can have a look at the SIMPORT project and the publications listed below:
- Özdal Oktay, S., Heitmann, S., & Kray, C. (2023). Linking location privacy, digital sovereignty and location-based services: a meta review. Journal of Location Based Services, 1-52.
- Dreyer, J., Heitmann, S., Erdmann, F., Bauer, G., & Kray, C. (2022). 'Informed' consent in popular location based services and digital sovereignty. Journal of Location Based Services, 16(4), 312-342.
- Ranasinghe, C., Schiestel, N., & Kray, C. (2019, October). Visualising location uncertainty to support navigation under degraded GPS signals: A comparison study. In Proceedings of the 21st International Conference on Human-Computer Interaction with Mobile Devices and Services (pp. 1-11).
- Ranasinghe, C., Heitmann, S., Hamzin, A., Pfeiffer, M., & Kray, C. (2018, December). Pedestrian navigation and GPS deteriorations: User behavior and adaptation strategies. In Proceedings of the 30th Australian Conference on Computer-Human Interaction (pp. 266-277).
Contact: Christian Kray
Master
ifgicopter: UAV / UAS (drone) remote sensing/GIS: Vegetation-specific geodata analysis/workflows
Topic: Within the joint IFGIcopter and ILÖK UAV initiative, vegetation-specific remote sensing data from a wide variety of UAV sensors (drones) are continuously acquired and analysed. Particular focus is placed on capturing and analysing vegetation patterns, vitality parameters and invasive species using multispectral UAS data. In this context, data processing and visualisation (including 3D) with a wide range of geoinformatics tools (GIS, commercial software, web tools, custom programming, etc.) play a major role. Anyone interested in an interdisciplinary research question in this area should get in touch with the two contact persons [2017].
Contact persons: Torsten Prinz / Jan Lehmann
Contact: Torsten Prinz
SITCOM: Analysing and mapping emotions for noise quality
Sustainability is a broad concept with various interconnecting aspects. One of them, citizens' feelings and perceptions of the environment, is usually neglected in practice due to insufficient tools and expertise. However, improvements in citizen science and the possibilities provided by digital visualisation techniques allow a better understanding of people's emotions towards existing circumstances. This knowledge leads to a more accurate assessment of sustainability and better decisions at the local scale.
The thesis aims to address local emotional indicators for noise quality in Münster through the analysis and spatio-temporal modelling of dynamic emotions. To achieve this aim, the study suggests replicating selected locations in Münster at different time frames using the Immersive Video Environment (IVE) and working with citizens to collect information about their emotions. The student is free to choose the software and language for the analysis and spatio-temporal modelling. Basic knowledge of working with Unity and one of the programming languages will be an asset for this study.
Suggested readings:
Kals, E., Maes, J. (2002). Sustainable Development and Emotions. In: Schmuck, P., Schultz, W.P. (eds) Psychology of Sustainable Development. Springer, Boston, MA. https://doi.org/10.1007/978-1-4615-0995-0_6
Murphy, E., & King, E. A. (2013). Mapping for sustainability: environmental noise and the city.
Contact: Simge Özdal Oktay
SITCOM: Exploring forms of interaction in the Immersive Video Environment (IVE)
The IVE displays panoramic video footage on large screens in a cave-like environment, which creates a sense of physical presence and enables people to better interact with and intervene in the image of their surroundings. However, the forms of interaction are very limited - yet needed - when users want to create objects on the screen as video overlays and interact with them (e.g., adding and scaling a tree, modifying a building façade, etc.).
This study aims to explore forms of interaction in the IVE for effectively creating and interacting with such overlays. The student will work with the IVE system in the SITCOM Lab at ifgi. The study will cover the following steps: creating video footage and overlays for the IVE, exploring input tools (e.g., smartphone, HTC Vive controllers, touch pad), and designing the forms of interaction (e.g., adding, removing, scaling, rotating, placing overlays) and the UI components.
Contact: Simge Özdal Oktay
SIL: Spatio-Temporal & Semantic Analysis of Spatial Movement
Contact: Angela Schwering
SPARC: Using VR to create self-adjusting buildings based on spatio-temporal data of their occupants
Buildings of the future will have to be much more flexible than they are now. One envisioned possibility is that building interiors will change their shapes depending on the current context of use, personal preference of their users, or tasks that the occupants have to perform within them at the given moment. While this may sound like a distant vision of the future, Virtual Reality equipment already allows us to study such scenarios today.
In this thesis, you will design a Virtual Reality building that participants will explore in Head-Mounted Displays. The VR system will monitor spatio-temporal data of the building user, and create the remaining (yet unvisited) parts of the building in response to this data, before the user gets there.
The specific context of this thesis can be adjusted based on your interests. One possibility would be to detect navigational confusion based on the occupant's walking trajectory, and - in response - provide a navigationally simplified space in the next room that the occupant is going to visit. Another possibility is to detect loss of attention in a virtual museum gallery, and - in response - provide the user with a more exciting space in the next room. The application should be evaluated in a simple user study.
Contact: Jakub Krukar and Chris Kray
SPARC: 3D SketchMaps: A VR tool for understanding spatial knowledge in the vertical
Sketch maps are traditionally drawn on a flat sheet of paper. However, with accessible, consumer-grade Virtual Reality devices it is now very easy to put on a VR headset and "draw in the air". This might be particularly useful in situations where you want to communicate how the environment looks in the vertical. For instance, when you want to draw the path of a drone flying above some landscape, describe to another person how to navigate a multi-level shopping mall, or explain to a friend how to get out of a metro station near your home.
The goal of this thesis is to understand the potential of 3D sketch maps in VR as a tool for communicating spatial knowledge.
The thesis can be approached from two perspectives:
From the technological perspective, it is possible to design new ways of drawing 3D sketch maps in Virtual Reality and compare them to traditional paper sketch maps. It is also interesting to explore how complex 3D sketch maps can be analysed systematically and what tools we can design to support the analysis of such (sometimes very complex and very messy) 3D drawings.
From the research perspective, it is necessary to conduct user experiments to understand whether people can make good use of this new possibility offered by VR, or whether they always find it more intuitive to draw on paper. It is also important to identify contexts in which 3D sketch maps are necessary - perhaps for most navigational scenarios a sheet of paper is sufficient? If so, what are the specific cases (aviation, complex buildings) where 3D sketch maps are necessary?
One straightforward way to study this is to ask participants to play a VR game where vertical navigation is important (virtual scuba diving, a submarine simulator, flying a Star Wars battleship, a drone, or a passenger plane, downhill skiing) and ask them to draw sketch maps based on this experience.
Contact: Jakub Krukar
SPARC: Controlling the optic flow of Virtual Reality systems
Contact: Jakub Krukar
SPARC: Navigating computer games
Contact: Jakub Krukar
Earth Observation (EO) data cubes are multidimensional arrays of spatial and temporal data, crucial for monitoring and analyzing environmental changes. Machine learning (ML) models applied to EO data cubes enable advanced data analysis and predictive capabilities. However, the diversity of programming languages used in the spatial data science and geoinformatics community, particularly R and Python, poses challenges for interoperability and reproducibility of these ML models.
The outcomes of this research are expected to facilitate smoother integration and collaboration among spatial data scientists and geoinformatics professionals who rely on different programming environments, promoting the reproducibility and interoperability of EO data analysis projects. This work will contribute to the broader goal of advancing geospatial data science by bridging the gap between diverse computational ecosystems.
Use Case:
Carrying out spatial-temporal analysis, such as time-series crop classification in Germany, leveraging the ONNX interoperability format: https://onnx.ai/
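For the performance-evaluation question, predictions of an exported ONNX model can be checked directly from Python with onnxruntime before porting to R, roughly as sketched below; the file name, feature dimensionality and data are placeholders:

```python
# Minimal sketch: run an exported ONNX classifier with onnxruntime and compare
# its predictions with those of the original scikit-learn model.
import numpy as np
import onnxruntime as ort

X_test = np.random.rand(100, 10).astype(np.float32)

session = ort.InferenceSession("svm_crops.onnx")   # placeholder file name
input_name = session.get_inputs()[0].name
onnx_pred = session.run(None, {input_name: X_test})[0]   # first output: labels

# sklearn_pred = original_model.predict(X_test)    # from the Python training run
# agreement = np.mean(onnx_pred == sklearn_pred)
print(onnx_pred[:10])
```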
- Model Portability: How can a deterministic machine learning model, such as Support Vector Machine (SVM), be trained on Earth Observation data cubes in Python and then ported to R using the ONNX format?
- Performance Evaluation: What are the differences in performance and accuracy of the SVM model when ported from Python to R for time-series crop classification in Germany?
- Interoperability Challenges: What are the challenges and potential solutions in ensuring interoperability and reproducibility of machine learning models between Python and R programming environments using ONNX?
- What is the feasibility of implementing identical deep learning models for Earth Observation data cubes in R, Python, and Julia, and ensuring their interoperability?
- How do the available tools and libraries for machine learning in R, Python, and Julia compare in terms of ease of use, performance, and integration with EO data cubes?
- What are the differences in command structure and interface among R, Python, and Julia for machine learning tasks related to EO data cubes, and how do these differences impact the reproducibility and interoperability of the models?
Please contact:
Brian Pondi brian.pondi@uni-muenster.de
and
Edzer Pebesma edzer.pebesma@uni-muenster.de
Contact: Brian Pondi
SPARC: Measuring spatial knowledge in VR compared to real-world environments
Virtual Reality is known to cause distortions in our perception of distance. Things inside VR seem closer than they would be in reality. This is an interesting problem because many wayfinding studies use VR as a substitute for real-life environments. Yet, if we know that distance estimations are consistently distorted in VR, can we trust VR results obtained with other methods for measuring spatial knowledge?
The goal of this thesis is to compare different methods of measuring spatial knowledge, in particular:
- sketch maps (qualitative and metric-based analyses)
- distance estimation tasks
- pointing tasks
- perspective taking tasks
across the Virtual Reality environment and corresponding real life environment.
The hypothesis is that some of these methods work equally well in VR and in real life environments, while others should not be used in VR. For example, our analysis from the paper below suggests that people who explore the same building in VR and in the real world draw equally good sketch maps, even though their distance estimation is distorted in VR.
Contact: Jakub Krukar
SIL: Quality assurance of crowd-sourced sensor data
This thesis will investigate to what extent quality assurance of crowd-sourced sensor data in a sensor network can be automated. This is a new and highly relevant research field: large amounts of data allow the application of statistical or machine learning methods. Traditional methods are often not usable because the data must be available in real time. In addition, crowd-sourced data pose a special challenge, since it cannot be assumed that all data were collected with correct or consistent measurement procedures. Finally, low-cost sensors themselves exhibit measurement errors that deviate strongly from those of professional sensors, or measuring stations are poorly mounted by citizen scientists. The goal is to investigate the factors influencing data quality and the measurement accuracy of the sensors, to develop methods for the automated identification of erroneous data and possible sources of error, and to make automated decisions about whether data can be corrected (e.g., by recalibrating the sensors) or whether certain data should be excluded.
Contact: Thomas Bartoschek
SIL: SketchMapia – A Research Software to Assess Human Spatial Knowledge
Sketch mapping, i.e. freehand drawing of maps on a sheet of paper, is a popular and powerful method to explore a person's spatial knowledge. Although sketch maps convey rich spatial information, such as the spatial arrangement of places, buildings, streets, etc., the methods to analyse sketch maps are extremely simple. At the Spatial Intelligence Lab, we developed a software suite called SketchMapia that supports the systematic and comprehensive analysis of sketch maps in experiments. In this Master thesis, you will develop systematic test data for a sketch map analysis method and evaluate the SketchMapia analysis method with respect to its completeness, correctness and performance against other sketch map analysis methods.
Contact: Angela Schwering, Jakub Krukar
SITCOM: Location-based fact checking (topic cluster)
Recent years have seen a sharp increase in fabricated or outright false information being widely distributed (e.g. conspiracy theories, baseless claims, invented events). The broad availability of generative AI will most likely not only strengthen this trend but also make it much harder to recognise fabricated information.
The idea behind this topic is to investigate ways to use location information strategically to verify the truthfulness of shared information. A large percentage of all information has a direct link to a real-world location, for example, photographs (or generated images) taken at a particular place (or claimed to be taken at that location).
This location in turn can be used to check the truthfulness of the information or media in different ways. One approach could be to use trustworthy cartographic information to compare image contents or textual descriptions to information from a map. A second approach could look into checking facts on site, e.g. via a location-based service that directs fact-checkers to where they can assess the truthfulness of some information by following a particular protocol that ensures they can provide strong evidence (e.g. via AR-based comparative overlays). A third approach could investigate ways to use location-sensor data (GPS, compass, gyro, timestamps) to create a blockchain for photos taken at a particular location. This could provide a way for individuals to create trustworthy information and for others to verify it. Finally, using reliable historical location information (maps, photos, datasets) to assess the truthfulness of newly posted information is another approach worth investigating. For all of these approaches, looking into potential attacks (particularly those presented by prompt engineering for current LLMs or coordination between malicious people on site) would be important.
Any of these approaches can become a thesis topic. In principle, these topics could be done either as a Master thesis or a Bachelor thesis. However, they will require thorough background research and can potentially be technically demanding. If you are interested in this general topic area or one of the listed examples, please get in touch with Chris.
Contact: Christian Kray
SITCOM: Hybrid participation platform for urban planning
For urban projects to meet the needs and preferences of people, it is essential to ensure public participation. Ideally, this is done along the entire planning process and enables a broad range of groups to get involved.
Different approaches have been proposed in this context. They vary according to the degree of participation (from just being informed to actively taking decisions), to the temporal dimension (synchronous vs. asynchronous) as well as to the location (on site or remotely) and the medium (online, mobile or in person).
The goal of this thesis is to investigate, develop and evaluate hybrid options that combine multiple media and facilitate synchronous and asynchronous participation in urban planning projects. This could, for example, take the form of a web-based system that can be accessed through a public display and allows for synchronous/asynchronous communication between citizens and planners.
There is flexibility regarding the exact combination of technologies and functionalities that is investigated as well as with respect to how urban planning projects are visualised (maps, 3D, Augmented Reality).
The thesis can be developed either as a Master or a Bachelor thesis. There is potential for evaluating the approach/system in the context of one of the events/activities organised by the StadtLabor Münster.
Contact: Christian Kray
SITCOM: AR evaluation toolkit
Augmented Reality applications are widely available now and expected to increase further in the future. They are, however, difficult to evaluate effectively as they strongly depend on interacting with their environment. For example, to assess the effectiveness and usability of a particular user interface or overlay, it is important to consider how it interacts with what people see around them. Are users able to connect the overlay to the corresponding real-world object? Can they interact with the UI elements while being in the actual environment?
The goal of this thesis is to develop and evaluate an evaluation toolkit for AR applications, which allows for systematic, repeatable and low-effort evaluation. The approach to investigate here is to use a virtual environment (such as the Immersive Video Environment at ifgi) and to “trick” an app into believing it is located at the site shown by the virtual environment. This ensures a controlled environment and thus allows for a systematic evaluation of AR applications.
Contact: Christian Kray
SITCOM: SITCOM topics (general information)
Research in SITCOM generally focusses on enabling all kinds of users to solve real-world problems using spatial information. Have a look at the group's web page for more details and example projects.
If you are generally interested in this area or have an idea of a thesis topic that falls into that area, feel free to get in touch with one of the current members of SITCOM.
Contact: Christian Kray
Walkability is a notion referring to the extent to which streets foster the activity of walking, both as a mode of mobility and as a leisure activity. The perception of walkability depends on the streets' physical characteristics, e.g. wide sidewalks, presence of greenery, etc., and on individuals' subjective perception thereof, e.g. the imparted sense of safety and beauty.
This thesis will build upon an ongoing project exploring a unique mixture of technologies and methods to give an account on how individuals subjectively perceive the walkability of streets.
In a previous experiment, participants were invited to a so-called Cave Automatic Virtual Environment, where images of streets were displayed. Wearing an eye-tracking device, they were asked to inspect the images and report their perception of the displayed streets. Data obtained in a follow-up interview with the participants complement a rich dataset to investigate how the physical features of streets, the individuals’ eye-movement when inspecting the images, and their demographic and behavioral traits relate to their reported walkability perception.
The student will be able to explore the diversity and richness of the available dataset when addressing an independent, project-related, research question.
References:
Liao B, Berg PEW, Wesemael PJV, Arentze TA (2022) Individuals’ perception of walkability: Results of a conjoint experiment using videos of virtual environments. Cities, 125, 103650.
Li Y, Yabuki N, Fukuda T (2022) Measuring visual walkability perception using panoramic street view images, virtual reality, and deep learning. Sustainable Cities and Society, vol 86, 104140.
Contact: Tessio Novack
Learning Analytics is a method to collect, measure, analyse and visualise data about learners and their context. It enables an understanding of the learning process and allows learning paths to be adapted based on the collected data. It also gives feedback to the learner and teacher about the learning process. The Spatial Intelligence Lab has developed several learning platforms (GeoGami, Blockly for programming the senseBox) in which data revealing information about the learning process is collected. The thesis will investigate how real-time data on the learning process can be used to guide the learning process using learning analytics.
Contact: Thomas Bartoschek
SIL: Fostering Navigational Map Reading Competence
The ability to orient oneself and read maps is essential to successfully navigate in unfamiliar environments. It is well known that the ability to orient oneself with maps varies from person to person. While there are numerous navigation systems to help us find our way, very few efforts have been made to use GI technologies to promote orientation and map reading skills and overcome the individual differences. GeoGami is a location-based game using digital maps to systematically teach navigational map reading competence. The thesis will investigate how to design trainings to promote people’s navigational map reading competence with digital maps. How to design trainings for specific sub-competencies of navigational map reading such as self-localization, map alignment or object recognition? How to design virtual environments to provide an optimal environment to systematically test navigational map reading competence?
SPARC: Testing the new Taxonomy of Human Wayfinding Tasks
In their seminal paper, Wiener et al. (2009) defined a taxonomy of human wayfinding tasks. The taxonomy is based on the type of knowledge possessed by the navigator. However, it did not differentiate between any subcategories of the "Path Following" task. In other words, according to the taxonomy, there is no difference between (a) knowing your route without knowing anything about the wider surrounding environment, and (b) knowing your route AND knowing about the wider surrounding environment.
Schwering et al. (2017) argued that there are substantial differences between such two tasks and that they deserve to be distinguished in an updated taxonomy.
The goal of this thesis will be to test the hypothesis that following the same route, with the same knowledge about the route, is a cognitively different task depending on whether the navigator has, or does not have, survey knowledge about the broader environment.
Wiener, J. M., Büchner, S. J., & Hölscher, C. (2009). Taxonomy of human wayfinding tasks: A knowledge-based approach. Spatial Cognition & Computation, 9(2), 152–165.
Schwering, A., Krukar, J., Li, R., Anacta, V. J., & Fuest, S. (2017). Wayfinding Through Orientation. Spatial Cognition & Computation, 17(4), 273–303. doi:10.1080/13875868.2017.1322597
Contact: Jakub Krukar
SPARC: Generalisation in Sketch Maps
When people draw sketch maps, they generalise information compared to the ground-truth information they perceived in the world. For example, many buildings belonging to a university campus are drawn as a single polygon labelled "campus".
This is a challenge for analysing sketch maps because this information is not wrong, yet a computer system for automated analysis would interpret it as such.
In the paper linked below we presented a classification of generalisation types in sketch maps. We also have a working software prototype for analysing generalisation in sketch maps.
In this thesis you will test the impact of one (chosen) variable on the level of generalisation. Sample research questions:
- If we ask people to draw an area of a different size, do they start to generalise more?
- Do people generalise important streets more/less compared to less important ones (e.g., accounting for the "integration" Space Syntax metric)?
- If we give people less time to draw, do they generalise or omit information?
- If we ask people to draw the sketch map with a different task in mind (e.g., walking through a campus vs. walking near a campus vs. walking away from a campus), are the generalisations different?
Contact: Jakub Krukar and Angela Schwering
SITCOM: Assisting map comparison with an annotation tool
Maps are the predominant representational artifacts in the Geosciences for communicating research results and describing phenomena. We frequently have to compare maps for a number of reasons: change detection, accuracy assessment, replicability and reproducibility evaluation. Comparing maps is commonly done by visual side-by-side comparison, which can be error-prone and cognitively exhausting for the reader. The aim of this thesis is to assist this comparison and to keep track of the observed differences by manually highlighting them. For this purpose, a prototype for annotating map differences will be developed and evaluated. The student has to investigate which annotation form is appropriate for each kind of difference and to use an appropriate structured vocabulary to describe them.
Suggested reads:
Oren, E., Möller, K., Scerri, S., Handschuh, S., & Sintek, M. (2006). What are semantic annotations? Technical report, DERI Galway, 9, 62.
Diaz, L., Reunanen, M., Acuña, B., Timonen, A. ImaNote: A Web-based multi-user image map viewing and annotation tool. ACM J. Comput. Cult. Herit. 3, 4, Article 13 (2011). http://doi.acm.org/10.1145/1957825.1957826
Contact: Eftychia Koukouraki
SPARC: Eye-tracking in the Virtual and Real World: What are participants (not) seeing?
Eye-tracking is a common method for studying the usability of buildings and spatial behaviour in buildings. Many such studies are conducted in Virtual Reality: either on desktop computers or in head-mounted displays. However, this is a problem because the field of view and head movement in such set-ups are greatly restricted. This might affect what people do or do not see when navigating a building, especially if important information is visible in the periphery of their visual field.
In this thesis you will try to answer the question: What visual information in the periphery do participants of VR experiments miss when navigating a building?
You will compare two groups of people: one group navigating the real building, and the other group navigating the virtual replica of GEO1. You will analyse the eye-tracking data and compare it across the two conditions. The virtual replica of GEO1 is provided.
Contact: Jakub Krukar
SPARC: Sketch Maps as a tool for learning new environments
For decades, sketch maps have been used as a tool for measuring spatial knowledge - i.e., for estimating how well participants know and understand some area. However, evidence from psychological memory studies demonstrates that drawing something can also be a good strategy for memorising a set of objects. For instance, if you need to memorise the layout of a room, drawing the room as you see it is a better memorisation strategy than repeating the names of the objects verbally or in your head. This thesis will test whether drawing a sketch map is a good memorisation strategy for spatial environments and how this approach can be implemented in a gamified app. The problem is relevant for situations in which people must learn new spatial environments, e.g. to become taxi/delivery drivers, or when they move to a new city.
The thesis can be completed with focus on one of two aspects:
**Computational focus:** You will design a teaching app that (a) records the user's trajectory together with a list of landmarks that were visible along the route, and (b) after a delay, asks the user to draw the area that they have travelled through. Here the key problem may be to select the routes and landmarks that the user should be asked to draw (based on the recorded trajectories); a minimal sketch of this selection step follows below.
**Evaluation focus:** You will design and conduct an experiment to evaluate the following research question: does drawing a sketch map help people memorise the environment better, compared to alternative strategies? This does not require creating an app and can be conducted as an in-situ experiment or inside our Virtual Reality lab.
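The following sketch illustrates the selection step mentioned under the computational focus, under the simplifying assumptions that the trajectory and the landmarks are already available as geometries in a projected (metric) coordinate system and that "visible along the route" can be approximated by a fixed-distance buffer:

```python
from shapely.geometry import LineString, Point

# Assumed inputs: a recorded trajectory and candidate landmarks, both in a
# projected CRS (e.g. UTM) so that distances are in metres.
trajectory = LineString([(0, 0), (120, 40), (250, 60), (400, 180)])
landmarks = {
    "church": Point(130, 70),
    "fountain": Point(260, 300),
    "kiosk": Point(390, 160),
}

# Assumption: a landmark counts as "visible along the route" if it lies
# within 50 m of the trajectory.
VISIBILITY_RADIUS_M = 50
corridor = trajectory.buffer(VISIBILITY_RADIUS_M)

visible = [name for name, pt in landmarks.items() if pt.within(corridor)]
print("Landmarks to include in the drawing task:", sorted(visible))
```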
Contact: Jakub Krukar
SPARCIn-and-Out: What happens when we enter and exit buildings?
Navigation is usually studied either completely outdoors or completely indoors. But our real-life wayfinding is different - we continuously enter and exit buildings without feeling that this is now a completely different experience.
The goal of this thesis is to understand what happens when people exit/enter buildings - how their navigation changes, and how technology can embrace the difference between these two contexts.
This can be approached from two perspectives.
From the technological perspective, it is possible to design a prototype navigation system that works both indoors and outdoors (assuming indoor localisation is available) and changes its behaviour depending on this context.
From the research perspective, it can be investigated how human navigation changes when people enter or exit buildings. This can be studied, for instance, with mobile eye-tracking, by asking experiment participants to move indoors/outdoors and analysing how eye-tracking measures vary between these two contexts.
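For the technological perspective, the minimal sketch below shows one hypothetical way a prototype could detect the indoor/outdoor context from the available positioning signals and switch its guidance style. The heuristic, threshold and data fields are assumptions for illustration, not part of an existing system:

```python
from dataclasses import dataclass
from enum import Enum, auto
from typing import Optional


class Context(Enum):
    INDOOR = auto()
    OUTDOOR = auto()


@dataclass
class PositionFix:
    # Assumed inputs: the GNSS accuracy estimate in metres (None if there is
    # no satellite fix) and whether an indoor positioning system (e.g. BLE
    # beacons) currently reports a position.
    gnss_accuracy_m: Optional[float]
    indoor_fix_available: bool


def detect_context(fix: PositionFix, gnss_threshold_m: float = 15.0) -> Context:
    # Heuristic (assumption): trust the indoor system when it has a fix and
    # GNSS is missing or poor; otherwise assume the user is outdoors.
    if fix.indoor_fix_available and (
        fix.gnss_accuracy_m is None or fix.gnss_accuracy_m > gnss_threshold_m
    ):
        return Context.INDOOR
    return Context.OUTDOOR


def guidance_style(context: Context) -> str:
    # This is where the prototype would change its behaviour, e.g. swap the
    # routing graph, the map style and the wording of the instructions.
    if context is Context.INDOOR:
        return "floor plan with room numbers ('pass room 245, then take the stairs')"
    return "street map with turn-by-turn instructions ('turn left in 50 m')"


fix = PositionFix(gnss_accuracy_m=40.0, indoor_fix_available=True)
print(guidance_style(detect_context(fix)))
```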
Contact: Jakub Krukar
SILTinyAIoT: Resource-efficient AI Models for IoT Sensors
Modern IoT applications often rely on sensors that run on microcontroller units and communicate via network protocols such as LoRaWAN or Bluetooth Low Energy. To operate autonomously for extended periods of time, the resource requirements of such applications must be minimized. This master's thesis investigates the development of resource-efficient IoT applications through the use of AI models. These models aim to save energy by reducing the computational load, camera resolution or data transmission, while maintaining the ability to perform specific tasks. You will develop, implement and compare different resource-efficient AI models for sensors such as cameras, distance sensors and vibration sensors, and test your implementation in different application scenarios.
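One established technique in this direction, which the thesis could build on, is post-training integer quantization with TensorFlow Lite. The sketch below uses a tiny placeholder model and random calibration data purely for illustration:

```python
import numpy as np
import tensorflow as tf

# Placeholder model: in the thesis this would be the trained sensor model.
model = tf.keras.Sequential([
    tf.keras.Input(shape=(96, 96, 1)),
    tf.keras.layers.Conv2D(8, 3, activation="relu"),
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(2, activation="softmax"),
])

def representative_data():
    # Placeholder calibration data; real sensor frames would be used instead.
    for _ in range(100):
        yield [np.random.rand(1, 96, 96, 1).astype(np.float32)]

# Full integer quantization shrinks the model and enables int8 inference.
converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.representative_dataset = representative_data
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
converter.inference_input_type = tf.int8
converter.inference_output_type = tf.int8

tflite_model = converter.convert()
with open("model_int8.tflite", "wb") as f:
    f.write(tflite_model)
print(f"Quantized model size: {len(tflite_model) / 1024:.1f} KiB")
```

The resulting .tflite file could then be benchmarked on the target microcontroller, for example with TensorFlow Lite for Microcontrollers.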
Contact: Benjamin Karic, Thomas Bartoschek
- How do TensorFlow and PyTorch compare in terms of model fitting correspondences and the structure of intermediate layers when applied to EO data cubes?
- What are the differences in the interface and tooling availability between TensorFlow and PyTorch for spatio-temporal modeling, e.g. using ConvLSTM?
- How does the performance and interoperability of spatio-temporal deep learning models vary across different versions of TensorFlow and PyTorch when applied to EO data cubes?
NB: the ONNX format can be used to port DL models between TensorFlow and PyTorch; see https://onnx.ai/
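As a minimal illustration of such porting, the sketch below exports a small placeholder PyTorch model (a 3D convolution standing in for a ConvLSTM-style model) to ONNX; loading the resulting file into a TensorFlow workflow would require additional converter tooling not shown here:

```python
import torch
import torch.nn as nn

# Placeholder spatio-temporal model over a small EO data-cube chip with
# dimensions (batch, bands, time, y, x).
model = nn.Sequential(
    nn.Conv3d(in_channels=4, out_channels=8, kernel_size=3, padding=1),
    nn.ReLU(),
    nn.AdaptiveAvgPool3d(1),
    nn.Flatten(),
    nn.Linear(8, 1),
)
model.eval()

dummy = torch.randn(1, 4, 6, 32, 32)  # dummy input matching the layout above

torch.onnx.export(model, dummy, "cube_model.onnx", opset_version=17,
                  input_names=["cube"], output_names=["prediction"])
print("exported cube_model.onnx")
```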
Please contact: Brian Pondi (brian.pondi@uni-muenster.de) and Edzer Pebesma (edzer.pebesma@uni-muenster.de)
Contact: Brian Pondi
As the complexity and diversity of spatial-temporal machine learning (ML) and deep learning (DL) models continue to grow, there is a critical need for an effective system to catalog, search, and apply these models in geospatial analysis. This thesis proposes the creation of a web-based interface designed to improve the accessibility and usability of these models for practitioners and researchers in spatial data science.
Spatial-temporal data, characterized by its multi-dimensional nature, requires sophisticated analytical approaches that can model and predict patterns over time and space. Given the rapid proliferation of ML and DL techniques, there is a pressing need for a system that can organize these models in an accessible manner. The proposed web interface will serve not only as a repository of varied models but also as an exploratory tool that enables users to identify and implement the most suitable models for specific geographical and temporal datasets.
The design focus will be on creating a user-friendly, intuitive interface that supports extensive search capabilities, model comparisons, and real-time implementation feedback. This database of models will include metadata on each model's performance metrics, use case compatibility, computational requirements, etc. By integrating these elements, the interface will provide invaluable guidance for researchers and practitioners in selecting and applying appropriate ML and DL models to solve real-world spatial-temporal problems.
Guiding Questions:
Bachelor's Level:
- How can the MLM STAC model extension [1] be utilized to create a catalog of spatial-temporal ML models within a web-based interface? (A sketch of such a catalog entry follows the references below.)
- What are the key features and functionalities needed in a web-based interface to effectively catalog and search ML models for spatial-temporal data?
- How can an ML Catalog backend [2] be integrated into a web-based interface to facilitate the organization and accessibility of ML models for geospatial analysis?
Master's Level:
- How can deep learning models and components, such as encoders, be cataloged and searched within a web-based interface for spatial-temporal data analysis?
- What methods can be implemented in a web-based interface to evaluate the suitability and applicability of DL models for specific spatial-temporal datasets?
- How can the web-based interface provide insights into the performance and compatibility of DL models by analyzing metadata, use case compatibility, and computational requirements?
[1] https://github.com/crim-ca/mlm-extension
[2] https://github.com/PondiB/openearth-ml-server
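To give an impression of what a catalog entry could look like, the sketch below builds a STAC Item with pystac and attaches properties in the spirit of the MLM extension [1]. The item content, the "mlm:*" field names and the schema URI are illustrative and would need to be validated against the extension's current specification:

```python
from datetime import datetime, timezone

import pystac

# Sketch of a catalog entry for one spatio-temporal model. The "mlm:*"
# property names follow the MLM extension [1] in spirit but should be
# checked against its JSON schema before use.
item = pystac.Item(
    id="convlstm-ndvi-forecast-v1",
    geometry={"type": "Polygon", "coordinates": [[[5.9, 47.3], [15.0, 47.3],
                                                  [15.0, 55.1], [5.9, 55.1],
                                                  [5.9, 47.3]]]},
    bbox=[5.9, 47.3, 15.0, 55.1],
    datetime=datetime(2024, 1, 1, tzinfo=timezone.utc),
    properties={
        "mlm:name": "ConvLSTM NDVI forecaster",
        "mlm:framework": "pytorch",
        "mlm:architecture": "ConvLSTM",
        "mlm:tasks": ["regression"],
    },
)

# Placeholder extension URI: take the current schema version from the
# mlm-extension repository [1].
item.stac_extensions.append(
    "https://crim-ca.github.io/mlm-extension/v1.0.0/schema.json")
print(item.to_dict()["properties"])
```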
Please contact: Brian Pondi (brian.pondi@uni-muenster.de) and Edzer Pebesma (edzer.pebesma@uni-muenster.de)
Contact: Brian Pondi
SPARCThe "wow" effect of buildings: Computational modelling of perceived spaciousness in VR
Perceived Spaciousness is the subjective feeling of how large a given space is. Just think about the difference between seminar room 242 and the atrium in the centre of the GEO1 building. Does the atrium feel 5x larger? 10x larger? 20x larger? Or imagine walking through a small entrance into a large temple. How much volume does it need to have to create the feeling of "wow, this is huge!"?
Research has shown that people judge this in a very distorted way and that the volume of space alone (the amount of "empty air" surrounding you) is not the only important factor. It also matters what shape this empty space has (is it taller or longer?), where you are currently standing (above or below the open space? close to the wall or in its centre?), and where you entered it from (entering a large room from a small one vs. a small one from a large one).
The goal of this thesis is to understand what affects our perceived spaciousness. The thesis can be approached from two perspectives.
From the technological perspective, it is necessary to create new ways of studying perceived spaciousness in Virtual Reality. Researchers have used various ways to ask people "how spacious does this space feel right now?" - they asked them verbally to rank it from 1 to 7, or gave them a circular dial to rotate as the space felt more or less spacious. VR opens the chance to create better ways of gathering this kind of data.
From the research perspective VR offers us the chance to design architectural experiments impossible in real life. For instance, we can move a person from a very small space to an extremely spacious one in a matter of seconds, and test their reactions. We can change their pathway (e.g., reverse it so that they move from a large space to a small one) and see if the reaction is symmetric. We can also systematically modify the Virtual Reality rooms by changing their shape, size, or lighting, and test how these changes affect perceived spaciousness.
Contact: Jakub Krukar
CVMLSAdvanced Topics of Deep Learning
Hardly any other field of computer science is developing as rapidly as machine learning. With over 100 publications a day, it is becoming increasingly difficult to maintain an overview of the variety of existing and new deep learning concepts and methodologies such as self-supervised learning, transformers and NeRF models. Given the computationally heavy data analysis in geoinformatics, which often requires processing huge spatio-temporal datasets, a variety of machine learning paradigms are of particular importance. The aim of this project is therefore to investigate a particular state-of-the-art deep learning algorithm with a particular focus on (but not limited to) its applicability in geoinformatics.
Contact: Benjamin Risse
SPARCReplicability of wayfinding research
"Replication" refers to the process of re-creating an experiment published by other researchers in an effort of obtaining results pointing to the same conclusion. A "replication crisis" showed that many published research is not replicable. We can distinguish two types of replication:
- an "exact replication" is the attempt of recreating every detail of the original experiment
- a "conceptual replication" is the attempt of creating a similar experiment, with similar hypotheses, but perhaps with a different stimuli, instructions, or groups of participants.
This thesis focuses on a "conceptual replication" of navigation research.
Navigation research is usually performed in a very specific spatial context (such as the city in which the paper's authors are based or the virtual environment that they have created). This introduces a challenge to the generalizability and replicability of navigation research, because we do not know whether classic research findings would be equally applicable in different spatial contexts (e.g., a different city).
This thesis focuses on replicating an existing wayfinding paper (to be chosen by the student) in Münster, or in a virtual environment available at ifgi.
The key challenge is finding a way to make the new spatial context (of Münster) comparable to that of the original paper.
Thesis co-supervised by Daniel Nüst (with technical support w.r.t. replicability).
Examples of papers that can be replicated:
https://doi.org/10.1080/17470218.2014.963131
https://doi.org/10.1016/j.cognition.2011.06.005
Contact: Jakub Krukar
CVMLSDetecting the Invisible - Detection and Tracking of Tiny Insects in Complex Wildlife Environments
In recent years, the number of animals in general and insects in particular has decreased dramatically. In contrast to larger vertebrates, however, there is still a lack of techniques for non-invasive insect monitoring. In this project, novel visual and temporal data will be used to address the following objectives: (1) develop a state-of-the-art computer vision and machine learning algorithm for complex multimodal wildlife recordings; and (2) evaluate the algorithm on a challenging wildlife dataset. In particular, current algorithms for object recognition will be used, which are almost universally applicable in geoinformatics and are therefore not limited to the terrestrial remote sensing data presented in this thesis.
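Purely as a classical point of comparison (not the deep-learning approach targeted by the thesis), the sketch below detects small moving objects in a video with background subtraction in OpenCV; the file name and size thresholds are arbitrary placeholders:

```python
import cv2

# Classical baseline only (assumption: a video clip of a mostly static scene);
# the thesis would replace this with a learned detector/tracker.
cap = cv2.VideoCapture("wildlife_clip.mp4")
bg = cv2.createBackgroundSubtractorMOG2(history=500, varThreshold=16,
                                        detectShadows=False)
kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (3, 3))

while True:
    ok, frame = cap.read()
    if not ok:
        break
    mask = bg.apply(frame)
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)  # remove pixel noise
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    # Keep only tiny blobs, which is where insects are expected to appear.
    tiny = [c for c in contours if 2 < cv2.contourArea(c) < 150]
    for c in tiny:
        x, y, w, h = cv2.boundingRect(c)
        cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 0, 255), 1)
    cv2.imshow("tiny moving objects", frame)
    if cv2.waitKey(1) == 27:  # Esc to quit
        break

cap.release()
cv2.destroyAllWindows()
```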
Contact: Benjamin Risse
SPARCExtracting spatial complexity of interior spaces from YouTube videos
Contact: Jakub Krukar
SITCOMPrivacy-preserving location-based services (adaptive algorithms, infrastructures, visualisations)
In order to benefit from location-based services (LBS) such as navigation support, local recommender systems or delivery services, users need to share their location with the service provider. This can have negative implications for their privacy as the service provider might learn a lot about users, e.g. movement patterns, places they frequent and inferred knowledge such as health conditions.
At the same time, service provision would also be possible if users did not share their precise location: a weather forecast app, for example, might work well enough with very coarse-grained location information. For some LBS, however, having access to lower-quality location information might be problematic. For example, providing turn-by-turn instructions might be impossible if users only share coarse-grained location information (a minimal coarse-graining sketch follows the list below).
The goal of this topic is thus to develop and evaluate approaches for different types of LBS to adapt to different levels of quality of location information. The topic can be tackled from different perspectives and can therefore serve as a starting point for several different thesis projects:
- on the algorithmic level, an analysis of common algorithms used in LBS to provide certain services (such as routing) can be carried out to develop and evaluate new/improved algorithms that can better cope with different levels of location quality
- on the infrastructure level, different frameworks and libraries for the development of LBS can be analysed regarding how well they support coping with different levels of location quality; this can then inform the design and evaluation of an improved solution
- on the visualisation/user interface level, an analysis of existing solutions to convey instructions/information to users in LBS can be carried out with respect to how well they work when provided with location information of lower quality; this can then inform the design and evaluation of improved, adaptive visualisations and user interfaces
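As a very simple illustration of "lower-quality location information", the sketch below snaps a coordinate to the centre of a coarser grid cell. The cell sizes and the example coordinate (roughly the GEO1 building) are placeholders, and more principled obfuscation schemes (e.g. geo-indistinguishability) exist:

```python
import math

def coarsen(lat, lon, cell_size_deg):
    """Snap a WGS84 coordinate to the centre of a grid cell of the given size.

    A larger cell size means less precise (more privacy-preserving) locations
    that an LBS then has to cope with.
    """
    lat_c = (math.floor(lat / cell_size_deg) + 0.5) * cell_size_deg
    lon_c = (math.floor(lon / cell_size_deg) + 0.5) * cell_size_deg
    return lat_c, lon_c

# Example: an approximate position in Muenster at three levels of coarseness
# (cell sizes of roughly 100 m, 1 km and 10 km in latitude).
for cell in (0.001, 0.01, 0.1):
    print(cell, coarsen(51.9694, 7.5960, cell))
```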
Students interested in this topic area can have a look at the SIMPORT project and the publications listed below:
- Özdal Oktay, S., Heitmann, S., & Kray, C. (2023). Linking location privacy, digital sovereignty and location-based services: a meta review. Journal of Location Based Services, 1-52.
- Dreyer, J., Heitmann, S., Erdmann, F., Bauer, G., & Kray, C. (2022). 'Informed' consent in popular location based services and digital sovereignty. Journal of Location Based Services, 16(4), 312-342.
- Ranasinghe, C., Schiestel, N., & Kray, C. (2019, October). Visualising location uncertainty to support navigation under degraded GPS signals: A comparison study. In Proceedings of the 21st International Conference on Human-Computer Interaction with Mobile Devices and Services (pp. 1-11).
- Ranasinghe, C., Heitmann, S., Hamzin, A., Pfeiffer, M., & Kray, C. (2018, December). Pedestrian navigation and GPS deteriorations: User behavior and adaptation strategies. In Proceedings of the 30th Australian Conference on Computer-Human Interaction (pp. 266-277).
Contact: Christian Kray