Theses

Bachelor

ifgicopter: UAV / UAS (Drones) Remote Sensing/GIS: Vegetation-specific geodata analysis and workflows

Topic: As part of the joint IFGIcopter and ILÖK UAV initiative, vegetation-specific remote sensing data are continuously acquired and analysed with a wide range of UAV sensors (drones). Particular focus lies on capturing and analysing vegetation patterns, vitality parameters and invasive species using multispectral UAS data. In this context, data processing and visualisation (including 3D) with a variety of geoinformatics tools (GIS, commercial software, web tools, custom programming, etc.) play a major role. If you are interested in an interdisciplinary question in this area, please get in touch with the two contact persons [2017].

Contact persons: Torsten Prinz / Jan Lehmann

Contact: Torsten Prinz

SIL: Visualisation of spatio-temporal sensor data on the openSenseMap

The openSenseMap is a platform for environmental sensor data from measurement stations of any kind. Currently, only raw data from senseBoxes are stored, and the data can only be displayed per senseBox. In addition, it is possible to display the collected data interpolated for a single point in time.

The goal of a Bachelor thesis is to develop data visualisations for different aspects of the openSenseMap portal. These could, for example, compare data from multiple senseBoxes and sensors using statistical methods, or integrate external data sources such as those of the DWD for comparison. Interactive visualisations that aggregate different environmental phenomena would also create new ways of exploring the data. A Bachelor thesis can also focus on the visualisation of mobile sensors. Different visualisations will be built with current web technologies and evaluated in a user study.
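
As a starting point, sensor data can be pulled from the openSenseMap API and compared with simple statistics. A minimal sketch, assuming the /boxes/:boxId/data/:sensorId endpoint returns a JSON array of {"value", "createdAt"} objects; the box and sensor IDs are placeholders and the endpoint should be checked against the current API documentation:

```python
# Minimal sketch: fetch measurements of two senseBox sensors and compare them.
import requests
import statistics

API = "https://api.opensensemap.org"

def fetch_values(box_id, sensor_id, from_date, to_date):
    """Fetch raw measurements of one sensor as a list of floats."""
    url = f"{API}/boxes/{box_id}/data/{sensor_id}"
    r = requests.get(url, params={"from-date": from_date, "to-date": to_date})
    r.raise_for_status()
    return [float(m["value"]) for m in r.json()]

# Hypothetical IDs of two senseBoxes measuring temperature.
a = fetch_values("BOX_A_ID", "SENSOR_A_ID", "2024-06-01T00:00:00Z", "2024-06-02T00:00:00Z")
b = fetch_values("BOX_B_ID", "SENSOR_B_ID", "2024-06-01T00:00:00Z", "2024-06-02T00:00:00Z")

print("box A: mean", statistics.mean(a), "stdev", statistics.stdev(a))
print("box B: mean", statistics.mean(b), "stdev", statistics.stdev(b))
```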

Contact: Thomas Bartoschek

STML: Developing a web-based forest-fire spread model for Germany

With climate change, more frequent and more intense forest fires pose a serious threat to the environment and to societies. In a collaboration between the Institute of Landscape Ecology and the Institute of Geoinformatics, we aim to extend a web-based burn simulator (known as Ember-sim) to simulate the spread of fire across the landscape based on established fire behaviour models. The burn simulator will then be used for educational purposes, research, and training of professional fire practitioners, governmental officers and volunteers from the community.

The project can be delivered at the BSc or MSc level, and the candidate will be in charge of extending Ember-sim, originally developed for the Australian continent, to Germany. The project starting date is open until the position is filled.
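
For orientation, fire spread across a landscape is often modelled on a grid of cells whose state changes over time. The following is a deliberately simplified cellular-automaton sketch; Ember-sim and established fire behaviour models are considerably more sophisticated (wind, fuel, slope, rate of spread), and the spread probability here is an arbitrary placeholder:

```python
# Minimal sketch of fire spread as a cellular automaton (illustration only).
import random

UNBURNT, BURNING, BURNT = 0, 1, 2

def step(grid, p_spread=0.4):
    """Advance the fire by one time step; burning cells may ignite 4-neighbours."""
    n = len(grid)
    new = [row[:] for row in grid]
    for y in range(n):
        for x in range(n):
            if grid[y][x] == BURNING:
                new[y][x] = BURNT
                for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                    yy, xx = y + dy, x + dx
                    if 0 <= yy < n and 0 <= xx < n and grid[yy][xx] == UNBURNT:
                        if random.random() < p_spread:
                            new[yy][xx] = BURNING
    return new

grid = [[UNBURNT] * 20 for _ in range(20)]
grid[10][10] = BURNING              # ignition point
for _ in range(15):
    grid = step(grid)
print(sum(row.count(BURNT) for row in grid), "cells burnt")
```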

Are you experienced with JavaScript and have interest in climate change-related topics?

Please contact:
Prof. Mana Gharun mana.gharun@uni-muenster.de
Or
Dr Christian Knoth christian.knoth@uni-muenster.de

 

Contact: Christian Knoth

SITCOM: QGIS plugin for automating the documentation of map making

Reproducibility is a core element of the scientific method.
In the Geosciences, the insights derived from geodata are frequently communicated through maps, and the computational methods to create these maps vary in their ease of reproduction.
While GIS desktop applications (e.g., QGIS, ArcGIS) are widely used by professionals and researchers in the Geosciences for map production, they may hinder map reproducibility as the details of the map making process become more challenging to document.
For this thesis, a QGIS plugin will be developed, which automates the documentation of the datasets, the spatial operations and other metadata that were used to produce maps.
The plugin should output a structured JSON file that ties together all the necessary components, steps and information for the (re)production of a map within the environment of QGIS, in order to facilitate the reproducibility of maps that were not created programmatically.
The plugin will be developed in Python (unless the student feels comfortable with C++).
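
A possible starting point is to read the layers of the current QGIS project and serialise their key properties to JSON. A minimal sketch, assuming it runs inside QGIS (Python console or a plugin) where the qgis.core API is available; the JSON structure is a simple placeholder, not one of the schemas linked below:

```python
# Minimal sketch: dump layer metadata of the open QGIS project to a JSON file.
import json
from qgis.core import QgsProject

def collect_map_metadata():
    project = QgsProject.instance()
    layers = []
    for layer in project.mapLayers().values():
        layers.append({
            "name": layer.name(),          # layer title as shown in QGIS
            "source": layer.source(),      # data source URI / file path
            "crs": layer.crs().authid(),   # e.g. "EPSG:4326"
        })
    return {"projectFile": project.fileName(), "layers": layers}

with open("map_documentation.json", "w") as f:
    json.dump(collect_map_metadata(), f, indent=2)
```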
 

QGIS plugins for reference: https://plugins.qgis.org/plugins/MetadataDbLinker/, https://plugins.qgis.org/plugins/mapexport/, https://plugins.qgis.org/plugins/project_report/

Structured schemas for reference: https://schema.org/Map, https://schema.org/SoftwareApplication, https://www.researchobject.org/ro-crate/

Contact: Eftychia Koukouraki

SITCOM: Exploring forms of interaction in the Immersive Video Environment (IVE)

 

The IVE displays panoramic video footage on large screens in a cave-like environment, which creates a sense of physical presence and enables people to better interact with and intervene in the image of their surroundings. However, the forms of interaction are very limited - yet needed - when users want to create objects on the screen as video overlays and interact with them (e.g., adding and scaling a tree, modifying a building façade, etc.).

 

This study aims to explore forms of interaction in the IVE for effectively creating and interacting with overlays. The student will work with the IVE system in the SITCOM Lab at ifgi. The study will cover the following steps: creating video footage and overlays for the IVE, exploring interaction tools (e.g., smartphone, HTC Vive controllers, touch pad), and designing the forms of interaction (e.g., adding, removing, scaling, rotating and placing overlays) and the UI components.

 

Contact: Simge Özdal Oktay

SIL: Spatial movement behaviour

Every day we move through space and time. To reach more distant destinations, we use maps in navigation systems. But we also navigate purposefully along specific routes in our immediate surroundings, for example when visiting a museum. This Bachelor thesis investigates how technology can be used to observe and interpret people's spatial movement behaviour. Depth cameras such as the Kinect can provide valuable information about human interactions inside buildings. Outdoors, GPS can be complemented with information about viewing direction and the surroundings, enabling a deeper interpretation. Virtual environments allow the surroundings to be modified systematically in order to study their influence on wayfinding behaviour.

A Bachelor thesis can address these questions from a technical perspective (e.g., developing a system to capture relevant movement information), from a data perspective (e.g., interpreting movement data collected through technology-supported observation), or from an experimental perspective (e.g., studying wayfinding strategies and movement behaviour in self-programmed virtual environments in Unity).

Contact: Angela Schwering

SPARC: 3D SketchMaps: A VR tool for understanding spatial knowledge in the vertical

Sketch maps are traditionally drawn on a flat sheet of paper. However, with accessible, consumer-grade Virtual Reality devices it is now very easy to put on a VR headset and "draw in the air". This might be particularly useful in situations where you want to communicate how the environment looks in the vertical: for instance, when you want to draw the path of a drone flying above some landscape, describe to another person how to navigate a multi-level shopping mall, or explain to a friend how to get out of a metro station near your home.

The goal of this thesis is to understand the potential of 3D sketch maps in VR as a tool for communicating spatial knowledge.

The thesis can be approached from two perspectives:

From the technological perspective, it is possible to design new ways of drawing 3D sketch maps in Virtual Reality and compare them to traditional paper sketch maps. It is also interesting to explore how complex 3D sketch maps can be analysed systematically and what tools we can design to support the analysis of such (sometimes very complex and very messy) 3D drawings.

From the research perspective, it is necessary to conduct user experiments to understand whether people can make good use of this new possibility offered by VR, or whether they always find it more intuitive to draw on paper. It is also important to identify contexts in which 3D sketch maps are necessary: perhaps for most navigational scenarios a sheet of paper is sufficient? If so, what are the specific cases (aviation, complex buildings) where 3D sketch maps are necessary?

One straightforward way to study this is to ask participants to play a VR game where vertical navigation is important (virtual scuba diving, a submarine simulator, flying a Star Wars battleship, a drone or a passenger plane, downhill skiing) and to draw sketch maps based on this experience.

Kim, K. G., Krukar, J., Mavros, P., Zhao, J., Kiefer, P., Schwering, A., ... & Raubal, M. (2022). 3D Sketch Maps: Concept, Potential Benefits, and Challenges. In 15th International Conference on Spatial Information Theory (COSIT 2022) (Vol. 240, p. 14). Schloss Dagstuhl, Leibniz-Zentrum für Informatik.

Contact: Jakub Krukar

SPARC: Controlling the optic flow of Virtual Reality systems

When you take one step forward, the information you see in the periphery of your visual field slightly changes. The speed and content of this change is known as "optic flow". Optic flow is crucial for human navigation - you probably don't realise it, but your mind continuously processes these small changes in the visual field in order to calculate how your location in space has changed.
 
Virtual Reality opened up new opportunities for studying wayfinding. However, optic flow is (to various degrees) unnatural in *all* VR systems due to factors such as: limited field of view in the head-mounted display, flat computer displays, unintuitive speed of movement through the virtual world, lack of detail in the periphery of VR models. Since the optic flow is unnatural, navigation behaviour in VR systems might also be different from the real world.
 
In this thesis you will create a method to modify the optic flow of a state-of-the-art VR system with an integrated walking treadmill and motion tracking sensors. You will have to understand which parts of the visual field (e.g., left/right, centre, or upper/bottom parts) are more important for a realistic optic flow, create a way to restrict them, and test your method in a user study. The thesis will be conducted in collaboration with the Institute of Sport and Exercise Sciences.
 

Contact: Jakub Krukar

CVMLS: Current machine learning methods in image analysis

Hardly any other field of computer science is developing as rapidly as machine learning. With more than 100 publications per day, it is becoming increasingly difficult to keep track of the image analysis methods relevant to geoinformatics. For this reason, this project will work through selected machine learning methods and apply them to various image analysis tasks. The aim is to get to know a specific class of machine learning algorithms and to place it in the context of existing algorithms.

Contact: Benjamin Risse

ifgicopter: Re-design of the StudMap14 geodata portal

The ZDM/IVV, in cooperation with ifgi, is looking for a BSc candidate in Geoinformatics, starting immediately, for an innovative "re-design" of the StudMap14 geodata portal (http://gdione4all.uni-muenster.de/joomla/index.php/studmap14).

Knowledge of (or willingness to become familiar with) the GeoServer environment and an interest in modern SDI solutions are prerequisites for this BSc project (funding with 5 student assistant hours for 6 months may be possible).

If you are interested, please contact Dr. Torsten Prinz directly!

Contact: Torsten Prinz

SITCOM: Location-based fact checking (topic cluster)

Recent years have seen a sharp increase in the spread of fabricated or outright false information (e.g., conspiracy theories, baseless claims, invented events). The broad availability of generative AI will most likely not only strengthen this trend but also make it much harder to recognise fabricated information.

The idea behind this topic is to investigate ways of using location information strategically to verify location-related information. A large amount of information has a direct link to a real-world location, for example photographs (or generated images) taken at a particular place (or claimed to have been taken there).

This location can be used to check the truthfulness of the information or media in different ways. One approach could be to use trustworthy cartographic information to compare image contents or textual descriptions with information from a map. A second approach could look into checking facts on site, e.g. via a location-based service that directs fact-checkers to where they can assess the truthfulness of certain information by following a particular protocol. This protocol should ensure that they can provide strong evidence for the truthfulness of the information under review (e.g., via AR-based comparative overlays). A third approach could investigate ways of using location-sensor data (GPS, compass, gyroscope, timestamps) to create a blockchain for photos taken at a particular location. This would allow individuals to create trustworthy information and others to verify it. Finally, using reliable historical location information (maps, photos, datasets) to assess the truthfulness of newly posted information is another approach worth investigating. For all approaches, it would be important to study potential attacks (especially those arising from prompt engineering for current LLMs or from coordination between malicious people on site).
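
As an illustration of the third approach, each photo record could include the photo's hash, its location metadata and the hash of the previous record, so that tampering with earlier entries becomes detectable. A minimal sketch of such a hash chain (a conceptual illustration only, not a full blockchain; the field names and values are placeholders):

```python
# Minimal sketch of a hash chain for geotagged photos.
import hashlib, json, time

def add_photo(chain, image_bytes, lat, lon, heading):
    """Append a record whose hash covers the photo, its location metadata and
    the hash of the previous record."""
    prev_hash = chain[-1]["hash"] if chain else "0" * 64
    record = {
        "timestamp": time.time(),
        "lat": lat, "lon": lon, "heading": heading,
        "photo_sha256": hashlib.sha256(image_bytes).hexdigest(),
        "prev_hash": prev_hash,
    }
    record["hash"] = hashlib.sha256(
        json.dumps(record, sort_keys=True).encode()).hexdigest()
    chain.append(record)
    return chain

chain = []
add_photo(chain, b"...jpeg bytes...", 51.9692, 7.5954, 270.0)  # hypothetical photo taken in Muenster
print(chain[-1]["hash"])
```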
 
Each of these approaches can become a thesis topic. In principle, these topics could be tackled either as a Master thesis or as a Bachelor thesis. However, they require thorough background research and can be technically demanding. If you are interested in this general topic area, please get in touch with Chris Kray.
 

Contact: Christian Kray

SIL: Development of a statistics portal for visualising and analysing environmental data from a citizen science context

The openSenseMap is a platform for environmental sensor data from measurement stations of any kind. Currently, only raw data from senseBoxes are stored, and the data can only be displayed per senseBox. In addition, it is possible to display the collected data interpolated for a single point in time.

The goal of this thesis is to develop a portal for the openSenseMap in which users can compare multiple senseBoxes and sensors using statistical methods and integrate external data sources, such as those of the DWD.

Contact: Thomas Bartoschek

SPARC: Measuring spatial knowledge in VR compared to real-world environments

Virtual Reality is known to cause distortions in our perception of distance. Things inside VR seem closer than they would be in reality. This is an interesting problem because many wayfinding studies use VR as a substitute for real-life environments. Yet, if we know that distance estimations are consistently distorted in VR, can we trust VR results obtained with other methods for measuring spatial knowledge?

The goal of this thesis is to compare different methods of measuring spatial knowledge, in particular:

- sketch maps (qualitative and metric-based analyses)

- distance estimation tasks

- pointing tasks

- perspective taking tasks

across the Virtual Reality environment and corresponding real life environment.

The hypothesis is that some of these methods work equally well in VR and in real-life environments, while others should not be used in VR. For example, our analysis from the paper below suggests that people who explore the same building in VR and in the real world draw equally good sketch maps, even though their distance estimation is distorted in VR.

Li, H., Mavros, P., Krukar, J., & Hölscher, C. (2021). The effect of navigation method and visual display on distance perception in a large-scale virtual building. Cognitive Processing, 22(2), 239-259.

Contact: Jakub Krukar

SIL: Geovisualisation of open environmental data on the web

The openSenseMap provides live data on a wide range of environmental phenomena. However, it is currently difficult to explore these data. The goal of this Bachelor thesis is to create new ways of presenting the data interactively. Interesting examples would be live interpolations of particulate matter values or the temperature development in city centres at the height of summer. In order to make these data available to the largest possible audience, this Bachelor thesis will investigate what possibilities current web technologies offer. Different visualisations will be generated and evaluated in a user study.
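
One building block for such live interpolations is a simple inverse-distance-weighted (IDW) surface computed from the current station values. A minimal sketch (the coordinates and values are made-up placeholders; a real implementation would run in the browser or on a tile server):

```python
# Minimal sketch: IDW interpolation of point measurements onto arbitrary points.
def idw(stations, values, grid_points, power=2.0):
    """stations: [(x, y)], values: [v], grid_points: [(x, y)] -> interpolated values."""
    result = []
    for gx, gy in grid_points:
        num, den = 0.0, 0.0
        for (sx, sy), v in zip(stations, values):
            d2 = (gx - sx) ** 2 + (gy - sy) ** 2
            if d2 == 0:                    # grid point coincides with a station
                num, den = v, 1.0
                break
            w = 1.0 / d2 ** (power / 2)
            num += w * v
            den += w
        result.append(num / den)
    return result

stations = [(7.60, 51.96), (7.63, 51.97), (7.58, 51.95)]   # lon/lat of three boxes
pm25 = [12.0, 18.5, 9.3]                                    # hypothetical PM2.5 values
print(idw(stations, pm25, [(7.61, 51.96)]))
```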

 

 

Contact: Thomas Bartoschek

SIL: Quality assurance of crowd-sourced sensor data

This thesis will investigate to what extent quality assurance of crowd-sourced sensor data in a sensor network can be automated. This is a new and highly relevant research field: large amounts of data allow the application of statistical or machine learning methods. Traditional methods are often not usable because the data must be available in real time. In addition, crowd-sourced data pose a particular challenge, since it cannot be assumed that all data were collected with correct or consistent measurement procedures. Finally, low-cost sensors themselves have measurement errors that differ strongly from those of professional sensors, or measurement stations are poorly mounted by citizen scientists. The goal is to investigate the factors influencing data quality and sensor accuracy, to develop methods for the automated identification of erroneous data and possible error sources, and to automatically decide whether data can be corrected (e.g., by recalibrating the sensors) or whether certain data must be excluded.
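
As one example of an automatable check, a measurement can be compared against the values reported by other stations at the same time step. A minimal sketch using a robust modified z-score (a single rule like this is only illustrative; real quality assurance would combine several checks and account for spatial distance between stations):

```python
# Minimal sketch: flag stations whose value deviates strongly from the others.
import statistics

def flag_outliers(readings, threshold=3.5):
    """readings: {station_id: value} for one time step -> set of suspicious ids.
    Uses the modified z-score based on the median absolute deviation (MAD)."""
    values = list(readings.values())
    med = statistics.median(values)
    mad = statistics.median([abs(v - med) for v in values])
    if mad == 0:
        return set()
    return {sid for sid, v in readings.items()
            if 0.6745 * abs(v - med) / mad > threshold}

# Hypothetical PM2.5 values of five stations; station "d" is clearly off.
print(flag_outliers({"a": 11.2, "b": 12.0, "c": 10.8, "d": 95.0, "e": 11.5}))
```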

Contact: Thomas Bartoschek

Hybrid participation in urban planning projects

For urban projects to meet the needs and preferences of people, public participation is essential. Ideally, this happens along the entire planning process and enables a broad range of groups to get involved.

Different approaches have been proposed in this context. They vary according to the degree of participation (from merely being informed to actively taking decisions), the temporal dimension (synchronous or asynchronous), the location (on site or remote) and the medium (online, mobile or in person).

The goal of this thesis is to investigate, develop and evaluate hybrid options that combine multiple media and enable synchronous and asynchronous participation in urban planning projects. This could, for example, take the form of a web-based system that can be accessed through a public display and allows for synchronous/asynchronous communication between citizens and planners.

The exact combination of technologies and functionalities to be investigated, as well as how urban planning projects are visualised (maps, 3D, Augmented Reality), is flexible.

The thesis can be carried out as either a Master or a Bachelor thesis. There is the possibility of evaluating the approach/system in the context of one of the events/activities organised by the StadtLabor Münster.
 

Contact: Christian Kray

SITCOM: AR evaluation toolkit

Augmented Reality applications are widely available now, and their number is expected to increase further in the future. They are, however, difficult to evaluate effectively as they strongly depend on interacting with their environment. For example, to assess the effectiveness and usability of a particular user interface or overlay, it is important to consider how it interacts with what people see around them. Are users able to connect the overlay to the corresponding real-world object? Can they interact with the UI elements while being in the actual environment?

 

The goal of this thesis is to develop and evaluate an evaluation toolkit for AR applications, which allows for systematic, repeatable and low-effort evaluation.  The approach to investigate here is to use a virtual environment (such as the Immersive Video Environment at ifgi) and to “trick” an app into believing it is located at the site shown by the virtual environment.  This ensures a controlled environment and thus allows for a systematic evaluation of AR applications.

 

Contact: Christian Kray

SIL: Development and evaluation of dynamic learning tutorials with a gamification approach

The graphical programming environment Blockly for senseBox (blockly.sensebox.de) makes it possible to get started with microcontroller programming quickly and easily, even without prior programming experience. As a next step, interactive tutorials are to be developed that can be completed directly in the Blockly interface. A reward system based on "Open Badges" (https://openbadges.org/) is to be integrated. An example of what such an integration could look like can be found here: https://studio.code.org/hoc/1
 
 

Contact: Thomas Bartoschek

SITCOM: SITCOM topics (general information)

Research in SITCOM generally focusses on enabling all kinds of users to solve real-world problems using spatial information.  Have a look at the group's web page for more details and example projects.

If you are generally interested in this area or have an idea of a thesis topic that falls into that area, feel free to get in touch with one of the current members of SITCOM.

Contact: Christian Kray

SIL: User Centered Design for an educational WebGIS

WebGIS NRW (webgis.nrw) is a prototypical WebGIS for the educational context, based on modern open source technologies (MapBox GL). The goal of this thesis is to further develop the WebGIS according to user-centred design principles and to evaluate its usability.

Contact: Thomas Bartoschek

SIL: Exploratory analysis of open source hardware sensors

In this Bachelor thesis, new sensor components for environmental phenomena (e.g., wind, water, radioactivity) are to be identified for the senseBox, integrated into the senseBox ecosystem of open source hardware, the openSenseMap geodata infrastructure and the Blockly programming environment, and evaluated.

Contact: Thomas Bartoschek

SPARC: Generalisation in Sketch Maps

When people draw sketch maps, they generalise information compared to the ground-truth information they perceived in the world. For example, many buildings belonging to a university campus are drawn as a single polygon labelled "campus".

This is a challenge for analysing sketch maps because this information is not wrong, yet a computer system for automated analysis would interpret it as such.

In the paper linked below we presented a classification of generalisation types in sketch maps. We also have a working software prototype for analysing generalisation in sketch maps.

In this thesis you will test the impact of one (chosen) variable on the level of generalisation. Sample research questions:

- If we ask people to draw areas of different sizes, do they start to generalise more?

- Do people generalise important streets more/less compared to less important ones (e.g., accounting for the "integration" Space Syntax metric)?

- If we give people less time to draw, do they generalise or omit information?

- If we ask people to draw the sketch map with a different task in mind (e.g., walking through a campus vs. walking near a campus vs. walking away from a campus), are the generalisations different?

Manivannan, C., Krukar, J., & Schwering, A. (2022). Spatial generalization in sketch maps: A systematic classification. Journal of Environmental Psychology, 101851. https://doi.org/10.1016/j.jenvp.2022.101851

 

Contact: Jakub Krukar and Angela Schwering

SII: UX analysis - spatial data infrastructures

The development of spatial data infrastructures (SDIs) aims to noticeably improve the availability and usability of geodata for a wide range of applications. Although every strategy paper states that user requirements should be at the centre of this effort, development is in fact strongly supply-driven, without systematic consideration of user requirements and corresponding success monitoring.

Using the GDI NRW as an example, this thesis will show how a lightweight, focused UX analysis can produce a clear picture of the strengths and weaknesses of the SDI with respect to the requirements of a specific user group (wind farm planners), and that concrete development goals can be derived and prioritised from this picture.

Scope of the thesis: review of the fundamentals and assessment of related work; preparation and execution of the UX analysis, including expert interviews and own technical tests; interpretation of the results; discussion of the method; recommendations.

 

Contact: Albert Remke

SPARC: Sketch Maps as a tool for learning new environments

For decades, sketch maps have been used as a tool for measuring spatial knowledge - i.e., for estimating how well participants know and understand certain areas. However, evidence from psychological memory studies demonstrates that drawing something can also be a good strategy for memorising a set of objects. For instance, if you need to memorise the layout of a room, drawing the room as you see it is a better memorisation strategy than repeating the names of the objects aloud or in your head. This thesis will test whether drawing a sketch map is a good memorisation strategy for spatial environments and how this approach can be implemented in a gamified app. The problem is relevant for situations in which people must learn new spatial environments, e.g. to become taxi/delivery drivers, or when they move to a new city.
The thesis can be completed with focus on one of two aspects:
**Computational focus:** You will design a teaching app that (a) records the user's trajectory together with a list of landmarks that were visible along the route, and (b) after a delay, asks users to draw the area that they have travelled. Here the key problem may be to select routes and landmarks that the user should be asked to draw (based on the recorded trajectories).
**Evaluation focus:** You will design and conduct an experiment to evaluate the following research question: does drawing a sketch map help people memorise the environment better, compared to alternative strategies? This does not require creating an app, and can be conducted as an in-situ experiment or inside our Virtual Reality lab.

Contact: Jakub Krukar

SPARC: In-and-Out: What happens when we enter and exit buildings?

Navigation is usually studied either completely outdoors or completely indoors. But our real-life wayfinding is different - we continuously enter and exit buildings without feeling that this is now a completely different experience.

The goal of this thesis is to understand what happens when people exit/enter buildings - how their navigation changes, and how technology can embrace the difference between these two contexts.

This can be approached from two perspectives.

From the technological perspective it is possible to design a prototype navigation system that---assuming indoor localisation is possible---works indoors and outdoors, and changes its behaviour depending on this context.

From the research perspective, it can be investigated how human navigation changes when people enter or exit buildings. This can be studied, for instance, with mobile eye-tracking, by asking experiment participants to move indoors/outdoors and analysing how eye-tracking measures vary between these two contexts.

Krukar and van Eek. (2019). The Impact of Indoor / Outdoor Context on Smartphone Interaction During Walking.

Contact: Jakub Krukar

SIL: Collaborative GeoGames in virtual reality

GeoGami is a location-based game for fostering spatial orientation skills: the player has to solve several navigation tasks leading to different locations and answer questions at these locations. Currently, GeoGami is primarily played individually. In this Bachelor thesis, GeoGami will be extended to a multiplayer version in which players can compete against each other or work together to solve the tasks.

After the conceptual development of different collaborative or competitive game formats, the games will be implemented in GeoGami and evaluated. The focus here is on the virtual world in Unity as well as on collaborative game formats.

More information about GeoGami can be found on our project website https://geogami.ifgi.de/ and on GitHub. Experience with Android programming and an interest in location-based games are an advantage.

Contact: Angela Schwering, Yousef Qamaz, Mitko Aleksandrov

SPARCThe "wow" effect of buildings: Computational modelling of perceived spaciousness in VR

Perceived spaciousness is the subjective feeling of how large a given space is. Just think about the difference between seminar room 242 and the atrium in the centre of the GEO1 building. Does the atrium feel 5x larger? 10x larger? 20x larger? Or imagine walking through a small entrance into a large temple. How much volume does it need to have to create the feeling of "wow, this is huge!"?

Research has shown that people have a very distorted way of judging this, and that the volume of space alone (the amount of "empty air" surrounding you) is not the only important factor. It matters what shape this empty space has (is it taller or longer?), where you are currently standing (above or below the open space? close to the wall or in its centre?), and where you entered it from (entering a large room from a small one vs. a small one from a large one).

The goal of this thesis is to understand what affects our perceived spaciousness. The thesis can be approached from two perspectives.

From the technological perspective, it is necessary to create new ways of studying perceived spaciousness in Virtual Reality. Researchers have used various ways to ask people "how spacious does this space feel right now?" - they asked them verbally to rank it from 1 to 7, or gave them a circular dial that participants rotated when the space felt more spacious. VR opens up the chance to create better ways to gather this kind of data.

From the research perspective VR offers us the chance to design architectural experiments impossible in real life. For instance, we can move a person from a very small space to an extremely spacious one in a matter of seconds, and test their reactions. We can change their pathway (e.g., reverse it so that they move from a large space to a small one) and see if the reaction is symmetric. We can also systematically modify the Virtual Reality rooms by changing their shape, size, or lighting, and test how these changes affect perceived spaciousness.
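
Perceived spaciousness is closely related to isovists, i.e. the region of space visible from a viewpoint (see the reference below). The following is a minimal sketch of a planar (2D) isovist computed by ray casting on an occupancy grid; it is a simplification, not the embodied 3D isovist method of the cited paper, and the room layout is a made-up example:

```python
# Minimal sketch: approximate the 2D isovist area from a viewpoint by ray casting.
import math

def isovist_area(grid, origin, n_rays=360, max_dist=50.0, step=0.25):
    """grid: 2D list, 0 = free space, 1 = wall; origin = (x, y) in cell units."""
    visible = set()
    ox, oy = origin
    for i in range(n_rays):
        ang = 2 * math.pi * i / n_rays
        dx, dy = math.cos(ang), math.sin(ang)
        d = 0.0
        while d < max_dist:
            cx, cy = int(ox + dx * d), int(oy + dy * d)
            if not (0 <= cy < len(grid) and 0 <= cx < len(grid[0])):
                break                      # ray left the model
            if grid[cy][cx] == 1:
                break                      # ray hit a wall
            visible.add((cx, cy))
            d += step
    return len(visible)                    # visible area in grid cells

# A 10 x 10 room with a single dividing wall (made-up example).
room = [[0] * 10 for _ in range(10)]
for x in range(3, 10):
    room[5][x] = 1
print(isovist_area(room, origin=(2.0, 2.0)))
```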

Krukar, J., Manivannan, C., Bhatt, M., & Schultz, C. (2021). Embodied 3D isovists: A method to model the visual perception of space. Environment and Planning B: Urban Analytics and City Science, 48(8), 2307-2325.

 

Contact: Jakub Krukar

CVMLS: Insect detection in visual terrestrial remote sensing data

In recent years, insect numbers have declined dramatically, yet methods for the non-invasive monitoring of insect populations are still lacking. Using a new insect dataset that provides various visual cues, such as colour information and motion cues, this project pursues the following goals: (1) testing new imaging modalities and insect datasets; (2) identifying the relevant machine learning techniques; and (3) developing adapted machine learning models to automatically detect the tiny animals in cluttered environments. In particular, current object detection algorithms are to be used, which are applicable almost universally in geoinformatics and are therefore not limited to the terrestrial remote sensing data considered in this work.
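
As a point of reference for current object detection algorithms, an off-the-shelf detector can be run on a single frame with a few lines of code. A minimal sketch using torchvision (a COCO-pretrained model does not know insect classes and would have to be fine-tuned on the project's insect dataset; the image path is a placeholder):

```python
# Minimal sketch: run a pretrained Faster R-CNN detector on one image frame.
import torch
import torchvision
from torchvision.models.detection import fasterrcnn_resnet50_fpn

model = fasterrcnn_resnet50_fpn(weights="DEFAULT")   # COCO-pretrained weights
model.eval()

img = torchvision.io.read_image("frame_0001.jpg").float() / 255.0  # [C, H, W] in [0, 1]

with torch.no_grad():
    prediction = model([img])[0]          # dict with "boxes", "labels", "scores"

keep = prediction["scores"] > 0.5         # keep reasonably confident detections
print(prediction["boxes"][keep], prediction["labels"][keep])
```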

Contact: Benjamin Risse

SITCOM: Privacy-preserving location-based services (adaptive algorithms, infrastructures, visualisations)

In order to benefit from location-based services (LBS) such as navigation support, local recommender systems or delivery services, users need to share their location with the service provider. This can have negative implications for their privacy as the service provider might learn a lot about users, e.g. movement patterns, places they frequent and inferred knowledge such as health conditions.

At the same time, service provision would also be possible if users did not share their precise location: a weather forecast app, for example, might work well enough with very coarse-grained location information. For some LBS, having access to lower-quality location information might be problematic. For example, providing turn-by-turn instructions might be impossible if users only share coarse-grained location information.
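
One simple way to think about "levels of location quality" is to degrade the coordinates before they leave the device, e.g. by snapping them to a coarser grid. A minimal sketch (the grid sizes are arbitrary illustration values, not a recommendation):

```python
# Minimal sketch: coarsen a WGS84 position to different quality levels.
import math

def coarsen(lat, lon, cell_deg):
    """Snap a coordinate to the centre of a grid cell of size cell_deg degrees."""
    snap = lambda v: math.floor(v / cell_deg) * cell_deg + cell_deg / 2
    return snap(lat), snap(lon)

precise = (51.96236, 7.62571)                 # hypothetical user position in Muenster
print(coarsen(*precise, cell_deg=0.001))      # street-block level
print(coarsen(*precise, cell_deg=0.01))       # neighbourhood level
print(coarsen(*precise, cell_deg=0.1))        # city level (enough for a weather app)
```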

The goal of this topic is thus to develop and evaluate approaches for different types of LBS to adapt to different levels of quality of location information. The topic can be tackled from different perspectives and can therefore serve as a starting point for several different thesis projects:

  • on the algorithmic level, an analysis of common algorithms used in LBS to provide certain services (such as routing) can be carried out to develop and evaluate new/improved algorithms that can better cope with different levels of location quality
  • on the infrastructure level, different frameworks and libraries for the development of LBS can be analysed regarding how well they support coping with different levels of location quality; this can then inform the design and evaluation of an improved solution
  • on the visualisation/user interface level, an analysis of existing solutions to convey instructions/information to users in LBS can be carried out with respect to how well they work when provided with location information of lower quality; this can then inform the design and evaluation of improved, adaptive visualisations and user interfaces

Students interested in this topic area can have a look at the SIMPORT project and the publications listed below:

 

Contact: Christian Kray


Master

ifgicopter: UAV / UAS (Drones) Remote Sensing/GIS: Vegetation-specific geodata analysis and workflows

Topic: As part of the joint IFGIcopter and ILÖK UAV initiative, vegetation-specific remote sensing data are continuously acquired and analysed with a wide range of UAV sensors (drones). Particular focus lies on capturing and analysing vegetation patterns, vitality parameters and invasive species using multispectral UAS data. In this context, data processing and visualisation (including 3D) with a variety of geoinformatics tools (GIS, commercial software, web tools, custom programming, etc.) play a major role. If you are interested in an interdisciplinary question in this area, please get in touch with the two contact persons [2017].

Contact persons: Torsten Prinz / Jan Lehmann

Contact: Torsten Prinz

STML: Developing a web-based forest-fire spread model for Germany

With climate change, more frequent and more intense forest fires pose a serious threat to the environment and to societies. In a collaboration between the Institute of Landscape Ecology and the Institute of Geoinformatics, we aim to extend a web-based burn simulator (known as Ember-sim) to simulate the spread of fire across the landscape based on established fire behaviour models. The burn simulator will then be used for educational purposes, research, and training of professional fire practitioners, governmental officers and volunteers from the community.

The project can be delivered at the BSc or MSc level, and the candidate will be in charge of extending Ember-sim, originally developed for the Australian continent, to Germany. The project starting date is open until the position is filled.

Are you experienced with JavaScript and have interest in climate change-related topics?

Please contact:
Prof. Mana Gharun mana.gharun@uni-muenster.de
Or
Dr Christian Knoth christian.knoth@uni-muenster.de

 

Contact: Christian Knoth

SITCOM: Analysing and mapping emotions for noise quality

Sustainability is a broad concept with various interconnected aspects. Citizens' feelings and perceptions of the environment are one of them, but they are usually neglected in practice due to insufficient tools and expertise. However, advances in citizen science and the possibilities provided by digital visualisation techniques allow a better understanding of people's emotions towards existing circumstances. This knowledge leads to a more accurate assessment of sustainability and better decisions at the local scale.

The thesis aims to address local emotional indicators for noise quality in Münster by analysing and spatio-temporally modelling dynamic emotions. To achieve this aim, the study suggests replicating selected locations in Münster at different time frames using the Immersive Video Environment (IVE) and working with citizens to collect information about their emotions. The student is free to choose the software and language for the analysis and spatio-temporal modelling. Basic knowledge of Unity and of one programming language will be an asset for this study.

Suggested readings:

Kals, E., Maes, J. (2002). Sustainable Development and Emotions. In: Schmuck, P., Schultz, W.P. (eds) Psychology of Sustainable Development. Springer, Boston, MA. https://doi.org/10.1007/978-1-4615-0995-0_6

Murphy, Enda and Eoin A. King. “Mapping for sustainability: environmental noise and the city.” (2013).

Contact: Simge Özdal Oktay

SITCOM: Exploring forms of interaction in the Immersive Video Environment (IVE)

 

The IVE displays panoramic video footage on large screens in a cave-like environment, which creates a sense of physical presence and enables people to better interact with and intervene in the image of their surroundings. However, the forms of interaction are very limited - yet needed - when users want to create objects on the screen as video overlays and interact with them (e.g., adding and scaling a tree, modifying a building façade, etc.).

 

This study aims to explore forms of interaction in the IVE for effectively creating and interacting with overlays. The student will work with the IVE system in the SITCOM Lab at ifgi. The study will cover the following steps: creating video footage and overlays for the IVE, exploring interaction tools (e.g., smartphone, HTC Vive controllers, touch pad), and designing the forms of interaction (e.g., adding, removing, scaling, rotating and placing overlays) and the UI components.

 

Contact: Simge Özdal Oktay

SIL: Spatio-Temporal & Semantic Analysis of Spatial Movement

When we compare the performance of people navigating through the environment, we usually look at the distance travelled, the time spent, or the number of times a person gets lost, resulting in erroneous navigation decisions. This is a rather simple analysis: using a navigation app, we can collect much more information about the actual spatial behaviour.
 
Our GeoGame GeoGami records the behaviour of people during navigation tasks. Besides the actual travelled trajectory, GeoGami records the orientation, …
 
A thesis could explore how to make use of the collected data and aggregate it into meaningful measures of wayfinding performance. The thesis can also explore different visualisations of individuals and of individuals in comparison to others in a group.
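
As an example of such an aggregation, a logged trajectory can be reduced to a path length and a count of longer stops. A minimal sketch (the GPS fixes are made-up placeholders and GeoGami's actual log format may differ):

```python
# Minimal sketch: aggregate a trajectory into simple wayfinding measures.
import math

def haversine_m(p, q):
    """Great-circle distance in metres between two (lat, lon) points."""
    R = 6371000.0
    lat1, lon1, lat2, lon2 = map(math.radians, (*p, *q))
    a = (math.sin((lat2 - lat1) / 2) ** 2
         + math.cos(lat1) * math.cos(lat2) * math.sin((lon2 - lon1) / 2) ** 2)
    return 2 * R * math.asin(math.sqrt(a))

def summarise(fixes, stop_radius_m=3.0):
    """fixes: list of (timestamp_s, lat, lon) -> path length and stop count."""
    length, stops = 0.0, 0
    for (t0, *p0), (t1, *p1) in zip(fixes, fixes[1:]):
        d = haversine_m(p0, p1)
        length += d
        if d < stop_radius_m and (t1 - t0) >= 10:   # barely moved for >= 10 s
            stops += 1
    return {"path_length_m": length, "stops": stops}

track = [(0, 51.9625, 7.6251), (10, 51.9626, 7.6253), (20, 51.9626, 7.6253), (35, 51.9629, 7.6258)]
print(summarise(track))
```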

Contact: Angela Schwering

SPARC: Using VR to create self-adjusting buildings based on spatio-temporal data of their occupants

Buildings of the future will have to be much more flexible than they are now. One envisioned possibility is that building interiors will change their shapes depending on the current context of use, personal preference of their users, or tasks that the occupants have to perform within them at the given moment. While this may sound like a distant vision of the future, Virtual Reality equipment already allows us to study such scenarios today.


In this thesis, you will design a Virtual Reality building that participants will explore in Head-Mounted Displays. The VR system will monitor spatio-temporal data of the building user, and create the remaining (yet unvisited) parts of the building in response to this data, before the user gets there.

The specific context of this thesis can be adjusted based on your interests. One possibility would be to detect navigational confusion based on the occupant's walking trajectory, and - in response - provide a navigationally simplified space in the next room that the occupant is going to visit. Another possibility is to detect loss of attention in a virtual museum gallery, and - in response - provide the user with a more exciting space in the next room. The application should be evaluated in a simple user study.

Contact: Jakub Krukar and Chris Kray

SPARC: 3D SketchMaps: A VR tool for understanding spatial knowledge in the vertical

Sketch maps are traditionally drawn on a flat sheet of paper. However, with accessible, consumer-grade Virtual Reality devices it is now very easy to put on a VR headset and "draw in the air". This might be particularly useful in situations where you want to communicate how the environment looks in the vertical: for instance, when you want to draw the path of a drone flying above some landscape, describe to another person how to navigate a multi-level shopping mall, or explain to a friend how to get out of a metro station near your home.

The goal of this thesis is to understand the potential of 3D sketch maps in VR as a tool for communicating spatial knowledge.

The thesis can be approached from two perspectives:

From the technological perspective, it is possible to design new ways of drawing 3D sketch maps in Virtual Reality and compare them to traditional paper sketch maps. It is also interesting to explore how complex 3D sketch maps can be analysed systematically and what tools we can design to support the analysis of such (sometimes very complex and very messy) 3D drawings.

From the research perspective, it is necessary to conduct user experiments to understand whether people can make good use of this new possibility offered by VR, or whether they always find it more intuitive to draw on paper. It is also important to identify contexts in which 3D sketch maps are necessary: perhaps for most navigational scenarios a sheet of paper is sufficient? If so, what are the specific cases (aviation, complex buildings) where 3D sketch maps are necessary?

One straightforward way to study this is to ask participants to play a VR game where vertical navigation is important (virtual scuba diving, a submarine simulator, flying a Star Wars battleship, a drone or a passenger plane, downhill skiing) and to draw sketch maps based on this experience.

Kim, K. G., Krukar, J., Mavros, P., Zhao, J., Kiefer, P., Schwering, A., ... & Raubal, M. (2022). 3D Sketch Maps: Concept, Potential Benefits, and Challenges. In 15th International Conference on Spatial Information Theory (COSIT 2022) (Vol. 240, p. 14). Schloss Dagstuhl, Leibniz-Zentrum für Informatik.

Contact: Jakub Krukar

SPARC: Controlling the optic flow of Virtual Reality systems

When you take one step forward, the information you see in the periphery of your visual field slightly changes. The speed and content of this change is known as "optic flow". Optic flow is crucial for human navigation - you probably don't realise it, but your mind continuously processes these small changes in the visual field in order to calculate how your location in space has changed.
 
Virtual Reality opened up new opportunities for studying wayfinding. However, optic flow is (to various degrees) unnatural in *all* VR systems due to factors such as: limited field of view in the head-mounted display, flat computer displays, unintuitive speed of movement through the virtual world, lack of detail in the periphery of VR models. Since the optic flow is unnatural, navigation behaviour in VR systems might also be different from the real world.
 
In this thesis you will create a method to modify the optic flow of a state-of-the-art VR system with an integrated walking treadmill and motion tracking sensors. You will have to understand which parts of the visual field (e.g., left/right, centre, or upper/bottom parts) are more important for a realistic optic flow, create a way to restrict them, and test your method in a user study. The thesis will be conducted in collaboration with the Institute of Sport and Exercise Sciences.
 

Contact: Jakub Krukar

SPARC: Measuring spatial knowledge in VR compared to real-world environments

Virtual Reality is known to cause distortions in our perception of distance. Things inside VR seem closer than they would be in reality. This is an interesting problem because many wayfinding studies use VR as a substitute for real-life environments. Yet, if we know that distance estimations are consistently distorted in VR, can we trust VR results obtained with other methods for measuring spatial knowledge?

The goal of this thesis is to compare different methods of measuring spatial knowledge, in particular:

- sketch maps (qualitative and metric-based analyses)

- distance estimation tasks

- pointing tasks

- perspective taking tasks

across the Virtual Reality environment and corresponding real life environment.

The hypothesis is that some of these methods work equally well in VR and in real-life environments, while others should not be used in VR. For example, our analysis from the paper below suggests that people who explore the same building in VR and in the real world draw equally good sketch maps, even though their distance estimation is distorted in VR.

Li, H., Mavros, P., Krukar, J., & Hölscher, C. (2021). The effect of navigation method and visual display on distance perception in a large-scale virtual building. Cognitive Processing, 22(2), 239-259.

Contact: Jakub Krukar

SIL: Quality assurance of crowd-sourced sensor data

This thesis will investigate to what extent quality assurance of crowd-sourced sensor data in a sensor network can be automated. This is a new and highly relevant research field: large amounts of data allow the application of statistical or machine learning methods. Traditional methods are often not usable because the data must be available in real time. In addition, crowd-sourced data pose a particular challenge, since it cannot be assumed that all data were collected with correct or consistent measurement procedures. Finally, low-cost sensors themselves have measurement errors that differ strongly from those of professional sensors, or measurement stations are poorly mounted by citizen scientists. The goal is to investigate the factors influencing data quality and sensor accuracy, to develop methods for the automated identification of erroneous data and possible error sources, and to automatically decide whether data can be corrected (e.g., by recalibrating the sensors) or whether certain data must be excluded.

Contact: Thomas Bartoschek

SIL: SketchMapia – A Research Software to Assess Human Spatial Knowledge

Sketch mapping, i.e., freehand drawing of maps on a sheet of paper, is a popular and powerful method to explore a person's spatial knowledge. Although sketch maps convey rich spatial information, such as the spatial arrangement of places, buildings, streets, etc., the methods to analyse sketch maps are extremely simple. At the Spatial Intelligence Lab, we developed a software suite called SketchMapia that supports the systematic and comprehensive analysis of sketch maps in experiments. In this Master thesis, you will develop systematic test data for a sketch map analysis method and evaluate the SketchMapia analysis method with respect to its completeness, correctness and performance against other sketch map analysis methods.

Contact: Angela Schwering, Jakub Krukar

SITCOM: Location-based fact checking (topic cluster)

Recent years have seen a sharp increase in fabricated or outright false information being widely distributed (e.g., conspiracy theories, baseless claims, invented events). The broad availability of generative AI will most likely not only strengthen this trend but also make it much harder to recognise fabricated information.

 

The idea behind this topic is to investigate ways to use location information strategically to verify the truthfulness of shared information. A large percentage of all information has a direct link to a real-world location, for example, photographs (or generated images) taken at a particular place (or claimed to be taken at that location).

 

This location in turn can be used to check the truthfulness of the information or media in different ways. One approach could be to use trustworthy cartographic information to compare image contents or textual descriptions with information from a map. A second approach could look into checking facts on site, e.g. via a location-based service that directs fact-checkers to where they can assess the truthfulness of some information by following a particular protocol that ensures they can provide strong evidence (e.g. via AR-based comparative overlays). A third approach could investigate ways to use location-sensor data (GPS, compass, gyro, timestamps) to create a blockchain for photos taken at a particular location. This could provide a way for individuals to create trustworthy information and for others to verify it. Finally, using reliable historical location information (maps, photos, datasets) to assess the truthfulness of newly posted information is another approach worth investigating. For all of these approaches, looking into potential attacks (particularly those presented by prompt engineering for current LLMs or coordination between malicious people on site) would be important.

 

Either of these approaches can become a thesis topic. In principle, these topics could be done either as a Master thesis or a Bachelor thesis. However, they will require thorough background research and can be technically demanding. If you are interested in this general topic area or in one of the listed examples, please get in touch with Chris.

Contact: Christian Kray

SITCOM: Hybrid participation platform for urban planning

For urban projects to meet the needs and preferences of people, it is essential to ensure public participation. Ideally, this is done along the entire planning process and enables a broad range of groups to get involved.

 

Different approaches have been proposed in this context. They vary according to the degree of participation (from just being informed to actively taking decisions), to the temporal dimension (synchronous vs. asynchronous) as well as to the location (on site or remotely) and the medium (online, mobile or in person).

 

The goal of this thesis is to investigate, develop and evaluate hybrid options that combine multiple media and facilitate synchronous and asynchronous participation in urban planning projects.  This could, for example, take the form of a web-based system that can be accessed through a public display and allows for synchronous/asynchronous communication between citizens and planners.

 

There is flexibility regarding the exact combination of technologies and functionalities that is investigated as well as with respect to how urban planning projects are visualised (maps, 3D, Augmented Reality).

 

The thesis can be developed either as a Master or a Bachelor thesis. There is potential for evaluating the approach/system in the context of one of the events/activities organised by the StadtLabor Münster.

Contact: Christian Kray

SITCOM: AR evaluation toolkit

Augmented Reality applications are widely available now, and their number is expected to increase further in the future. They are, however, difficult to evaluate effectively as they strongly depend on interacting with their environment. For example, to assess the effectiveness and usability of a particular user interface or overlay, it is important to consider how it interacts with what people see around them. Are users able to connect the overlay to the corresponding real-world object? Can they interact with the UI elements while being in the actual environment?

 

The goal of this thesis is to develop and evaluate an evaluation toolkit for AR applications, which allows for systematic, repeatable and low-effort evaluation.  The approach to investigate here is to use a virtual environment (such as the Immersive Video Environment at ifgi) and to “trick” an app into believing it is located at the site shown by the virtual environment.  This ensures a controlled environment and thus allows for a systematic evaluation of AR applications.

 

Contact: Christian Kray

SITCOM: SITCOM topics (general information)

Research in SITCOM generally focusses on enabling all kinds of users to solve real-world problems using spatial information.  Have a look at the group's web page for more details and example projects.

If you are generally interested in this area or have an idea of a thesis topic that falls into that area, feel free to get in touch with one of the current members of SITCOM.

Contact: Christian Kray

SITCOM: Exploring eye-tracking and the Immersive Virtual Environment to investigate the perception of street walkability

Walkability is a notion referring to the extent to which streets foster the activity of walking, both as a mobility mode and as a leisure activity. The perception of walkability depends on the streets' physical characteristics, e.g. wide sidewalks, presence of greenery, etc., and on individuals' subjective perception thereof, e.g. the imparted sense of safety and beauty.

This thesis will build upon an ongoing project exploring a unique mixture of technologies and methods to give an account of how individuals subjectively perceive the walkability of streets.

In a previous experiment, participants were invited to a so-called Cave Automatic Virtual Environment, where images of streets were displayed. Wearing an eye-tracking device, they were asked to inspect the images and report their perception of the displayed streets. Data obtained in a follow-up interview with the participants complement a rich dataset to investigate how the physical features of streets, the individuals’ eye-movement when inspecting the images, and their demographic and behavioral traits relate to their reported walkability perception.

The student will be able to explore the diversity and richness of the available dataset when addressing an independent, project-related, research question.

 

References:

Liao B, Berg PEW, Wesemael PJV, Arentze TA (2022) Individuals’ perception of walkability: Results of a conjoint experiment using videos of virtual environments. Cities, 125, 103650.

Li Y, Yabuki N, Fukuda T (2022) Measuring visual walkability perception using panoramic street view images, virtual reality, and deep learning. Sustainable Cities and Society, vol 86, 104140.

 

Contact: Tessio Novack

SIL: Spatial Learning Analytics

Learning Analytics is a method to collect, measure, analyse and visualise data about learners and their context. It enables an understanding of the learning process and allows learning paths to be adapted based on the collected data. It also gives feedback to the learner and the teacher about the learning process. The Spatial Intelligence Lab has developed several learning platforms (GeoGami, Blockly for programming the senseBox) in which data revealing information about the learning process is collected. The thesis will investigate how real-time data on the learning process can be used to guide the learning process using learning analytics.

Contact: Thomas Bartoschek

SILFostering Navigational Map Reading Competence

The ability to orient oneself and read maps is essential to successfully navigate in unfamiliar environments. It is well known that the ability to orient oneself with maps varies from person to person. While there are numerous navigation systems to help us find our way, very few efforts have been made to use GI technologies to promote orientation and map reading skills and overcome the individual differences. GeoGami is a location-based game using digital maps to systematically teach navigational map reading competence. The thesis will investigate how to design trainings to promote people’s navigational map reading competence with digital maps. How to design trainings for specific sub-competencies of navigational map reading such as self-localization, map alignment or object recognition? How to design virtual environments to provide an optimal environment to systematically test navigational map reading competence?

Contact: Angela Schwering, Yousef Qamaz, Mitko Aleksandrov

SPARCTesting the new Taxonomy of Human Wayfinding Tasks

In their seminal paper, Wiener et al. (2009) defined a taxonomy of human wayfinding tasks. The taxonomy is based on the type of knowledge possessed by the navigator. However, it did not differentiate between any subcategories of the "Path Following" task. In other words, according to the taxonomy, there is no difference between (a) knowing your route without knowing anything about the wider surrounding environment, and (b) knowing your route AND knowing about the wider surrounding environment.

Schwering et al. (2017) argued that there are substantial differences between these two tasks and that they deserve to be distinguished in an updated taxonomy.

The goal of this thesis will be to test the hypothesis that following the same route, with the same knowledge about the route, is a cognitively different task depending on whether the navigator has, or does not have, survey knowledge about the broader environment.


Wiener, J. M., Büchner, S. J., & Hölscher, C. (2009). Taxonomy of human wayfinding tasks: A knowledge-based approach. Spatial Cognition & Computation, 9(2), 152–165.

Schwering, A., Krukar, J., Li, R., Anacta, V. J., & Fuest, S. (2017). Wayfinding Through Orientation. Spatial Cognition & Computation, 17(4), 273–303. doi:10.1080/13875868.2017.1322597


Contact: Jakub Krukar

SPARCGeneralisation in Sketch Maps

When people draw sketch maps, they generalise information compared to the ground-truth information they perceived in the world. For example, many buildings belonging to a university campus may be drawn as a single polygon labelled "campus".

This is a challenge for analysing sketch maps because this information is not wrong, yet a computer system for automated analysis would interpret it as such.

In the paper linked below we presented a classification of generalisation types in sketch maps. We also have a working software prototype for analysing generalisation in sketch maps.

In this thesis you will test the impact of one (chosen) variable on the level of generalisation. Sample research questions:

- If we ask people to draw an area of a different size, do they start to generalise more?

- Do people generalise important streets more/less compared to less important ones (e.g., accounting for the "integration" metric from Space Syntax)?

- If we give people less time to draw, do they generalise or omit information?

- If we ask people to draw the sketch map with a different task in mind (e.g., walking through a campus vs. walking near a campus vs. walking away from a campus), are the resulting generalisations different?

Manivannan, C., Krukar, J., & Schwering, A. (2022). Spatial generalization in sketch maps: A systematic classification. Journal of Environmental Psychology, 101851. https://doi.org/10.1016/j.jenvp.2022.101851


Contact: Jakub Krukar and Angela Schwering

SITCOMAssisting map comparison with an annotation tool

Maps are predominant representational artifacts in the Geosciences for communicating research results and describing phenomena. Frequently we have to compare maps for a number of reasons: change detection, accuracy assessment, or replicability and reproducibility evaluation. Comparing maps is commonly done through visual side-by-side comparison, which can be error-prone and cognitively exhausting for the reader. The aim of this thesis is to support this comparison and to keep track of the observed differences by highlighting them manually. For this purpose, a prototype for annotating map differences will be developed and evaluated. The student has to investigate which annotation form is appropriate for each kind of difference and to use an appropriate structured vocabulary to describe them.
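One possible data model for such annotations is sketched below: each observed difference is stored as a GeoJSON feature whose properties follow a small controlled vocabulary. The property names and vocabulary terms are assumptions for illustration, not a prescribed design.

```python
# Sketch of one possible annotation data model; property names and the
# controlled vocabulary are illustrative assumptions, not a prescribed design.
import json

difference_annotation = {
    "type": "Feature",
    "geometry": {   # the region in which the two maps disagree
        "type": "Polygon",
        "coordinates": [[[7.595, 51.969], [7.596, 51.969],
                         [7.596, 51.970], [7.595, 51.970], [7.595, 51.969]]],
    },
    "properties": {
        # Hypothetical controlled vocabulary for the kind of difference,
        # e.g. classification | geometry | symbology | omission.
        "difference_type": "classification",
        "description": "Mapped as forest in map A but as grassland in map B",
        "maps_compared": ["map_A.png", "map_B.png"],
        "annotator": "student-01",
    },
}

with open("map_differences.geojson", "w") as f:
    json.dump({"type": "FeatureCollection",
               "features": [difference_annotation]}, f, indent=2)
```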

Suggested reads:
Oren, E., Möller, K., Scerri, S., Handschuh, S., & Sintek, M. (2006). What are semantic annotations? Technical report, DERI Galway, 9, 62.

Diaz, L., Reunanen, M., Acuña, B., Timonen, A. ImaNote: A Web-based multi-user image map viewing and annotation tool. ACM J. Comput. Cult. Herit. 3, 4, Article 13 (2011). http://doi.acm.org/10.1145/1957825.1957826

Contact: Eftychia Koukouraki

SPARCEye-tracking in the Virtual and Real World: What are participants (not) seeing?

Eye-tracking is a common method for studying the usability of buildings and spatial behaviour in buildings. Many such studies are conducted in Virtual Reality: either on desktop computers or in head-mounted displays. However, this is a problem because the field of view and head movement in such set-ups are greatly restricted. This might affect what people do or do not see when navigating a building, especially if important information is visible in the periphery of their visual field.

In this thesis you will try to answer the question: What visual information in the periphery do participants of VR experiments miss when navigating a building?

You will compare two groups of people: one group navigating the real building, and the other group navigating the virtual replica of GEO1. You will analyse the eye-tracking data and compare it across the two conditions. The virtual replica of GEO1 is provided.
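One way the comparison could be set up is sketched below: per participant, the share of peripheral fixations is computed and then compared between the real and the virtual condition. File names, column names and the 30-degree threshold are hypothetical placeholders.

```python
# Illustrative sketch; file names, column names and the 30-degree threshold
# are hypothetical placeholders, not the actual export format of the study.
import pandas as pd
from scipy.stats import mannwhitneyu

real = pd.read_csv("fixations_real_geo1.csv")   # one row per fixation
vr = pd.read_csv("fixations_vr_geo1.csv")

def peripheral_share(df):
    """Per participant: share of fixations beyond 30 degrees from straight ahead."""
    return df.groupby("participant")["gaze_angle_deg"].apply(
        lambda a: (a.abs() > 30).mean()
    )

real_share, vr_share = peripheral_share(real), peripheral_share(vr)

# Non-parametric comparison between the two independent groups.
stat, p = mannwhitneyu(real_share, vr_share)
print(f"peripheral fixations: real={real_share.mean():.2f}, "
      f"VR={vr_share.mean():.2f}, U={stat:.1f}, p={p:.3f}")
```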

Contact: Jakub Krukar

SPARCSketch Maps as a tool for learning new environments

For decades, sketch maps have been used as a tool for measuring spatial knowledge - i.e., for estimating how well participants know and understand an area. However, evidence from psychological memory studies demonstrates that drawing something can also be a good strategy to memorise a set of objects. For instance, if you need to memorise the layout of a room, drawing the room as you see it is a better memorisation strategy than repeating the names of the objects verbally or in your head. This thesis will test whether drawing a sketch map is a good memorisation strategy for spatial environments and how this approach can be implemented in a gamified app. The problem is relevant for situations in which people must learn new spatial environments, e.g. to become taxi/delivery drivers, or when they move to a new city.
The thesis can be completed with focus on one of two aspects:
**Computational focus:** You will design a teaching app that (a) records the user's trajectory together with a list of landmarks that were visible along the route, and (b) after a delay, asks users to draw the area that they have travelled. Here the key problem may be to select routes and landmarks that the user should be asked to draw, based on the recorded trajectories (a possible recording format is sketched below).
**Evaluation focus:** You will design and conduct an experiment to evaluate the following research question: does drawing a sketch map help people memorise the environment better, compared to alternative strategies? This does not require creating an app, and can be conducted as an in-situ experiment or inside our Virtual Reality lab.
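For the computational focus, one conceivable recording format is sketched below. The class and field names, and the idea of matching visible landmarks against an OSM extract, are assumptions for illustration only.

```python
# Sketch of a conceivable recording format; class and field names (and the idea
# of matching visible landmarks against an OSM extract) are assumptions only.
from dataclasses import dataclass, field

@dataclass
class TrackPoint:
    timestamp: float                 # seconds since the start of the walk
    lat: float
    lon: float
    visible_landmarks: list = field(default_factory=list)   # OSM ids or names

@dataclass
class LearningWalk:
    user_id: str
    points: list = field(default_factory=list)

    def landmarks_to_draw(self, min_sightings: int = 3):
        """Landmarks seen often enough along the route to ask the user to draw later."""
        counts = {}
        for p in self.points:
            for lm in p.visible_landmarks:
                counts[lm] = counts.get(lm, 0) + 1
        return {lm for lm, n in counts.items() if n >= min_sightings}
```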

Contact: Jakub Krukar

SPARCIn-and-Out: What happens when we enter and exit buildings?

Navigation is usually studied either completely outdoors or completely indoors. But our real-life wayfinding is different - we continuously enter and exit buildings without feeling that this is now a completely different experience.

The goal of this thesis is to understand what happens when people exit/enter buildings - how their navigation changes, and how technology can embrace the difference between these two contexts.

This can be approached from two perspectives.

From the technological perspective it is possible to design a prototype navigation system that (assuming indoor localisation is possible) works indoors and outdoors, and changes its behaviour depending on this context.
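A minimal sketch of such context-dependent behaviour follows; the data model and the instruction wording are purely illustrative assumptions.

```python
# Minimal sketch of context-dependent behaviour; the data model and the
# instruction wording are purely illustrative assumptions.
from dataclasses import dataclass
from typing import Optional

@dataclass
class Position:
    lat: float
    lon: float
    indoors: bool                 # derived from an indoor positioning source
    floor: Optional[int] = None   # only meaningful indoors
    room: Optional[str] = None

def next_instruction(pos: Position) -> str:
    """Adapt the instruction style to the indoor/outdoor context."""
    if pos.indoors:
        # Indoors: refer to floors, rooms and doors instead of street names.
        return f"Take the stairs to floor {pos.floor} and head towards {pos.room}."
    # Outdoors: conventional turn-by-turn phrasing based on the street network.
    return "Follow the street for 120 m, then turn left."

print(next_instruction(Position(51.969, 7.596, indoors=True, floor=2, room="room 245")))
```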

From the research perspective it can be investigated how human navigation changes when people enter or exit buildings. This can be studied, for instance, with mobile eye-tracking, by asking experiment participants to move indoors/outdoors and analysing how eye-tracking measures vary between these two contexts.

Krukar and van Eek (2019). The Impact of Indoor/Outdoor Context on Smartphone Interaction During Walking.

Contact: Jakub Krukar

SILTinyAIoT: Resource-efficient AI Models for IoT Sensors

Modern IoT applications often rely on sensors that run on microcontroller units and communicate via network protocols such as LoRaWAN or Bluetooth Low Energy. To operate autonomously for extended periods of time, application resource requirements must be minimized. This master's thesis investigates the development of resource-efficient IoT applications through the utilization of AI models. These models aim to save energy by reducing the computational load, camera resolution or data transmission, while maintaining the ability to perform specific tasks. You will develop, implement and compare different resource-efficient AI models with sensors such as cameras, distance sensors and vibration sensors, and test your implementation in different application scenarios.
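One common building block for such models is post-training quantisation; the sketch below shows how a small Keras model could be converted to a compact TensorFlow Lite model. The model architecture and file names are placeholders, not the project's actual models.

```python
# Sketch of post-training quantisation with TensorFlow Lite; the model
# architecture and file names are placeholders, not the project's actual models.
import tensorflow as tf

# Any small Keras model trained for the sensing task, e.g. a tiny CNN
# classifying low-resolution camera frames.
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(32, 32, 1)),
    tf.keras.layers.Conv2D(8, 3, activation="relu"),
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(2, activation="softmax"),
])

# Convert with default optimisations (weight quantisation), which typically
# shrinks the model and reduces inference cost on microcontrollers.
converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
tflite_model = converter.convert()

with open("model.tflite", "wb") as f:
    f.write(tflite_model)
print(f"quantised model size: {len(tflite_model) / 1024:.1f} KiB")
```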

Contact: Benjamin Karic, Thomas Bartoschek

SPARCThe "wow" effect of buildings: Computational modelling of perceived spaciousness in VR

Perceived spaciousness is the subjective feeling of how large a given space is. Just think about the difference between seminar room 242 and the atrium in the centre of the GEO1 building. Does the atrium feel 5x larger? 10x larger? 20x larger? Or imagine walking through a small entrance into a large temple. How much volume does it need to have to create the feeling of "wow, this is huge!"?

Research has shown that people have a very distorted way of judging this and that the volume of space (the amount of "empty air" surrounding you) is not the only important factor. It matters what shape this empty space has (is it taller or longer?), where you are currently standing (above or below the open space? close to the wall or in its centre?), and where you entered it from (entering a large room from a small one vs. a small one from a large one).

The goal of this thesis is to understand what affects our perceived spaciousness. The thesis can be approached from two perspectives.

From the technological perspective it is necessary to create new ways of studying perceived spaciousness in Virtual Reality. Researchers have used various ways to ask people "how spacious does this space feel right now?" - they asked them verbally to rank it from 1 to 7, or gave them a circular dial to rotate whenever the space felt more spacious. VR opens the chance to create better ways to gather this kind of data.

From the research perspective VR offers us the chance to design architectural experiments impossible in real life. For instance, we can move a person from a very small space to an extremely spacious one in a matter of seconds, and test their reactions. We can change their pathway (e.g., reverse it so that they move from a large space to a small one) and see if the reaction is symmetric. We can also systematically modify the Virtual Reality rooms by changing their shape, size, or lighting, and test how these changes affect perceived spaciousness.

Krukar, J., Manivannan, C., Bhatt, M., & Schultz, C. (2021). Embodied 3D isovists: A method to model the visual perception of space. Environment and Planning B: Urban Analytics and City Science, 48(8), 2307–2325.


Contact: Jakub Krukar

CVMLSAdvanced Topics of Deep Learning

Hardly any other field of computer science is developing as rapidly as machine learning. With over 100 publications a day, it is becoming increasingly difficult to maintain an overview of the variety of existing and new deep learning concepts and methodologies such as self-supervised learning, transformers and NeRF models. Given the computationally heavy data analysis in geoinformatics, which often requires processing huge datasets of spatio-temporal data, a variety of machine learning paradigms are of particular importance. The aim of this project is therefore to investigate a particular state-of-the-art deep learning algorithm with a particular focus on (but not limited to) its applicability in geoinformatics.

Contact: Benjamin Risse

SPARCReplicability of wayfinding research

"Replication" refers to the process of re-creating an experiment published by other researchers in an effort of obtaining results pointing to the same conclusion. A "replication crisis" showed that many published research is not replicable. We can distinguish two types of replication:

- an "exact replication" is the attempt of recreating every detail of the original experiment

- a "conceptual replication" is the attempt of creating a similar experiment, with similar hypotheses, but perhaps with a different stimuli, instructions, or groups of participants.

This thesis focuses on a "conceptual replication" of navigation research.

Navigation research is usually performed in a very specific spatial context (such as the city in which the paper's authors are based or the virtual environment that they have created). This introduces a challenge to the generalizability and replicability of navigation research because we do not know whether classic research findings would be equally applicable in different spatial contexts (e.g., a different city).

This thesis focuses on replicating an existing wayfinding paper (to be chosen by the student) in Münster, or in a virtual environment available at ifgi.

The key challenge is finding a way to make the new spatial context (of Münster) comparable to that of the original paper.

Thesis co-supervised by Daniel Nüst (with technical support w.r.t. replicability).


Examples of papers that can be replicated:

https://doi.org/10.1080/17470218.2014.963131

https://doi.org/10.1016/j.cognition.2011.06.005


Contact: Jakub Krukar

CVMLSDetecting the Invisible - Detection and Tracking of Tiny Insects in Complex Wildlife Environments

In recent years, the number of animals in general and insects in particular has decreased dramatically. In contrast to larger vertebrates, however, there is still a lack of techniques for non-invasive insect monitoring. In this project, novel visual and temporal data will be used to address the following objectives: (1) develop a state-of-the-art computer vision and machine learning algorithm on complex multimodal wildlife recordings; and (2) evaluate the algorithm based on a challenging wildlife dataset. In particular, current algorithms for object recognition will be used, which are almost universally applicable in geoinformatics and are therefore not limited to the terrestrial remote sensing data presented in this thesis.
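As a baseline illustration only (not the thesis algorithm), the sketch below runs a pretrained generic object detector on a single frame. The file name is a placeholder, and tiny insects will most likely require a dedicated model plus temporal cues on top of such a baseline.

```python
# Baseline sketch only: runs an off-the-shelf detector on a single frame.
# The file name is a placeholder; tiny insects will likely require a dedicated
# model and temporal information on top of such a generic baseline.
import torch
import torchvision
from torchvision.io import read_image
from torchvision.transforms.functional import convert_image_dtype

model = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT")
model.eval()

frame = convert_image_dtype(read_image("wildlife_frame.jpg"), torch.float)

with torch.no_grad():
    detections = model([frame])[0]

# Keep only confident detections for inspection.
keep = detections["scores"] > 0.5
print(detections["boxes"][keep], detections["labels"][keep])
```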

Contact: Benjamin Risse

SITCOMPrivacy preserving location-based services (adaptive algorithms, infrastructures, visualisations)

In order to benefit from location-based services (LBS) such as navigation support, local recommender systems or delivery services, users need to share their location with the service provider. This can have negative implications for their privacy as the service provider might learn a lot about users, e.g. movement patterns, places they frequent and inferred knowledge such as health conditions.

At the same time, service provision would also be possible if users did not share their precise location: a weather forecast app, for example, might work well enough with very coarse-grained location information. For some LBS, however, having access to lower-quality location information might be problematic. For example, providing turn-by-turn instructions might be impossible if users only share coarse-grained location information.
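As a very simple illustration of what "coarse-grained" can mean, the sketch below snaps a position to a coarser grid by rounding; real location-privacy mechanisms (e.g. geo-indistinguishability) are considerably more sophisticated, and the example coordinates are arbitrary.

```python
# Minimal sketch of one very simple coarsening strategy (rounding to a grid);
# real location-privacy mechanisms, e.g. geo-indistinguishability, are far more
# sophisticated, and the example coordinates are arbitrary.
def coarsen(lat, lon, decimals=2):
    """Round a WGS84 position; two decimals correspond to a roughly 1 km grid cell."""
    return round(lat, decimals), round(lon, decimals)

precise = (51.96946, 7.59594)
coarse = coarsen(*precise)   # -> (51.97, 7.6)

# A weather app would still work with the coarse fix, whereas turn-by-turn
# navigation would not - exactly the trade-off this topic explores.
print(precise, "->", coarse)
```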

The goal of this topic is thus to develop and evaluate approaches for different types of LBS to adapt to different levels of quality of location information. The topic can be tackled from different perspectives and can therefore serve as a starting point for several different thesis projects:

  • on the algorithmic level, an analysis of common algorithms used in LBS to provide certain services (such as routing) can be carried out to develop and evaluate new/improved algorithms that can better cope with different levels of location quality
  • on the infrastructure level, different frameworks and libraries for the development of LBS can be analysed regarding how well they support coping with different levels of location quality; this can then inform the design and evaluation of an improved solution
  • on the visualisation/user interface level, an analysis of existing solutions to convey instructions/information to users in LBS can be carried out with respect to how well they work when provided with location information of lower quality; this can then inform the design and evaluation of improved, adaptive visualisations and user interfaces

Students interested in this topic area can have a look at the SIMPORT project and the publications listed below:

 

Contact: Christian Kray