Where are you? Self- and body part localization using virtual reality setups

Citable link (URI): http://hdl.handle.net/10900/93621
http://nbn-resolving.de/urn:nbn:de:bsz:21-dspace-936212
http://dx.doi.org/10.15496/publikation-35007
Document type: Book
Date of publication: 2019
Language: English
Faculty: 7 Mathematisch-Naturwissenschaftliche Fakultät
Department: Psychology
DDC classification: 004 - Computer science
100 - Philosophy
150 - Psychology
500 - Natural sciences
610 - Medicine, health
Keywords: Perspective, pointing, self, self-consciousness, virtual reality
Free keywords:
Bodily self
bodily self-consciousness
body part locations
body template
first-person perspective
large-screen immersive display
multisensory cues
perspective
pointing
self
self-avatar
self-consciousness
self-location
third-person perspective
viewpoint
virtual reality
VR headset
ISBN: 978-3-8325-4987-7
License: http://tobias-lib.uni-tuebingen.de/doku/lic_mit_pod.php?la=de http://tobias-lib.uni-tuebingen.de/doku/lic_mit_pod.php?la=en

Summary:

Secondary publication of the publisher's edition issued by Logos Verlag GmbH, Berlin, in the series MPI Series in Biological Cybernetics, No. 54, September 2019, edited by Prof. Dr. Heinrich H. Bülthoff. Available in the Tübingen University Library under shelf marks 59 A 6570:1 and 59 A 6570:2.

Abstract:

This thesis investigates where in their bodies people locate themselves, as well as how accurately people can indicate the locations of several of their body parts. It is not well established whether there is one region of the body, or several, that people associate themselves with most. To answer this question, three experimental studies were performed using several virtual reality (VR) setups in which participants pointed directly at themselves with a virtual pointing stick. In the first two studies, participants were also asked, outside of VR, to indicate their self-location on pictures of simple body outlines. In the last two studies, participants were additionally asked, in VR, to point to several of their body parts. Based on the body part locations pointed out by the participants in VR, the indicated self-locations could subsequently be interpreted in terms of regions of the participants' perceived bodies, in addition to regions of their physical bodies (i.e., based on body part locations measured on their bodies).

In previous studies of self-location in the body, self-localization has mostly been performed using outlines of bodies not co-located with the participants' own bodies. Results from these studies have mainly shown self-localization in the (upper) face region, sometimes combined with the upper torso region. Studies of self-location in the body using both explicit and implicit behavioral measures have mainly shown self-localization in both the upper face and the upper torso regions. Across these previous studies, the findings show a mixed picture, which motivated this further study of self-location.

For this thesis, a self-directed, first-person perspective (1PP) pointing paradigm was developed and implemented in several VR setups across the different experiments. This paradigm was used for self-localization as well as for body part localization. On each trial, the participant was instructed to rotate a pointer with a controller such that it was pointing "directly at you", or at one of several of their body parts. The VR setups were used in the present experiments mainly because they provide strong experimental control and the possibility of manipulating sensory cues in ways not otherwise possible (the viewpoint in study three). Further, they make comparisons possible between results from inside and outside of VR (all studies), as well as between different VR setups (study two). In addition to the VR tasks, a non-self-directed, third-person perspective (3PP) body template self-localization pointing task was used outside of VR. There, the participant was instructed to point "directly at you" with a pen on an A4 print of a body outline, under the assumption that this was a picture of themselves.

In the first study, participants performed the VR self-localization task using the Oculus Rift DK2, together with the template self-localization task. VR self-localization showed a very strong preference for the upper face. This was not in line with previous behavioral studies, which showed self-localization mainly in both the upper face and the upper torso. Template self-localization was mostly in the upper torso, followed by the (upper) face. This was not in line with previous studies using body outlines, which showed self-localization mostly in the (upper) face. The present template results are more in line with the previous behavioral findings (from studies outside of VR), whereas the present VR behavioral findings are more in line with the previous body outline findings.
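The geometry behind the 1PP pointing measure can be sketched briefly: the pointer defines a ray, and the indicated location is the height at which that ray crosses the participant's body midline. The following is an illustrative reconstruction, not the thesis' implementation; the function name, the NumPy formulation, and the approximation of the body midline as a vertical line are all assumptions.

    import numpy as np

    def indicated_height(pointer_pos, pointer_dir, midline_xz):
        """Height at which a pointer ray crosses the vertical body midline.

        pointer_pos : (3,) pointer pivot in world coordinates (x, y, z)
        pointer_dir : (3,) direction the stick points (need not be unit length)
        midline_xz  : (2,) horizontal (x, z) position of the body midline
        """
        # Parameterize the ray p(t) = pointer_pos + t * pointer_dir and find
        # the t at which its horizontal distance to the midline is minimal
        # (closest approach in the ground plane).
        d_xz = np.array([pointer_dir[0], pointer_dir[2]], dtype=float)
        offset = np.asarray(midline_xz, dtype=float) - np.array([pointer_pos[0], pointer_pos[2]])
        denom = float(np.dot(d_xz, d_xz))
        if denom == 0.0:
            return None  # pointing straight up or down: never nears the body
        t = float(np.dot(offset, d_xz)) / denom
        if t <= 0.0:
            return None  # pointing away from the body
        return float(pointer_pos[1] + t * pointer_dir[1])

    # Example: pointer held 1.2 m in front of the body midline at hip height,
    # tilted upwards; the ray crosses the midline at about 1.6 m, i.e. the face.
    print(indicated_height([0.0, 1.0, 1.2], [0.0, 0.45, -0.89], [0.0, 0.0]))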
It was concluded that wearing a VR headset might make people more head-focused. To investigate whether the VR findings from study one were specifically due to the use of a headset (which blocks visual access to the body), or more generally to VR, in study two the VR pointing paradigm was implemented both in the Oculus Rift and in a large-screen immersive display (LSID), where no headset is worn. Further, VR body part localization was added to the VR self-localization. Systematic distortions in the perception and representation of one's own body have been found both in specific clinical populations and in healthy ones; this provided additional motivation for including body part localization in studies two and three. In study two, VR self-localization in terms of the physical body was mostly to all regions of the body from the upper torso upwards, as well as above the head. Further, participants were able to point reasonably accurately to most of their body parts in the LSID, but much less so in the VR headset. Inaccuracies were particularly large for body parts near the borders of the body. After rescaling the self-localization pointing to the perceived body, pointing was mainly to the (upper, followed by lower) face, followed by the (upper, followed by lower) torso. This resembled the results from the previous behavioral studies much more than the interpretation in terms of the physical body did, and the differences between the VR setups disappeared. The template task largely replicated study one, with pointing mostly to the upper torso, followed by the regions of the face. It was concluded that people mostly localize themselves in the (upper) face and the (upper) torso, and moreover that, when using VR setups, it is important to take the occurring inaccuracies in body part localization into account when interpreting where people locate themselves.

In study three, an individually scaled and gender-matched self-avatar, animated by the tracked movements of the participant and seen from 1PP (co-located) and from a 3PP mirror view, was implemented in the HTC Vive to provide rich feedback about the participant's body in a VR headset. Two groups of participants performed the VR self- and body part localization tasks before and after an avatar adaptation phase in which the self-avatar was experienced from either (normal) eye-height or from chest-height. The self-avatar as such did not reduce inaccuracies in body part localization. Changing the viewpoint did, however, alter body part localization: pointing to body parts was shifted upwards overall (more so for the lower body parts) from the pre- to the post-test for the chest-height group, but not for the eye-height group. Neither the self-avatar as such nor changing the viewpoint changed self-location, however. No evidence was found for experienced self-location being pulled towards the viewpoint location. On the contrary, a non-significant trend towards a higher self-location was present for the chest-height group, which might be due to body parts being perceived as higher than normal. It was concluded that experienced body part locations might be more plastic (influenced by viewpoint) than experienced self-location. The differences between the self-localization results from the VR and template tasks are discussed; they might be due to the 3PP pointing in the template task resembling pointing to someone else, or even to an external object, rather than to oneself.
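The rescaling step from study two can be illustrated with a small worked example: an indicated self-location height is re-read against the heights at which the participant pointed out their own body parts (the perceived body), rather than against heights measured on the physical body. The landmark values and the piecewise-linear interpolation below are illustrative assumptions only, not the procedure used in the thesis.

    import numpy as np

    # Heights in meters: measured on the physical body versus pointed out in
    # VR. The perceived borders of the body (feet, top of head) are deliberately
    # mislocalized here, mirroring the large inaccuracies found near the borders.
    landmarks = ["feet", "hips", "shoulders", "eyes", "top of head"]
    physical  = np.array([0.00, 0.95, 1.45, 1.62, 1.72])
    perceived = np.array([0.05, 1.00, 1.40, 1.70, 1.85])

    def to_perceived_body(self_loc_height):
        """Re-express a pointed self-location height on the perceived body.

        Interpolates between the perceived landmark heights and returns the
        equivalent height on the physical body, so the pointing can be
        assigned to a body region.
        """
        return float(np.interp(self_loc_height, perceived, physical))

    # A pointing at 1.75 m lies above the physical head (1.72 m), but between
    # the perceived eyes and perceived head top, so it reads as an upper-face
    # location (about 1.65 m on the physical-body scale):
    print(to_perceived_body(1.75))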
Taken together, this thesis suggests a differential involvement of multisensory information processing in our experienced specific self-location and in our ability to locate our body parts. Self-localization seems to be less flexible, possibly because it is strongly grounded in the 'bodily senses', whereas body part localization appears more adaptable to the manipulation of sensory stimuli, at least in the visual modality.
