Neural Representations of Self-Motion During Natural Scenes in the Human Brain

Citable link (URI): http://hdl.handle.net/10900/74234
http://nbn-resolving.de/urn:nbn:de:bsz:21-dspace-742344
http://dx.doi.org/10.15496/publikation-15639
Document type: Dissertation
Date of publication: 2017-02
Language: English
Faculty: 7 Mathematisch-Naturwissenschaftliche Fakultät
Department: Biology
Advisor: Bartels, Andreas (Dr.)
Date of oral examination: 2016-12-14
DDC classification: 500 - Natural sciences
Keywords: Functional magnetic resonance imaging, Neuroscience
License: http://tobias-lib.uni-tuebingen.de/doku/lic_mit_pod.php?la=de http://tobias-lib.uni-tuebingen.de/doku/lic_mit_pod.php?la=en
Order a printed copy: Print-on-Demand

Abstract:

Navigating through the environment is one of the most important everyday tasks of the visual system. It relies on processing at least two visual cues: visual motion and scene content. Our sense of motion depends heavily on understanding and separating the visual cues that result from object motion and from self-motion. Processing and understanding visual scenes is an equally ubiquitous task in our everyday environment. Together, motion and scene processing allow us to perform navigation tasks such as wayfinding and spatial updating. In terms of neural processing, both the regions involved in motion processing and the regions involved in scene processing have been studied in great detail. However, how motion regions are influenced by scene content, and how scene regions are involved in motion processing, has barely been addressed. To understand how self-motion and scene processing interact in the human brain, I conducted a series of studies as part of this thesis.

The first study investigates the motion responses of scene regions using planar horizontal motion and visual scenes. The second study investigates whether eye-centered or world-centered reference frames are used during visual motion processing in scene regions, using objective ‘real’ motion and retinal motion during pursuit eye movements together with natural scene stimuli. The third study investigates the effect of natural scene content on objective and retinal motion processing in motion regions. The last study investigates how motion speed is represented in motion regions during objective and retinal motion. Since many visual areas are optimized for natural visual stimuli, speed responses were tested on Fourier scrambles of natural scene images, which provide natural scene statistics as visual input.

I found evidence that the scene-processing regions parahippocampal place area (PPA) and occipital place area (OPA) are motion responsive, while retrosplenial cortex (RSC) is not. In addition, PPA’s motion responses are modulated by scene content. With respect to reference frames, I found that PPA prefers a world-centered reference frame while viewing dynamic scenes. The results from motion regions (MT/V5+, V3A, V6, and the cingulate sulcus visual area (CSv)) revealed that the motion responses of all of them are enhanced during exposure to scenes compared to Fourier scrambles, whereas only V3A also responded to static scenes. The last study showed that all motion-responsive regions tested (MT/V5, MST, V3A, V6, and CSv) are modulated by motion speed, but only V3A shows distinctly stronger speed tuning for objective than for retinal motion. These results show that using natural scene stimuli is important when investigating self-motion responses in the human brain: many scene regions are modulated by motion, and one of them (PPA) even differentiates objective motion from retinal motion. Conversely, many motion regions are modulated by scene content, and one of them (V3A) even responds to static scenes. Moreover, the objective motion preference of V3A becomes even stronger at higher speeds. These results question a strict separation of ‘where’ and ‘what’ pathways and show that the scene region PPA and the motion region V3A have similar preferences for objective motion and scenes.
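The Fourier scrambles mentioned in the abstract are typically produced by phase scrambling: the amplitude spectrum of a natural image, which carries its low-level natural scene statistics, is retained, while the phase spectrum, which carries the recognizable scene content, is randomized. The thesis abstract does not specify the exact scrambling procedure used, so the following is only a minimal Python/NumPy sketch of the general technique; the function name fourier_scramble, its parameters, and the file name in the usage comment are illustrative assumptions.

```python
import numpy as np

def fourier_scramble(image, seed=None):
    """Phase-scramble a grayscale image (2D array): keep its Fourier
    amplitude spectrum but randomize the phases, destroying recognizable
    scene content while preserving low-level natural scene statistics.
    NOTE: illustrative sketch, not the procedure used in the thesis."""
    rng = np.random.default_rng(seed)
    spectrum = np.fft.fft2(image)
    # The phases of a real-valued noise image are conjugate-symmetric, so
    # adding them keeps the scrambled spectrum conjugate-symmetric and the
    # inverse transform real (up to numerical error).
    noise_phase = np.angle(np.fft.fft2(rng.standard_normal(image.shape)))
    scrambled = np.abs(spectrum) * np.exp(1j * (np.angle(spectrum) + noise_phase))
    return np.real(np.fft.ifft2(scrambled))

# Hypothetical usage:
# from PIL import Image
# img = np.asarray(Image.open("scene.png").convert("L"), dtype=float)
# scrambled = fourier_scramble(img, seed=0)
```

Taking the random phases from a real-valued noise image, rather than drawing them independently per frequency, is a common way to guarantee that the scrambled image itself remains real-valued.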
