How we see liquids

Date

2018

Abstract

We have a remarkable understanding of the objects and materials we encounter in everyday life. This helps us quickly identify what is predator and what is prey, what is edible and what is poisonous. Despite large image differences, our visual system extracts material properties very consistently. Liquids are a category of materials that appear particularly challenging due to their volatile nature, yet we can estimate complex liquid properties such as runniness or sliminess. How are we able to do this? How is it possible that we can perceive that honey is thicker than milk, or that water in a glass is the same material as water spraying in a fountain? Four studies were conducted to achieve a better understanding of the image information we use to estimate liquid properties.

In study 1 we look specifically at the contributions of optical cues when estimating a range of liquid properties. Using the same liquid shapes but with different optical appearances, we studied which perceived properties (e.g., runniness) are influenced by optical or mechanical cues.

We encounter liquids in many different states and contexts. In study 2 we look specifically at the constancy of viscosity perception despite radical changes in shape. How consistently do we actually perceive liquids? We simulated a range of different scenes to learn how sensitive observers are to shape changes when estimating viscosity.

In study 3 we look into the specific shape features underlying visual inferences about liquids. By comparing observers' viscosity ratings with perceived shape features, we show how the brain exploits 3D shape and motion cues to infer viscosity across contexts despite dramatic image changes.

In study 4 we estimate the perceived viscosity of an image with neural networks. Machine learning is a powerful tool and has facilitated major breakthroughs in difficult visual tasks. Here we trained a neural network specifically designed to mimic human performance in estimating viscosity.

Our results show that the perception of liquids is mainly driven by optical, shape, and motion cues. Observers show great perceptual constancy in rating viscosity across a wide range of scenes. Mid-level features (e.g., spread, pulsing) are an important and reliable source for estimating viscosity consistently across contexts.
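The idea of fitting a network to human viscosity ratings can be illustrated in miniature. The sketch below is purely hypothetical and is not the architecture or data used in the thesis: instead of images it uses synthetic feature vectors (stand-ins for image features), and a tiny one-hidden-layer regressor is trained by gradient descent to reproduce synthetic "ratings".

```python
import numpy as np

# Hypothetical illustration only: synthetic features and ratings, not the
# thesis's stimuli or network. 200 "stimuli", 16 features each; the target
# rating is generated from a hidden rule plus a squashing nonlinearity.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 16))
true_w = rng.normal(size=16)
y = 1 / (1 + np.exp(-X @ true_w / 4))  # ratings in (0, 1)

# One hidden tanh layer, trained with plain gradient descent on squared error.
W1 = rng.normal(scale=0.1, size=(16, 8))
W2 = rng.normal(scale=0.1, size=(8, 1))

def forward(X):
    h = np.tanh(X @ W1)        # hidden activations
    return h, h @ W2           # predicted rating per stimulus

lr = 0.05
for _ in range(2000):
    h, pred = forward(X)
    err = pred - y[:, None]                  # prediction error
    grad_W2 = h.T @ err / len(X)             # backprop through output layer
    grad_h = err @ W2.T * (1 - h ** 2)       # backprop through tanh
    grad_W1 = X.T @ grad_h / len(X)
    W2 -= lr * grad_W2
    W1 -= lr * grad_W1

_, pred = forward(X)
corr = np.corrcoef(pred.ravel(), y)[0, 1]
print(f"correlation between predicted and target ratings: {corr:.2f}")
```

After training, the network's predictions correlate with the synthetic ratings; the thesis's actual model analogously learns to map rendered liquid images to human viscosity judgments.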
