CRCNS: Joint coding of shape and texture in the primate brain
Project Number: 5R01EY029997-02
Contact PI/Project Leader: PASUPATHY, ANITHA
Awardee Organization: UNIVERSITY OF WASHINGTON
Abstract Text
PROJECT DESCRIPTION
Collaborating PIs and Consultant
United States
PI: Anitha Pasupathy, Dept. of Biological Structure, University of Washington, Seattle, USA
Co-PI: Wyeth Bair, Dept. of Biological Structure, University of Washington, Seattle, USA
Japan
PI: Isamu Motoyoshi, Dept. of Life Sciences, The University of Tokyo, Japan
Consultant: Hidehiko Komatsu, Tamagawa University, Japan
Specific Aims
Our visual system endows us with a diverse set of abilities: to recognize and manipulate
objects, avoid obstacles and danger during navigation, evaluate the quality of food, read text,
interpret facial expressions, etc. This relies on the neuronal processing of information about
form and material texture along the ventral pathway of the primate visual system (Ungerleider &
Mishkin, 1982; Felleman & Van Essen, 1991). Studies over the past several decades have
produced detailed models of how visual information is processed in V1, the earliest stage along
this pathway (Hubel & Wiesel, 1959, 1968; Movshon et al., 1978a, b; Albrecht et al., 1980), but
beyond V1 our understanding of visual processing and representation is limited. This is
particularly true with regard to our understanding of how visual representations of form and
texture jointly contribute to object perception and recognition. The broad goal of this proposal is
two-fold: to develop an experimentally driven, image-computable model of how naturalistic
visual stimuli are processed in area V4, an important intermediate stage along the ventral visual
pathway (Aim 1), and to discover how such a representation contributes to perception (Aim 2).
Past studies have shown that V4 neurons are sensitive to both the form (Desimone and Schein,
1987; Kobatake and Tanaka, 1994; Gallant et al., 1993; Pasupathy and Connor, 2001; Nandy et
al., 2013) and the surface texture of visual stimuli (Arcizet et al., 2008; Goda et al., 2014;
Okazawa et al., 2015). But because expertise is narrow and experimental time is limited,
scientists tend to focus exclusively on the encoding of form or texture and not on their joint
coding. For example, in the laboratories of the USA portion of this collaboration, we have until
now focused on form processing by carrying out neurophysiological studies using 2D shapes
with uniform surface properties to investigate how object boundaries are encoded (Oleskiw et
al., 2014; Popovkina et al., 2016). We have modeled our data by comparing the representation
of V4 neurons to that of the units in AlexNet (Pospisil et al., 2015), a prominent convolutional
neural net (CNN) trained to recognize objects (Krizhevsky et al., 2012). At the same time, the
Japanese contingent of this collaboration has investigated the encoding of surface texture and
gloss in human perception without associated form encoding (Motoyoshi et al., 2007; Sharan et
al., 2008; Motoyoshi, 2010; Motoyoshi & Matoba, 2012). Here we propose to bring our
respective expertise in studying form and texture encoding to bear on the question of how
naturalistic stimuli with both form and surface cues are encoded in area V4 and how these
representations support human visual perception. Our specific aims are:
Aim 1. To build a unified image-computable model for neuronal responses to shapes and
textures in area V4
V4 responses to 2D shapes with uniform luminance/chromatic characteristics can be explained
by a hierarchical-Max (HMax) model for object recognition that emphasizes boundary features
(Cadieu et al., 2007). Such responses can also be explained by units in artificial deep
convolutional networks, in which boundary features are not explicitly emphasized (all features
are learned from initially random weights). On the other hand, V4 responses to texture patches
can be well explained by a higher-order image-statistics-based model (Okazawa et al., 2015).
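The statistics-based account can be sketched with a toy example (a loose illustration under assumed, simplified statistics, not the published Okazawa et al. model): summarize each image patch by a few marginal and local spatial statistics, then fit a linear map from those statistics to a neuron's responses. The "neuron" and patches below are synthetic stand-ins for recorded data.

```python
import numpy as np

rng = np.random.default_rng(0)

def texture_statistics(patch):
    """Summarize a grayscale patch with marginal and local spatial
    statistics -- a toy stand-in for higher-order image-statistics models."""
    z = (patch - patch.mean()) / (patch.std() + 1e-12)
    return np.array([
        patch.mean(),                   # mean luminance
        patch.std(),                    # contrast
        np.mean(z**3),                  # skewness
        np.mean(z**4),                  # kurtosis
        np.mean(z[:, :-1] * z[:, 1:]),  # horizontal 1-pixel correlation
        np.mean(z[:-1, :] * z[1:, :]),  # vertical 1-pixel correlation
    ])

# Synthetic patches and a hypothetical "V4 neuron" whose response is a
# noisy linear function of the statistics (stand-in for recorded data).
patches = [rng.standard_normal((32, 32)) for _ in range(200)]
X = np.stack([texture_statistics(p) for p in patches])
true_w = np.ones(X.shape[1])
y = X @ true_w + 0.05 * rng.standard_normal(len(X))

# Fit the statistics-to-response mapping by least squares and evaluate
w, *_ = np.linalg.lstsq(X, y, rcond=None)
r = np.corrcoef(X @ w, y)[0, 1]
print(f"prediction correlation r = {r:.2f}")
```

In the actual aim, the predictors would be a model's full statistic set computed on the experimental stimuli and y the recorded V4 responses; comparing such fits against HMax- or CNN-based predictors is one way to ask whether a single model accounts for both stimulus classes.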
Using shape data from the Pasupathy lab and texture data from the Komatsu lab (Japanese
consultant), we will ask whether responses of V4 neurons to shapes and textures can be
NIH Spending Categories: Eye Disease and Disorders of Vision; Neurosciences