How well do rudimentary plasticity rules predict adult visual object learning?
Fig 2
Model family for object learning in humans.
A. Model family. Each model in this family had two stages: an encoding stage, which re-represents an incoming pixel image as a vector (x), and a tunable decision stage, which uses x to generate a choice by computing linear choice preferences (c = Wx), then selecting the most preferred choice. Subsequent environmental feedback is processed to update the parameters of the decision stage. Specific models in this family correspond to specific choices of the encoding stage and the plasticity rule.
B. Possible neural implementation. The functionality of the encoding stage in A could be implemented by the ventral stream, which re-represents incoming retinal images as a pattern of distributed population activity in high-level visual areas, such as area IT. Sensorimotor learning might be mediated by plasticity in a downstream association region. Midbrain processing of environmental feedback signals could guide plasticity via dopaminergic (DA) projections.
C. Encoding stages. We built learning models based on several encoding stages, each drawn from a specific intermediate layer of an ImageNet-pretrained deep convolutional neural network.
D. Plasticity rules. We drew basic plasticity rules from statistical learning theory and reinforcement learning (see S1 Table). Each rule aims to achieve a slightly different optimization objective (e.g., the "square" rule attempts to minimize the squared error between the choice preference and the magnitude of the subsequent reward).
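To make the mechanics of panel A and the "square" rule in panel D concrete, here is a minimal sketch of one model in this family. The `encode` stub, feature dimensionality, learning rate, and toy two-choice reward scheme are illustrative assumptions, not the paper's actual stimuli or training procedure; in the paper, the encoding stage is an intermediate layer of an ImageNet-pretrained CNN, and the full set of rules is listed in S1 Table.

```python
import numpy as np

# --- Encoding stage (placeholder) ---------------------------------------
# In the paper, x would be the activations of a specific intermediate layer
# of an ImageNet-pretrained deep convolutional neural network. Here we
# substitute fixed random features so the sketch runs standalone; `encode`
# and N_FEATURES are illustrative assumptions.
N_FEATURES = 128

def encode(image_id: int) -> np.ndarray:
    """Deterministic stand-in for a pretrained CNN feature extractor."""
    return np.random.default_rng(image_id).standard_normal(N_FEATURES)

# --- Tunable decision stage ----------------------------------------------
class LinearDecisionStage:
    """Computes linear choice preferences c = W x, selects the most
    preferred choice, and updates W with a plasticity rule after feedback."""

    def __init__(self, n_features: int, n_choices: int, lr: float = 0.01):
        self.W = np.zeros((n_choices, n_features))
        self.lr = lr

    def choose(self, x: np.ndarray) -> tuple[int, np.ndarray]:
        c = self.W @ x                      # linear choice preferences
        return int(np.argmax(c)), c         # pick the most preferred choice

    def update_square(self, x, choice, c, reward):
        # "Square" rule: gradient step that reduces the squared error
        # between the chosen preference and the reward magnitude.
        err = reward - c[choice]
        self.W[choice] += self.lr * err * x

# --- One toy learning session ---------------------------------------------
rng = np.random.default_rng(0)
decision = LinearDecisionStage(N_FEATURES, n_choices=2)
for _ in range(500):
    image_id = int(rng.integers(10))        # sample one of 10 toy images
    x = encode(image_id)
    choice, c = decision.choose(x)
    reward = float(choice == image_id % 2)  # toy feedback: 1 if "correct"
    decision.update_square(x, choice, c, reward)
```

Swapping `update_square` for a different update rule, or `encode` for a different network layer, yields a different member of the model family, mirroring how panel A defines a model by its pairing of encoding stage and plasticity rule.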