While the different sensory modalities are sensitive to different stimulus energies, they are
often charged with extracting analogous information about the environment. Neural systems may thus have evolved to implement similar algorithms across modalities to extract behaviorally relevant stimulus information, leading to the notion of a canonical computation. In both vision and touch, information about motion is extracted from a spatiotemporal pattern of activation across a sensory sheet (in the retina and in the skin, respectively), a process that has been extensively studied in both modalities. In this essay, we examine the processing of motion information as it ascends the primate visual and somatosensory neuraxes and conclude that similar computations are implemented in the two sensory systems.
The nervous systems of humans and other mammals contain sensory receptors that differ in their sensitivities to different categories of stimuli. In touch, mechanoreceptors embedded in the skin respond to physical deformations of the skin; in vision, photoreceptors in the retina respond to light (Fig 1A and 1B). Although the brain modules for processing different types of inputs are largely distinct, the internal organization of these modules is surprisingly similar. In particular, sensory areas exhibit a topographic organization, wherein nearby neurons respond to similar stimulus features. This organization is columnar in the sense that, while neuronal response properties differ along a direction parallel to the cortical surface, they tend to be similar along the perpendicular direction. In mammals, columns span the six layers of neocortex, and the connectivity within and between these layers is similar in most brain regions. These commonalities have led to the notion of a canonical circuit [2,3] that implements canonical computations. In this conception, cortical networks devoted to different sensory modalities differ only in the peripheral receptors that provide them with input and are otherwise identical or at least highly similar [4,5].
This is a powerful idea: to the extent that neural circuits perform canonical functions, we
may be closer to understanding the brain than we realize. That is, some of the more complex functions performed by sensory systems—face recognition or texture identification, for example—might reflect relatively simple computations, iterated over multiple stages of neural processing in different modalities. Although this idea was proposed long ago on physiological [6,7] and theoretical grounds, there has been little progress in testing it over the ensuing decades.
In this essay, we compare sensory processing in vision and touch to assess the degree to
which analogous mechanisms are implemented in these modalities to solve analogous problems. To this end, we exploit recent developments that have led to algorithmic descriptions of a key function carried out by both systems, namely the processing of stimulus motion. The development of quantitative models of motion processing has yielded a reasonably clear picture of the computations carried out by the cortex in vision [10,11] and in touch. Moreover, recent advances in statistical modeling have opened up new approaches to identifying and comparing neural computations in high-level sensory structures.
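To make the idea of extracting motion from a spatiotemporal pattern of activation concrete, the following is a minimal sketch of a classical opponent delay-and-correlate (Reichardt-type) detector operating on a one-dimensional "sensory sheet." It is an illustration of the general class of correlation-based motion computations, not the specific quantitative models cited above; the array shapes, the one-step delay, and the moving-spot stimulus are all illustrative assumptions.

```python
import numpy as np

def reichardt_direction(frames):
    """Opponent delay-and-correlate motion estimate over a 1-D sensory sheet.

    frames has shape (time, space): each row is the activation of the sheet
    at one time step. Each elementary detector multiplies one receptor's
    delayed signal by its neighbor's current signal; subtracting the
    mirror-symmetric pairing yields a signed direction estimate.
    """
    delayed = frames[:-1]   # each receptor's signal delayed by one time step
    current = frames[1:]    # the undelayed signal one step later
    # Rightward subunit: left receptor (delayed) x right neighbor (current)
    rightward = delayed[:, :-1] * current[:, 1:]
    # Leftward subunit: mirror-symmetric pairing
    leftward = delayed[:, 1:] * current[:, :-1]
    # Opponent sum: > 0 signals rightward motion, < 0 leftward
    return float(np.sum(rightward - leftward))

# Illustrative stimulus: a bright spot sweeping rightward across 10 receptors
sheet = np.zeros((10, 10))
for t in range(10):
    sheet[t, t] = 1.0
```

Calling `reichardt_direction(sheet)` returns a positive value for this rightward sweep; reversing the frames in time (`sheet[::-1]`) makes it negative. The same delay-and-correlate scheme applies whether the sheet is a strip of retina or of skin, which is precisely the sense in which the computation is candidate-canonical.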
We suggest that the brain regions devoted to vision and touch, despite receiving fundamentally different physical inputs, implement many of the same processing strategies. We propose that the identification of canonical computations can be used as a starting point for the development of a quantitative understanding of other brain regions. Such a convergence of ideas has important implications for both basic and applied neuroscience.