Primary Visual Cortex
The cytoarchitecture of sensory and motor cortex is columnar. That is, although there are at least six layers of cortex, cells across these layers that carry out similar functions or represent similar areas of the visual field are stacked in register, just as they are in the LGN.
Almost all the cortex in the human brain is neocortex (“new cortex”) with six layers, named from the most superficial layer nearest the pial surface on the outside (layer I) to the deepest layer near the white matter on the inside (layer VI). In general, inputs to cortex arrive at layer IV. The visual cortex is so complex that this layer is subdivided. We have already seen that so many axons come into this layer that it forms a visible white stripe, the stria of Gennari, which gives striate cortex its name.
The axons of LGN relay cells arrive in a subdivision of layer IV called layer IVc. Magnocellular axons arrive in layer IVcα, while parvocellular axons arrive in the slightly deeper layer IVcβ.
Koniocellular axons carry color information that ultimately came from retinal cones and cone bipolar cells, passed along via color-opponent ganglion cells. These koniocellular axons are extremely metabolically active, and their postsynaptic target cells stain avidly for a mitochondrial enzyme called cytochrome oxidase. Because these dark-staining, metabolically active regions look like blobs, they are called blobs. Between the blobs lie the cells that receive information from the M and P pathways; these regions are called interblobs. Blobs are present in all cortical layers except IV.
Because the information from both eyes is in close proximity, the opportunity exists to calculate the depth of a visual object from the difference between the two eyes’ views, called binocular disparity; the resulting depth perception is called stereopsis. Faraway objects cast photons onto almost identical regions of both the right and left retinas. Remember that experiment we did where we covered one eye, then the other, to confirm that objects in the visual world are represented by cells in both retinas? Now try this with a faraway object. The moon, for example, does not shift very much between its representation in the right eye and the left eye. A closer object, something you can reach with your hand, will shift quite a bit as you alternate covering each eye. This disparity is the main binocular cue for determining how far away visual objects are. In about 3% of people, one eye or the other conveyed “bad” information early in development (often because of strabismus, a misalignment of the eyes) and was ignored; the resulting lack of stereoscopic depth perception is called stereoblindness. For these people, there is no depth perception in the absence of other cues (for example, we know that people tend to be between 1 and 2 meters tall and can guess at their distance this way). ViewMaster® slides and stereoscopes carry no particular joy for these people.
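The geometry behind stereopsis can be sketched with a simple pinhole-camera stereo model. This is an illustrative sketch, not from the text: the function name and the numbers (an interpupillary distance of about 6.5 cm and an arbitrary focal length in pixels) are assumptions chosen to show why large disparities mean near objects and tiny disparities mean far ones.

```python
# Minimal sketch of stereo depth: under a pinhole-camera model,
# distance = baseline * focal_length / disparity.
# All specific values below are illustrative assumptions.

def depth_from_disparity(baseline_m, focal_px, disparity_px):
    """Estimate distance to a point from its binocular disparity.

    baseline_m   -- separation between the two eyes/cameras (meters)
    focal_px     -- focal length expressed in pixels
    disparity_px -- horizontal shift of the point between the two views
    """
    if disparity_px <= 0:
        raise ValueError("disparity must be positive for a finite depth")
    return baseline_m * focal_px / disparity_px

# Interpupillary distance ~0.065 m; assumed focal length of 1000 px.
near = depth_from_disparity(0.065, 1000, 130)  # big shift -> close object
far = depth_from_disparity(0.065, 1000, 2)     # tiny shift -> distant object
print(near, far)  # 0.5 (m) vs 32.5 (m)
```

Note the inverse relationship: as disparity shrinks toward zero, estimated depth grows without bound, which is why the moon barely shifts between the two eyes’ views.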
If you look carefully at this pair of photographs, you will see that they show the same scene taken from slightly different angles. Using the instructions found on this webpage, you might be able to free-fuse the two images and see the scene in stereo without special instrumentation.
Another way in which visual scenes are analyzed in the visual cortical columns (blobs and nearby interblobs) is called orientation selectivity. While receptive fields in both retinal ganglion cells and LGN relay cells tend to be center-surround, receptive fields in the visual cortex tend to respond best to bars of light oriented at a particular angle. The preferred orientation changes in a regular, stepwise progression as we move from one column to the next.
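Orientation selectivity can be captured with a toy tuning-curve model. This is my own illustrative sketch, not a model from the text: the cell’s response peaks when the stimulus bar matches its preferred angle and falls off as the bar rotates. Orientation is 180°-periodic (a bar at 0° is the same as a bar at 180°), hence the factor of 2 inside the cosine.

```python
import math

# Toy model of an orientation-tuned cortical cell: a half-rectified
# cosine tuning curve, periodic over 180 degrees of bar orientation.
# The tuning shape is an illustrative assumption.

def orientation_response(stimulus_deg, preferred_deg):
    delta = math.radians(stimulus_deg - preferred_deg)
    return max(0.0, math.cos(2 * delta))  # 1.0 at preferred angle, 0 beyond 45 deg off

# A cell preferring vertical bars (90 degrees):
for angle in (0, 45, 60, 75, 90):
    print(angle, round(orientation_response(angle, 90), 2))
# 0 -> 0.0, 45 -> 0.0, 60 -> 0.5, 75 -> 0.87, 90 -> 1.0
```

A column of such cells sharing one preferred angle, sitting next to a column preferring a slightly rotated angle, gives the regular progression of preferred orientations described above.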
Taken together, the orientation columns, left and right eye columns, and blobs are all arranged in a regular pattern in primary visual cortex (area 17, striate cortex).
If we zoom out quite a bit, we can also see a pattern to the mapping of the visual field onto the contralateral visual cortex. The central (macular) part of vision occupies a relatively large area, about one-third to one-half of primary visual cortex depending on the person. This is nearest the posterior end of the calcarine sulcus, with the inferior contralateral visual field represented on the upper bank of the sulcus and the superior contralateral visual field represented on the lower bank.
Note that the visual field is still split between right and left. The visual hemifields are “stitched together” at a higher level of visual processing. This process is so seamless (sorry) that we don’t generally notice a line down the middle of our visual field. Certain rare lesions of the visual cortex can sometimes cause patients to perceive this, however.
From area 17, visual information begins to flow to higher-order visual processing in areas 18 and 19. The magnocellular stream begins to coalesce into a channel called the dorsal stream (named for its location in the macaque monkey, where it was first characterized). The dorsal stream is also called the “where” pathway. It is fast; its main concern is the location of fast-moving stimuli that might cause harm or represent prey, like a rock flying toward your head or a flightless bird that might be good eating for your family.
There is a “what” pathway as well. This is the slower pathway and is also called the ventral stream, after its position in the macaque. It conveys color information and is quite extensive in humans. Face-recognition and body-recognition regions are included in this pathway. For example, humans are quite sensitive to the movement of legs and arms in their fellow humans. It’s even possible for people to make accurate guesses about the gender of a stick figure made by using straight lines to connect the joints (ankle, knee, hip, etc.) after watching it move on a computer screen for several seconds.