VISUAL DEVELOPMENT
General introduction
The course in the Development of Vision covers a wide breadth of topics from a research-centered perspective.
Please see the course themes below to help you retain what you learn in this course. Our journey through topics in the Development of Vision is centered on a few key themes and questions. Keeping these themes in mind will allow you to retain the story of visual development for future use. Perhaps you will one day comfort a patient by explaining an aspect of development that concerns them? As a researcher who has worked with patients and special populations, I have experienced this firsthand.
What are the course themes?
Research that measured children and infants' visual abilities has taught us how the visual system changes as humans reach adulthood. Topics include the development of resolution acuity, color vision, face perception, and eye movements.
Vision develops primarily via experience. Researchers cannot control the visual experience of developing humans, so they cannot make strong inferences about how experience changes the anatomy and physiology of the visual system. We will turn to research using animal models, which has revealed powerful insights into how the neural basis of vision is altered via experience.
The experience of atypically developing and developed humans can link to the research with animal models by providing us with natural experiments in altered visual experience. We will discuss how researchers have used clinical populations to understand how vision develops when visual experience is abnormal (e.g., amblyopia, myopia).
All organisms develop throughout their lifespans. We will conclude the course with what researchers (working with both humans and animal models) have learned about the amazing aging visual brain.
INTRODUCTION: Visual Development Research for Optometry Students
How do you build a visual system? It is a deceptively simple question that has drawn researchers from psychology, biology, neuroscience, computer science, and even philosophy.
A typical story is that researchers at MIT assigned the problem of solving vision as a summer project to a graduate student. Imagine developing and building a visual system from scratch; what question would you ask?
Let’s begin with a list:
How do you define vision?
What information would you start with when encoding visual information?
How would you build your sensors so that they respond to visual information, given that we know the retina is immature at birth?
How would you connect the sensors so the encoded information can be processed to extract the biological information the organism needs?
What features of the visual environment are important?
How could you incorporate the probability of a failure to develop optimally (e.g., refractive error in the sensor, i.e., the eye)?
This list of questions is far from complete, and the exercise can quickly become exhausting to even the most dedicated of researchers.
Let’s file away how daunting, complex, and interdisciplinary the development of a visual system is from an engineering perspective. What about from a biological perspective? Researchers have come up with a list of questions that mirrors the one above, yet they face the challenge of designing experiments that target aspects of visual development in a controlled manner, and they must deal with data that are unavoidably noisy.
There is no way around it: developing a visual system is difficult. Understanding the biology is difficult, and translating basic science findings to the clinic is more difficult still. However, researchers have made astounding progress, which we will cover in this short course.
PART 1: Human Infants and Children
Visual development occurs in all vertebrate species. Visual systems begin in an immature state and both genetic predispositions and environmental input guide them to become fully functional.
In this course, we will tackle a few key questions:
--- REVIEW SECTION ---
Introduction:
Evolution has selected visual systems that suit each organism's adaptive environment; thus, the vertebrate visual system undergoes a sequence of changes before maturity. The human infant develops from birth but is prepared to learn: development begins before the onset of visual experience.
The critical period is when atypical experience has the largest effect on a given visual ability. Clinically, the critical period is relevant because implementing interventions during it yields the best outcomes for abnormalities caused by genetics, abnormal visual experience, environmental insults, infection, or individual differences.
Nature versus Nurture versus Noise?
Review: Retina to Cortex and some differences between adults and infants.
This sequence ought to be familiar:
The cornea focuses light (it develops in utero).
Then the lens provides adjustable refractive power (it is derived from the same progenitor cells as the cornea and also differentiates in utero).
Light hits the retina, where the fovea is crucial for clear vision. The foveal pit and cones are not mature at birth. It may seem obvious, but infants do not perceive patterns well at birth, which is why babies enjoy simple, high-contrast patterns.
Humans have three cone types. The cone response for adults is not the same as in infants. Sensitivity to short-wavelength light is better for infants.
Other differences between the adult and infant retina:
Infant cones tend to be shorter and fatter. The outer segment does not guide light to the photopigment with the same efficiency as adult cones. Early researchers were misled by their methods into concluding that infants could not see color.
There are changes in the connectivity in the visual system during development. A general principle of developmental neuroscience is that an abundance of neurons and connections are created, then based on experience, the system “prunes” or removes those that are not used in circuits that transmit useful visual information.
The ON/OFF pathways differ between infants and adults (specifically the ganglion cells, which also undergo “pruning”); their segregation develops with experience and is then preserved in the nervous system.
In general, rods develop earlier than the cones and have a different developmental time course.
After visual stimuli are encoded and processed by the retina, information about the visual world travels to several subcortical areas; our example today is the lateral geniculate nucleus, or LGN. The LGN is where ipsi- and contralateral retinal information is segregated. This segregation and the layers of the LGN begin to form before birth but only really “take off” once visual experience occurs.
The next big player in our story is the primary visual cortex, or V1. There, spatial vision mechanisms extract more features of the visual world, e.g., edges, contours, and stereo depth.
How are V1 simple cells (which have elongated, orientation-tuned receptive fields) formed? From an ensemble of LGN cells (which have center-surround receptive fields). The formation of simple cells requires appropriate visual input from the retina and LGN during development.
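The idea of building an oriented simple cell from an LGN ensemble (the classic Hubel and Wiesel scheme) can be sketched numerically: summing a few center-surround (difference-of-Gaussians) subunits whose centers are aligned along one axis yields an ensemble that prefers bars at that orientation. The subunit spacing, Gaussian widths, and sampling below are arbitrary illustrative choices, not physiological values.

```python
import math

def dog(x, y, sigma_c=0.5, sigma_s=1.0):
    """Difference-of-Gaussians: a center-surround (LGN-like) receptive field."""
    def g(s):
        return math.exp(-(x * x + y * y) / (2 * s * s)) / (2 * math.pi * s * s)
    return g(sigma_c) - g(sigma_s)

def simple_cell_response(bar_orientation_deg):
    """Sum three LGN-like subunits whose centers are aligned vertically,
    then probe the ensemble with a thin bar through the origin at the
    given orientation (0 deg = vertical)."""
    centers = [(0.0, -2.0), (0.0, 0.0), (0.0, 2.0)]  # vertically aligned
    theta = math.radians(bar_orientation_deg)
    response = 0.0
    for i in range(25):                 # sample points along the bar
        t = i * 0.25 - 3.0
        bx, by = t * math.sin(theta), t * math.cos(theta)
        for cx, cy in centers:
            response += dog(bx - cx, by - cy)
    return response

# The aligned ensemble responds far more strongly to a bar at its
# preferred (vertical) orientation than to an orthogonal one.
print(simple_cell_response(0) > simple_cell_response(90))
```

The design point this illustrates: orientation selectivity does not require orientation-tuned inputs, only an appropriately wired ensemble of untuned ones, which is why appropriate input during development matters for the wiring.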
Beyond V1, more complex visual processing occurs, which requires adequate visual input from the early circuits to develop correctly. These are “high-level” phenomena.
Color constancy, the ability to maintain consistent color perception across a range of different illuminants, is first found in V4. It involves a very complex calculation and is an example of a “high-level” phenomenon in vision.
Motion is a complex stimulus that requires many mechanisms along the feed-forward visual stream to perceive. For example, brightness transients must be detected (photoreceptors), and contrast increments and decrements must be encoded and processed (bipolar cells, retinal ganglion cells). Motion, or temporal vision, is a good example of how visual development requires many systems to mature before more complex stimulus properties can be extracted from the visual environment.
This image shows ocular dominance columns, a well-studied feature of the visual cortex whose development has been investigated extensively. A technique called optical imaging generates these maps: researchers observe a section of the visual cortex while presenting stimuli of multiple orientations to each eye. By mapping the responses, optical imaging reveals the columnar layout of the visual cortex.
Colors: orientation preference, which is arranged in “pinwheel” patterns across the cortex.
Right-eye (grey) and left-eye (white) ocular dominance columns
Cytochrome-oxidase Blobs (white)
There are two other features of interest in this map. Where the colors intersect lie the pinwheel centers. Cells at the center of a pinwheel respond to a wide range of orientations; that is, they are unselective for orientation. Being unselective for orientation does not mean they carry no information in the feed-forward visual system, merely that they have no strong orientation preference.
Also, the cells at the pinwheel centers are notable from a developmental perspective. The visual system requires experience to arrange itself into orientation-tuned ocular dominance columns. Early in development, many cells in the cortex respond like those located within pinwheel centers. Visual experience makes cells more specialized for carrying orientation information.
Distinctions to recall or remember:
1. Magnocellular system = motion tasks.
Parvocellular = spatial tasks.
2. Ventral pathway = color and form. The "what" pathway.
Dorsal pathway = motion. The "where" pathway.
These distinctions are worth keeping in mind when considering how visual development progresses. Recall our example of motion. To perform a motion task optimally, the parvo/ventral pathways must efficiently extract the spatial objects in a scene. To extract objects, edges are required. In your own experience, where do you “see” motion? I believe you’d agree it occurs at the edges of objects (and research backs up your experience).
There are many levels of visual-neural processing (summarized above) that develop at different rates and build upon each other. Both genetics and visual experience guide each level, and, like all biological systems, each has “randomness,” or noise, in its development.
Many factors can derail development. A non-exhaustive list includes unnatural environmental experience, exposure, infection, injury, genetics, and epigenetics. We’ll see examples of all of these processes disrupting visual development in this course.
A clinical example of visual development disruption resulting in an abnormal outcome is anisometropic (different refractive error in the two eyes) amblyopia. The difference in refractive error in the two eyes, if untreated, eventually will alter the primary visual cortex
and disrupt binocular vision (behaviorally) and ocular dominance columns (physiologically).
--- END REVIEW SECTION ---
--- THEORY SECTION ---
Visual Development Concepts:
The nature-nurture or genetics-environment dichotomy is a classic distinction in visual development. In the end, both genetics and the environment guide the development of vision. However, the relative influence of “pre-programmed” biology and environmental experience lies at the heart of visual development research. While touted for decades, many genetic treatments appear to be decades away. Even if we had a genetic treatment for a developmental disorder, the environment would still influence the new combination of the disordered biology plus the effects of treatment. Thus, understanding the role that experience plays in interventions (even for fully developed individuals) is a key application of visual development research.
Researchers can manipulate both genes (in some model organisms) and the environment to determine their influence on development. Mice are a classic model system for studying genetics. Comparing wild-type (typical) animals with mutants bred to lack a specific gene is a common genetic approach. However, it is rare for diseases, especially diseases of the eye, to be caused by a single gene mutation in humans. For example, there is an incredibly complex network of genes implicated in myopia development. One single-gene disorder, Marfan syndrome, can cause the lens to dislocate.
We cannot ethically alter human experience during development in a controlled way. Thus, controlled experiments with animal models provide key causal insights about the aspects of the environment that affect visual development. Our knowledge from humans relies on studying atypical development (e.g., untreated amblyopia), the effects of a treatment (e.g., myopia control), or correlations between environmental experience and development (e.g., measuring light exposure with a smartwatch).
The Development of Vision is an interdisciplinary research topic. Let’s look at a framework for thinking about how visual development progresses and the types of developmental trends that we see.
Richard Aslin first introduced the framework shown in the slides in the 1980s. The slides give a simplified graphical representation of a more mathematically specified model of visual development, but it suits our purposes. Since we are looking at development, our questions center on how the visual system changes over time; the X-axis of the data supporting the concepts in this course is nearly always time. The units depend on the developmental time course of the visual ability under discussion: for some visual abilities the time course is rapid (days or weeks), while for others it is slower (months or years).
The graph in the slides shows several possible developmental trajectories. The dotted line is
the onset of experience. The Y-axis is an abstract amount of development of an arbitrary visual ability (low or high).
Fully developed visual mechanisms are heavily influenced by genetics and the gestational environment. At the onset of experience, they are highly developed. These are evolutionarily programmed. However, visual experience may be necessary for their maintenance.
Partially developed mechanisms, that is, mechanisms that require visual experience to develop, will be our focus in this course. For a partially developed mechanism, experience promotes development, making the processing attuned to the appropriate input. If a mechanism does not receive the appropriate visual experience, it will remain at an immature level of development, or the system may lose visual functioning entirely.
Undeveloped mechanisms require visual experience. The onset of visual experience begins the developmental process, and within this framework, this is called induction.
Summary of Development Terms:
Maturation occurs when a mechanism develops independently of visual experience.
Maintenance occurs when visual experience is needed to keep the mechanism functioning optimally.
Facilitation of a mechanism occurs if appropriate visual experience accelerates the development of the mechanism.
Attunement describes a mechanism that is partially developed at birth but requires visual experience to adjust its response.
Induction is the term used to describe a process whereby a mechanism requires visual experience to develop.
Visual development research aims to discover a critical or sensitive period for visual development. The critical period for visual development is the time when abnormal experience has the most impact. Once a critical period is discovered, researchers use the following logic. If the visual system is most sensitive to disruption during a given time, we have discovered that the visual system is flexible or plastic during this period. If the system is plastic or open to being changed via visual experience, it is undergoing developmental change based on the environment.
The visual system differs in sensitivity to various stimulus dimensions (e.g., color, orientation, motion), and some visual abilities require more visual experience to develop than others.
See the slide “Developmental mechanisms and their sensitivity”. The top graph’s X-axis represents the stimulus dimension driving a conceptual visual mechanism (e.g., the wavelength of light stimulating the cone), and the Y-axis represents that mechanism’s response (e.g., the response of the mechanisms in the visual system tuned to blue).
Input A represents a visual experience that yields a fully developed mechanism. Input A represents an environment that stimulates a mechanism where it is most sensitive. In our color example, this would be a visual environment with an ecologically appropriate amount of short-wavelength light.
Input B in the plot would be a visual experience that stimulates the filter sub-optimally, resulting in less development than in Input A; in our example, this could be a blue-green (cyan) light.
Input C represents an abnormal visual experience for the mechanism. While Input C may be a rich visual experience for other mechanisms (e.g., a long-wavelength processing circuit), for our example mechanism, the blue processing circuit, Input C lies outside what is required for development.
Aslin’s framework also incorporates the idea of a mechanism’s robustness. Depending on the processing circuit, a developmental mechanism may develop full functionality with sub-optimal visual input. Why is this important? Robust mechanisms tend to be those that lean more on genetics and/or serve more basic functions of the visual system, required for any visual development to occur.
The alternative is a mechanism with narrow filtering. Without the appropriate visual experience, this sort of mechanism will not develop typically or may not develop at all. One example of this is face or expression recognition, which can be impaired if a child is not exposed to the appropriate social experience. The biology of an individual can produce narrow filtering of a visual mechanism. For example, a neuro-atypical person (e.g., someone with autism spectrum disorder) may have a narrow filter and have difficulties with face recognition, even with typical experience.
The history of the visual system influences visual development and can broaden or narrow a filter over time. For example, if an organism experiences imbalanced input to the two eyes, recovery can be induced by treatment, but sub-optimal recovery of function is likely.
--- END THEORY SECTION ---
--- BASIC HUMAN VISUAL DEVELOPMENT SECTION ---
This section travels up the visual pathway, noting key findings in visual development from experiments that address basic visual processes through more complex ones such as face perception and eye movements. We will discuss the methods researchers use to measure visual development in humans, along with a few examples of how the knowledge gleaned from these experiments translates to work in the clinic. Measuring infant visual functioning is difficult, however, and many important findings in visual development have resisted robust translation into clinical practice. Still, there remains promise in some areas.
Keep in mind the idea from the theory section that the development of basic visual abilities provides the building blocks for more complex visual abilities. For example, if spatial vision (e.g., acuity) develops sub-optimally, object recognition, stereo, and even motion abilities are compromised.
Let’s start at the beginning. What sort of visual abilities does a newborn have? Not much. A newborn infant is not yet a "visual creature" -- how well do infants see? About as well as if you had a plastic bag over your head, probably worse.
Here is a list of things we know newborns can do:
Also, possibly orient to their mother’s or primary caregiver’s face reliably.
The optics of a newborn infant are typically clear, with no large aberrations that substantially degrade the retinal image. Children are born hyperopic because their eyes are small. They begin to emmetropize as they grow: the lens and cornea grow to focus images on the retina, but this process takes years. Also, many infants have astigmatism that resolves within the first year.
The most basic function of the visual system is to sense light. What is the time course of the development of light sensitivity? The slide “The Development of Light Sensitivity”
shows the combined results of four different experiments. The X-axis shows age from four to twenty weeks, with adult light sensitivity as a comparison. The Y-axis is logarithmic: more negative numbers correspond to greater sensitivity, because greater sensitivity means fewer photons are needed to generate a response. What do the results show? At birth, an infant is about 25 times less sensitive to light than an adult. However, sensitivity improves relatively rapidly, and by 18 weeks the infant is only about 4 times less sensitive than the adult.
What is the take-away? Light sensitivity is fundamental, and these data show how much light is needed to generate a response, in this case from the rods. One developmental take-away is that the rods, and the retinal and brain circuits required to generate a response, develop early and rapidly with normal visual experience.
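Since the slide's Y-axis is logarithmic, the sensitivity gaps quoted above are easy to restate in log units, the scale vision scientists typically use. A small sketch, using only the factors quoted in the text:

```python
import math

def sensitivity_gap_log_units(factor_less_sensitive):
    """Convert a 'k times less sensitive' factor into log10 units,
    the scale used on the slide's Y-axis."""
    return math.log10(factor_less_sensitive)

# Figures quoted in the text: ~25x less sensitive near birth,
# ~4x less sensitive by 18 weeks.
gap_birth = sensitivity_gap_log_units(25)   # about 1.4 log units
gap_18wk = sensitivity_gap_log_units(4)     # about 0.6 log units
print(f"{gap_birth:.2f} log units at birth, {gap_18wk:.2f} by 18 weeks")
```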
Thought question: what if the infant were raised in the dark or only under very dim light? What are the possible developmental trajectories?
The human visual system is extremely sensitive to color. Each of the three human cone types is maximally sensitive at a different peak wavelength, but all the cones respond to a relatively broad range of wavelengths. To compare the sensitivity to wavelength (related to but not necessarily the same as color), researchers have measured the number of photons needed for 3 month-old children and adults to detect a spot of light.
Infant photopic sensitivity is lower overall than adults’ (in the slide “Chromatic light sensitivity,” the curves are scaled so the infant matches the adult at the peak wavelength of 550 nm). When represented in this scaled fashion, three-month-old infants’ sensitivity is higher than adults’ at short wavelengths, that is, the wavelengths detected primarily via the S-cones. Why are infants better than adults here? This is attributed to the clarity of their optics: all adults have accumulated some UV damage to their optics.
The puzzle of color vision: despite being more sensitive to short wavelengths, infants have difficulty discriminating colors. We can look at an xy chromaticity diagram (recall this from color vision). The solid curved line shows the normal adult data. Despite infant color vision being relatively more sensitive in detection, infants’ green and blue discrimination is desaturated compared to adults’. We’ll address this puzzle, but first we need a few ideas from spatial vision to help us.
Visual Acuity is a common measure of spatial visual function. It’s a quick and easy way to assess the ability to resolve fine details in humans. However, with pre-literate humans, it’s obviously more difficult. For older children, one could ask them to point to match the shapes of letters, but researchers can do better.
Optotypes (the slides use Auckland Set) are used for young children to match pictures instead of letters. First, the researchers have made all the shapes equally discriminable. Second, they’ve done this with both positive and negative contrast, which can test for asymmetries in contrast perception and the ON and OFF pathways.
However, matching remains difficult for the youngest children. The way around this is to use repeating patterns that vary in their frequency (i.e., how wide the stripes are in the squarewaves in the slides). When we move from squarewave to sinewave gratings, we refer to the rate of repetition as spatial frequency. Squarewaves contain a broad range of spatial frequencies that can be specified mathematically. To give you a sense of spatial frequency with sinewaves, the slides show a Campbell-Robson-style demo image: spatial frequency increases as you look from left to right, and contrast increases as you look from top to bottom.
What does spatial frequency with sinewaves have to do with natural images in the world? Any image can be synthesized from a set of sinewaves that vary in their contrast, frequency, orientation, and position. Classic work on spatial vision demonstrates that the visual system breaks the visual world into bands of spatial frequency, from the very coarse to the very fine. As we’ll see, in the developing visual system the ability to detect lower spatial frequencies emerges earlier than the ability to detect higher spatial frequencies.
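The claim that any image can be built from sinewaves can be illustrated with the simplest case: a squarewave grating is the sum of its odd sine harmonics (its Fourier series). A minimal sketch:

```python
import math

def squarewave_from_sines(x, n_harmonics):
    """Approximate a unit squarewave by summing its odd sine harmonics
    (the Fourier series of a squarewave): (4/pi) * sum of sin(k*x)/k
    over odd k."""
    total = 0.0
    for k in range(1, 2 * n_harmonics, 2):  # odd harmonics 1, 3, 5, ...
        total += math.sin(k * x) / k
    return 4.0 / math.pi * total

# With more harmonics, the sum converges to the ideal squarewave,
# which equals 1 over the positive half-cycle (e.g., at x = pi/2).
for n in (1, 5, 500):
    print(n, round(squarewave_from_sines(math.pi / 2, n), 4))
```

This is also why a squarewave grating "contains" a broad range of spatial frequencies: each harmonic is a sinewave at a higher frequency.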
To measure contrast sensitivity behaviorally, one must find the amount of contrast required for an observer to detect the presence of a sinewave grating. The reciprocal of that threshold contrast is a quantity known as contrast sensitivity. The contrast sensitivity function (CSF) has a special place in vision science because it is the best predictor of the visibility of low-contrast objects. It has a characteristic shape: sensitivity is highest for medium frequencies and falls off at frequencies above and below them. Mid-range frequencies have been shown to be an important driver of emmetropization and the prevention of myopia. Also, the highest spatial frequency that can be detected is related to the observer’s visual acuity in a consistent way.
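As a concrete illustration of the reciprocal relationship, here is a minimal sketch; the threshold values are made up purely to show the characteristic inverted-U shape of the CSF:

```python
def contrast_sensitivity(threshold_contrast):
    """Contrast sensitivity is the reciprocal of the threshold contrast
    (the minimum contrast needed to detect the grating reliably)."""
    if not 0 < threshold_contrast <= 1:
        raise ValueError("contrast must be in (0, 1]")
    return 1.0 / threshold_contrast

# Hypothetical thresholds at three spatial frequencies (cpd), chosen
# only to illustrate the CSF shape: best at medium frequencies.
thresholds = {0.5: 0.02, 4.0: 0.005, 20.0: 0.2}
for sf, th in thresholds.items():
    print(f"{sf} cpd: sensitivity {contrast_sensitivity(th):.0f}")
```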
Also, the human visual system breaks the image into different orientations. Through development, as the system becomes better able to encode orientation, vision improves. We can see what the visual world looks like to a neuron in V1 that encodes only a narrow range of orientations.
How is this measured? Forced-choice preferential looking is a useful method that takes advantage of infants’ preference to look at something over nothing. However, the method is complicated and relies on a trained adult observer judging the infant’s looking behavior. In recent decades, it has been refined with technology such as eye-tracking.
However, there is a catch: observing no preference does not tell us whether the infant can tell the difference between two stimuli. To determine whether infants can discriminate stimuli, the habituation technique is used. Habituation takes advantage of the fact that babies (all humans, really) get bored when presented with the same stimulus repeatedly. In the habituation technique, the experimenters repeat the same stimulus until the baby gets bored, then change it to a new one. If the infant’s behavior shows release from the “bored” state, the experimenters infer that the baby can distinguish the new stimulus from the old. Any number of dependent variables can be measured, from gaze position and the frequency of sucking on a pacifier to widening of the eyes.
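One common way to operationalize "the baby gets bored" is to declare habituation once looking time falls below some fraction (often 50%) of the initial baseline. The sketch below assumes that convention; the window size and criterion are illustrative choices, not a standard:

```python
def habituated(looking_times, window=3, criterion=0.5):
    """Illustrative habituation rule: the infant counts as habituated once
    mean looking time over the last `window` trials falls below
    `criterion` times the mean of the first `window` trials.
    (The 50%-of-baseline rule is a common convention, not a universal one.)"""
    if len(looking_times) < 2 * window:
        return False
    baseline = sum(looking_times[:window]) / window
    recent = sum(looking_times[-window:]) / window
    return recent < criterion * baseline

trials = [10.0, 9.5, 9.0, 6.0, 4.0, 3.5, 3.0]  # seconds of looking per trial
print(habituated(trials))  # habituated: time to swap in the new stimulus
```

If looking time rebounds after the stimulus change (dishabituation), the experimenter infers discrimination.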
Teller acuity cards are a standardized clinical test that has been used to test infant vision for many years. Unfortunately, few developments beyond Teller acuity cards have made it to the clinic. The prescribed testing distance is 0.5m in an illuminated room. A squarewave grating is presented on one side, and a blank gray field on the other side of the card. The testers will run the infant through the cards and note the squarewave frequency that corresponds to no preference. While this technique has limitations, it has the advantage that there are norms that are well established, and a grating acuity can be determined from the stripe width in a relatively short amount of time. However, testing babies is difficult -- the subject may change their state or
attention.
Let’s look at the norms collected over the years with Teller acuity cards. The slide “Acuity from Teller acuity cards” shows a table of Age in months versus Teller acuity translated into a Snellen acuity. There is rapid development from one month to 48 months (4 years). This table only expresses the mean observations. There are individual differences in the rate and sensitivity of development among those children tested. This is shown in the slide “Teller acuity cards - individual differences”. Some 4-year-olds are no better than 6-month-old babies. Is this poor development or might the development of visual acuity have a long time course or have high individual variability in development?
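The table's translation from grating acuity to Snellen notation can be approximated with the common convention that 30 cycles per degree corresponds to 20/20. The sketch below assumes that convention; grating acuity and optotype acuity are related but not identical measures:

```python
def cpd_to_snellen(grating_cpd, reference_cpd=30.0):
    """Convert grating acuity (cycles per degree) to an approximate
    Snellen fraction, assuming the common convention that 30 cpd
    corresponds to 20/20. An approximation only."""
    denominator = 20.0 * reference_cpd / grating_cpd
    return f"20/{denominator:.0f}"

print(cpd_to_snellen(30))  # 20/20
print(cpd_to_snellen(1))   # a 1 cpd grating maps to 20/600 under this convention
```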
Several large-scale longitudinal studies (in which observers are repeatedly measured over time) show that visual acuity continues to develop through at least age six. The results are shown in the slide “Time Course - Teller Acuity”. Does acuity develop beyond age six? Things get tricky here because ages 6 through 8 are the prime years for developing myopia.
The slide “CSF Development” shows the results of studies that precisely measured the contrast sensitivity function (recall: CS is the reciprocal of the contrast needed to detect a grating reliably) up to the age of 9 years. Note how contrast sensitivity develops from one through three months of age: the spatial frequency to which infants are most sensitive increases to approximately 0.5 cycles per degree, still a low spatial frequency. From 4 to 9 years of age, the peak spatial frequency approaches the adult-like value of > 2 cycles per degree. Sensitivity to the highest spatial frequencies measured in this study continues to develop through age 9.
Researchers continue to work with clinicians on fast and reliable methods for measuring the contrast sensitivity function. Some clinician-scientists take complicated mathematical/statistical approaches or use novel psychophysical methodologies. The slide “Crude but Effective” describes one method that is simple and difficult to beat. It uses three tests that are well known in clinical practice. Measuring acuity with a tumbling-E chart gives the high-spatial-frequency cut-off of the CSF. Measuring Pelli-Robson contrast sensitivity at a letter size close to the peak of the CSF (for an age group) gives a measure of peak sensitivity. A low-contrast Bailey-Lovie chart provides a third point. With assumptions about the shape of the CSF, and by transforming the clinical data to common sensitivity units, a proxy for the contrast sensitivity function can be constructed. What’s the benefit? Even a crude estimate of the CSF can help explain visual dysfunction that is not captured by acuity alone.
A puzzle of color vision
This section returns to the puzzle we left aside to discuss spatial vision. The puzzle is this: even though infants have clear optics and marginally greater sensitivity to short-wavelength light than adults, infant color discrimination is worse than adults’ at all wavelengths. The development of spatial vision and the methods used to measure infant color vision provide an answer.
Let’s look at the CSF for luminance-modulated sinewave gratings and for gratings modulated not in luminance but in red-green color. The graph on the left should not be a surprise: it’s the CSF we discussed in previous slides. Red-green sinusoids are more difficult for adults to detect than luminance-modulated sinewaves, especially at high frequencies, but contrast sensitivity is generally around 100. For 16- and 24-week-olds, contrast sensitivity for red-green gratings is lower and less dependent on spatial frequency than for luminance-modulated gratings. Also, very little development seems to occur over those two months. What might be causing this?
Let’s consider photoreceptor development. To detect high-spatial-frequency stimuli (or have good acuity), the area over which photoreceptors capture light matters. The pictures show an en-face view of adult and newborn photoreceptors. The key difference? The size of the photoreceptor apertures. On the right of the slide are anatomical drawings of the developing photoreceptors from embryonic week 22 through birth and finally to their structure in the adult human eye.
Two changes are of note. First, the outer segment continually narrows, which allows finer sampling of the world; two everyday analogies are increasing the resolution of a computer monitor or the number of megapixels in a camera. Smaller photoreceptors can be packed more densely and collect light from a smaller area, thus signaling finer details in the retinal image. Second, notice how the outer segment lengthens with development; this part of the photoreceptor acts as a waveguide, capturing light more efficiently.
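The link between photoreceptor packing and fine detail can be made concrete with the Nyquist sampling limit: resolving one cycle of a grating requires at least two samples, so the finest resolvable frequency is set by receptor spacing. The spacings below are rough illustrative values, not measurements:

```python
def nyquist_limit_cpd(cone_spacing_arcmin):
    """Highest spatial frequency (cycles/degree) a cone mosaic can sample
    without aliasing: one cycle needs two samples, so the limit is
    1 / (2 * spacing), with spacing converted from arcmin to degrees."""
    spacing_deg = cone_spacing_arcmin / 60.0
    return 1.0 / (2.0 * spacing_deg)

# Illustrative (assumed) spacings: a finely packed adult-like foveal
# mosaic (~0.5 arcmin) versus a coarser, immature mosaic (~2 arcmin).
print(nyquist_limit_cpd(0.5))  # ~60 cpd for the fine mosaic
print(nyquist_limit_cpd(2.0))  # ~15 cpd for the coarse mosaic
```

The same logic explains the monitor/camera analogy in the text: halving the sample spacing doubles the finest representable frequency.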
The size of the photoreceptors and the time it takes for the outer segment to develop are the likely cause of the differences between luminance and red-green contrast sensitivity functions.
Now we have another piece of knowledge that can solve this puzzle -- the developmental trajectory and time course of infant photoreceptors. Another clever approach, the "ideal observer", was applied to the problem. The mathematical ideas behind the ideal observer can get complicated quite quickly, but what is important is that an "ideal observer" performs optimally (i.e., as well as the math allows) given the constraints placed upon it. Banks & Bennett placed the constraints of the infant photoreceptors on an ideal observer to see whether the development of the photoreceptors alone could explain infants' failures at color discrimination.
In the slide "Infant color discrimination failures", we see the percentage of male and female infants at 4, 8, and 12 weeks of age that fail to preferentially look at a red or green spot of light against a blank visual field. 4-week-olds mostly fail to show a preference for either color. The 8-week-old group is mixed. The 12-week-old group shows a preference for the colored spot versus the blank field. Thus we should conclude that 4-week-old infants cannot see color, correct? Not so fast.
The slide "Data & Infant Ideal Observer Prediction" shows, on the left, the Weber fraction (recall: the change in intensity over the intensity of the stimulus) measured in infants with different spot sizes. On the right is the Weber fraction predicted from the parameters of the infant photoreceptors and the "ideal observer". Recall from the previous slide that most of the 4-week-olds failed. From the stimuli and the number of successes, we can predict what the infants' Weber fractions would have been had they been measured directly, rather than recording only preference/no preference. The predicted Weber fraction for the 4-week-olds is larger than anything that could be produced using the stimuli in the original experiment. Now the magic happens -- these extremely high Weber fractions are predicted from the physiological parameters of infant photoreceptors! If we change the stimulus size or intensity to account for the photoreceptors, infants could (and do) pass color vision tests.
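The logic of a photon-noise-limited ideal observer can be sketched numerically. If the only noise is Poisson photon noise, the smallest detectable Weber fraction scales as 1/sqrt(N), where N is the number of photons caught. The photon counts below are illustrative assumptions, not the parameters Banks & Bennett actually used.

```python
import math

def ideal_threshold_weber(photons_caught, criterion_dprime=1.0):
    """Smallest Weber fraction an ideal observer can detect when the only
    noise is Poisson photon noise: to first order, w = d' / sqrt(N)."""
    return criterion_dprime / math.sqrt(photons_caught)

# Illustrative photon catches (not the published parameters): adult
# photoreceptors, with wider apertures and longer outer segments, catch
# far more photons from the same stimulus than infant photoreceptors do.
adult_w = ideal_threshold_weber(10000.0)  # small threshold
infant_w = ideal_threshold_weber(30.0)    # much larger threshold
print(adult_w, infant_w)
```

The point of the sketch: the infant's poor photon catch alone pushes the predicted threshold far above the adult's, with no need to invoke an immature color system.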
Methodology: Electrophysiology
Electrophysiological methods for measuring the vision of infants and children are generally only available in specialty clinics. From a clinical perspective, it is worthwhile to know about them for cases where such specialty care could reveal visual impairment, such as cerebral or cortical visual impairment, which can present in children born preterm.
Using electroencephalography (EEG) to measure the response of the visual cortex to visual stimuli has a long history in both adults and infants. EEG is measured as follows. Electrodes that capture the small electrical signals generated by the brain are placed at predefined locations on the scalp. Then, while the infant looks at the stimulus used to test vision, the signals from the set of electrodes are recorded. EEG requires dynamic stimuli (flashes, contrast polarity reversals) to generate a reliable signal.
EEG has advantages over behavioral measures for testing infant vision. Infant skulls are thinner than adult skulls, which attenuates the EEG signal less and so provides a more robust measure. Also, the infant does not need to provide a behavioral response or habituate to stimuli. However, even if a robust signal is present, EEG alone cannot determine whether the signal measured at the scalp is available for the infant to use to perform tasks. A signal may be present in the brain but lost due to noise or incomplete development of higher-level or motor-output areas of the brain.
Also, infants, in general, seem to tolerate wearing EEG electrodes well. They’ll cheerfully put on the cap or even fall asleep with electrodes on them.
What does visual evoked potential data look like when it is collected? The raw signal is a change in voltage at the scalp over time. An example of an adult signal is shown in the slide "Visual Evoked Potential / EEG - Raw Data". For visual evoked potentials (VEP), a stimulus is changed and the electrical signal trace is measured. Two quantities summarize the VEP signal: amplitude (in blue), the difference in voltage between the peak and the minimum of the signal, and phase (red arrow), the time at which the peak electrical response occurs relative to the stimulus. Note that these are very small signals, measured in microvolts; for reference, typical consumer batteries output 1.5 - 9 V.
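For a steady-state VEP, amplitude and phase at the stimulation frequency can be estimated with a single-frequency Fourier projection. This is a minimal sketch on a synthetic signal (the signal parameters are made up); real VEP pipelines average many epochs and reject artifacts before this step.

```python
import math

def vep_amplitude_phase(samples, sample_rate_hz, stim_freq_hz):
    """Estimate the sinusoidal amplitude (same units as the input; peak-to-
    trough is twice this) and phase (radians) of the response component at
    the stimulation frequency, via projection onto sine and cosine."""
    n = len(samples)
    re = im = 0.0
    for k, v in enumerate(samples):
        ang = 2 * math.pi * stim_freq_hz * k / sample_rate_hz
        re += v * math.cos(ang)
        im -= v * math.sin(ang)
    re, im = 2 * re / n, 2 * im / n
    amplitude = math.hypot(re, im)  # e.g., microvolts for scalp EEG
    phase = math.atan2(im, re)      # timing relative to stimulus onset
    return amplitude, phase

# Synthetic 8 Hz response, 5 uV amplitude, sampled at 1 kHz for 1 s.
fs, f = 1000, 8
sig = [5.0 * math.sin(2 * math.pi * f * t / fs + 0.4) for t in range(fs)]
amp, ph = vep_amplitude_phase(sig, fs, f)
print(amp)  # recovers ~5.0
```

The recovered phase is what shifts closer to stimulus onset as the visual pathway speeds up with development.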
Let's look at how the VEP signal in response to a flash of light changes from the first few days of life (at the bottom of the slide) to about one year of age. Three babies were repeatedly measured, and their signal traces are plotted in the graph; the scale bar at the bottom shows 20 microvolts. Note how much larger the infant signal is compared to the adult's. Researchers summarized how both the amplitude and phase change as the infant ages. The signal amplitude increases through the first year of life -- the brain's response to the same visual stimulus grows. Also, the phase of the signal moves closer in time to the stimulus onset; that is, the visual pathway's response to a visual stimulus becomes more rapid with development.
EEG systems are getting better at detecting signals (via cheaper amplifiers and better signal processing), which may increase their clinical utility. They have also become wireless and can connect to a smartphone app. The slide shows an example of one such device, a small headband.
EEG/VEP require dynamic stimuli, and they have been a key tool for measuring the development of motion/flicker perception. The slide "EEG – motion perception" shows a classic study that asks, "At what age can most infants see slow/fast flicker?". The stimulus was a grating reversing in contrast (light bars switch to dark and vice versa a number of times per second). A threshold was set for a "significant VEP" -- an EEG signal reliably distinguishable from the background.
There is a reliable EEG signal for slow flicker (3 rev/sec) in 80% of infants by 4-6 weeks of age, while it takes 6-8 weeks for a reliable signal to the fast flicker (8 rev/sec) to appear in the majority of infants. What does this tell us about development? Recall from the previous slide that the phase of the EEG response moves closer to the reversal time with age. If the brain's response is too slow to encode the reversals reliably, the EEG data will not reveal a reliable signal. In this case, the mechanisms that encode movement haven't developed enough to encode rapid changes in the environment. Reliable responses to slower changes are observed earlier because the phase lag does not "wash out" the signal, and because mechanisms that respond to slow motion develop more rapidly.
Where might this come from in infant visual experience? Consider what the infant's visual world might look like to a motion mechanism early in life. There is little spatial contrast because contrast sensitivity is still developing, so the spatial signals needed to develop the motion system are not present. Also, the infant's visual diet may consist chiefly of slow-moving motion signals, as infants spend most of their time stationary.
Sweep VEP
An under-utilized technique for measuring the CSF is the sweep VEP. It was developed almost 40 years ago, and only now does the technology seem ready to move it out of a niche area of research into clinical utility. As it is the fastest way to measure the CSF, it is worth looking at how it works.
The sweep VEP methodology is as follows. A display shows either a squarewave or sinewave stimulus. Once a trial begins, the display sweeps through spatial frequencies from high to low or low to high. Because EEG can record very rapidly, the sweep can be quick, and only a few sweeps are needed to generate a reliable CSF.
There is then some signal processing that is done to the EEG signal from the sweep. We don’t need to go into the mathematical details, but I used the same computational technique to break up the picture of Turing (the face) into different bands of spatial frequency. The “Sweep VEP” slide shows the results for three children from 18 to 29 weeks of age.
Notice the shape of the curves -- they are similar to CSFs collected behaviorally. The high-frequency cut-off relates to visual acuity, and there is less signal, or sensitivity, for frequencies lower and higher than the peak spatial frequency. However, a CSF measured with sweep VEP generally indicates better vision than a behavioral CSF. Why? Recall that EEG measures physiology; sources of noise or developmental limitations beyond the primary visual cortex (where this signal is generated) cause the discrepancy.
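The idea of splitting an image into spatial-frequency bands, as was done with the Turing picture, can be shown in one dimension with a crude two-band split: a boxcar blur keeps the low frequencies, and the residual keeps the high frequencies. This is only a toy illustration of the concept, not the actual filtering used for the slide.

```python
def moving_average(signal, width):
    """Crude low-pass filter: boxcar moving average (edges clamped)."""
    n, half, out = len(signal), width // 2, []
    for i in range(n):
        lo, hi = max(0, i - half), min(n, i + half + 1)
        out.append(sum(signal[lo:hi]) / (hi - lo))
    return out

def split_bands(signal, width):
    """Split a 1-D luminance profile into a low-spatial-frequency band
    (the blurred signal) and a high-spatial-frequency band (the residual).
    Adding the two bands reconstructs the original exactly."""
    low = moving_average(signal, width)
    high = [s - l for s, l in zip(signal, low)]
    return low, high

profile = [0, 0, 0, 10, 10, 10, 0, 0, 0]  # a bright bar on a dark background
low, high = split_bands(profile, 3)
```

The low band carries the coarse position of the bar; the high band carries its sharp edges, which is the kind of information a high-frequency cut-off removes.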
Optokinetic nystagmus (OKN) has also been used to measure infant visual development. Measuring OKN provides an observable behavior that is a reflex. OKN experiments on the development of motion perception agree with the VEP measures we saw above.
The use of OKN has shown promise recently, and groups are currently collecting norms of the OKN response and behavior. OKN provides a more rapid and perhaps more robust measure of visual function than acuity. Once normative data are acquired and devices approved, OKN may make its way into commercial clinical devices.
--- END BASIC HUMAN VISUAL DEVELOPMENT SECTION ---
--- COMPLEX VISUAL FUNCTION SECTION ---
We've covered methods of measuring visual development and the development of some basic visual tasks: acuity, contrast sensitivity, and motion flicker. This section will describe the development of more complex visual tasks, such as the ability to judge relative position, face perception, extracting information from cluttered environments, motion and form, depth perception, stereo vision, and peripheral vision.
It is worthwhile to look for commonalities in the developmental time course of these various visual abilities. I’ll point them out as we go along, but note when different abilities are considered to be developed and their developmental time course.
Taking these results as a whole will give you a sense of how visual development is coordinated among the various systems that are all developing at the same time in an interconnected way.
Vernier acuity refers to the ability to judge relative spatial position. Researchers use simple stimuli, such as lines or gratings, and displace a portion of the stimulus to the left or right. An observer's Vernier acuity is the smallest offset they can perceive.
Vernier acuity can be measured with preferential looking, as shown in the picture. A child will more readily orient to the stimulus with a perceivable offset. The smallest offset that is reliably preferred is their Vernier Acuity threshold. For older children, a verbal or key-press response (e.g., left/right) can be used to complete the task.
What is different developmentally between resolution and Vernier acuity? Both resolution acuity (e.g., Teller acuity) and Vernier acuity can be measured longitudinally in the same units. On the slide "Resolution Versus Vernier Acuity" we see resolution acuity plotted against age (8 to 22 weeks); it improves as the child develops. Vernier acuity starts out worse, but by 16 to 22 weeks it is better than resolution acuity. These data are an example of a lower-level visual ability needing to develop before Vernier, or position, acuity can develop.
Vernier acuity, when fully developed, is an example of a hyperacuity task. In hyperacuity tasks, observers perform better than predicted from an observer’s visual acuity limits. The continued development of Vernier Acuity is compared to Resolution Acuity in observers from a few months of age to 20 years of age. Generally, after a few months of age, the red dots, which represent Vernier Acuity thresholds, are smaller in minutes of arc and larger in equivalent grating acuity threshold -- that is, observers are more sensitive to relative position than their ability to resolve detail as measured by resolution acuity.
Resolution acuity, as we saw in the previous section, has a developmental time course that improves over the years (even if this is difficult to see on the logarithmic Y-axis). Vernier acuity also has a long time course, but the data show an interesting developmental finding: around ten years of age, position acuity develops to be even better than resolution acuity.
This later development is an example of later cortical development. By ten years of age, observers can better integrate information across spatial frequencies (from low to high) than younger children. The integration of spatial frequency information allows the visual system to judge position more finely. While you may not remember your vision improving dramatically at 10 years of age, visual development is still ongoing then and can be guided by engaging in demanding visual activities (from sports to video games).
Face Perception
Humans are social animals. The information provided by faces (identity, expression, lip-reading, gaze direction) is highly salient and valuable for us as a species. Because children take a long time before they can meet their own survival needs, they rely on facial cues to survive. The research literature on how we use vision to interpret information from faces is vast, and the developmental literature on face perception is likewise extensive. Clinically, face perception can be impaired in cortical visual impairment or in neuro-atypical individuals.
The most basic question of face perception is -- where do we look when we look at a face? The left of the slide "Face perception - an important skill to develop" shows a cartoon face with eye movement traces overlaid. Children around one month of age will fixate on the outer contours of the face and hair. From two months of age through adulthood, there is a stereotypical eye movement pattern when a face is presented: an observer will look at the eyes (or one eye), then the mouth/nose region, and finally the outer contours. This finding replicates across cultures and countless experiments.
What aspects of a face are important? Using behavioral and electrophysiological techniques, researchers have asked many questions -- such as what it is about a face that generates this typical pattern of eye movements, and what allows us to identify people. Is it the individual features? The pattern of contrast? The outer contour? These are all interesting questions, but we will focus on two experiments. The first addresses whether there is an innate preference for face-like stimuli. The second addresses which aspects of face stimuli drive preferential looking and at what ages they emerge.
Do humans have innate face preferences? Researchers have measured newborn face preferences in the first hours of life, but can we learn whether preferences for faces exist before the onset of visual experience?
One paper used simplified three-dot face-like stimuli and inverted three-dot stimuli. These stimuli can generate preferential looking in young infants. To test whether the human fetus shows a face preference, the researchers presented three-dot stimuli to the fetus using high-power coherent light. The light has sufficient power to travel through the mother's tissue and reach the fetus; the tissue and amniotic fluid blur the point light sources, as shown in the slide. The researchers hypothesized that if human fetuses prefer face-like stimuli, they will turn toward the three-dot stimuli arranged in a face-like way. Fetal turning behavior was observed using ultrasound.
The mean number of turns fetuses made toward and away from the face-like stimuli is shown in the graph (red bars). Fetuses turned toward face-like stimuli more frequently than away from them, and showed no difference in how often they turned toward or away from the non-face-like stimuli. Thus, the researchers concluded that some innate face preference must be present before the onset of visual experience. These mechanisms may be crude, and their function may be to help the newborn orient toward faces in the first few hours or days of life.
Face perception develops and changes in the first twelve weeks of life. A study by Mondloch et al. (1999) used several types of stimuli to map out, in a single study, how various preferences emerge in development. The methods are summarized in the slide "Newborn face perception". The first stimulus type is the three-dot face-like stimulus, shown in (a). To address the change seen in eye movements, which progress from the external outline of the face to the internal features, the faces in (b) have identical contrast in the outline and the internal features. Stimulus (c) addresses a bias that shows up in preferential looking experiments (from studies done in North America and Europe) in which children preferentially look at faces over photo-negative versions of the same face. Lastly, the researchers measured grating-phase preferential looking and Teller acuity, which control for the infant's overall visual development.
In the slide "Newborn face perception - Results" we can see a few key findings of this longitudinal study and what they reveal about the development of face perception. Looking at the columns for the three-dot stimuli, we see that newborns prefer the face-like (config) three-dot stimulus. This preference disappears quickly; neither six- nor twelve-week-old babies prefer either stimulus. This result completes the story we began with the fetus experiment: while three-dot faces may be sufficient to drive the face system early in life, after a few weeks of visual experience the visual system has developed enough that this configuration (presumably) no longer drives an innate face mechanism. Perhaps that innate mechanism disappears, or perhaps the three dots no longer provide input rich enough to drive the face recognition filter (recall the filter model) and promote its development.
The "phase and amplitude reversal" stimuli test whether an infant prefers the outline or the internal features of the face. Consistent with the eye-movement results we saw, the preference for the high-contrast face outline switches to the internal features within the first month of life and remains that way. Developmentally, this switch represents a change to looking where the best input is for developing a face perception mechanism, again as we saw in the filter framework.
With the contrast-reversed faces, there is no preference for positive over photo-negative faces until the baby has at least twelve weeks of visual experience. Developmentally, we can infer that by then the face perception mechanism has learned what faces tend to look like, and this drives preferences once enough information has been collected from the environment.
Visual Crowding
It is difficult to see an object in a cluttered environment, and the closer the clutter is to the object of interest, the more difficult the object is to see. This phenomenon is known as visual crowding: the clutter (or visual noise) crowds out the stimulus. The slide "Crowding - what is it?" shows a demo of how crowding works in the peripheral visual field. Fixate on the grey dot in the center and notice how much more difficult it is to see the crowded A on the left than the one on the right. Through visual development and experience, the visual system must learn to separate the stimulus of interest from the background clutter. Crowding is common in natural visual experience, affects adults in the periphery, and is worse for individuals with amblyopia. We'll look at two experiments; both address the developmental time course of the ability to separate visual signals from background visual clutter.
A simple stimulus used to investigate crowding is shown. Researchers use the tumbling E and surround it with what are referred to as flankers -- in our example, sets of three horizontal or vertical lines. The subject reports the orientation of the tumbling E. Experimenters manipulate crowding by altering the spacing between the flankers and the target; the spacing needed to reduce performance is the measure of crowding. The tumbling E without flankers serves as a control.
"Crowding methods" shows the method used to measure crowding in young children. In "Crowding results" we see data for 5-, 8-, and 11-year-old children compared with adults. The Y-axis is expressed in "stroke-widths" of the stripes of the flankers, i.e., multiples of the size of the tumbling E. Surprisingly, children from 5 through 11 years old need approximately double the separation between the tumbling E and the flankers relative to 20-year-olds. The developing visual system takes a long time to learn to segregate signal from background noise robustly. What visual experience might be driving this development?
Standard letter charts show crowding. The slide shows four examples of letter charts in which the spacing between the letters is manipulated. The blue numbers represent the reduction in spacing among the letters, where 100 is the standard spacing and 50 is half the usual spacing. To address what visual experience might promote the development needed to overcome crowding, researchers measured performance on these charts as children learned to read.
VA in logMAR was measured with all four charts in 1st, 3rd, and 5th graders. No differences were found among the age groups for the standard letter spacing. As the spacing was reduced, all age groups showed crowding, which is to be expected given that the task is more difficult the smaller the spacing. Developmentally, first graders, who have fewer years of reading experience, performed worse at 50% spacing than the older children, and the effect of years of reading experience grows the more crowded the letter chart. This simple manipulation of a chart can identify students with delayed reading skills; measuring crowding early in life could help students before they develop learning difficulties in school.
The window of crowding refers to the portion of the visual field over which an observer can separate signal from noise. Perhaps enriched visual experience -- for example, playing sports -- drives its development? Both of the experiments presented here emphasize the role of environmental experience. Or perhaps an individual needs to stop growing (i.e., finish puberty) before the developmental filter fine-tunes the ability to precisely segregate the visual signal from the background clutter?
Visual Integration
The flip side of visual segmentation and crowding is visual integration. As mentioned earlier in the course, the visual system first breaks the natural scene into local (small parts of the visual field) edges. Later in the visual stream, the visual system must link or integrate local edges into global objects or surfaces.
Vision scientists and visual development scientists like to use simple stimuli. One such stimulus is the Kanizsa square. For retro-gamer fans, this stimulus looks like four pacmen facing one another. What you ought to see in (A) in the "Visual integration" slide is a lighter square, bounded by "subjective" or "illusory" contours, covering four darker circles. The visual system needs to link the contours of the pacmen to achieve this percept. As we'll see, this linking depends on the spatial aspects of the stimuli and on the ability to integrate across spatial frequencies.
What is the task? Researchers manipulate the pacmen so that the square appears narrowed or widened horizontally. As panel B in the figure shows, the four pacmen are rotated by some angle "alpha". The minimum angle of rotation needed to reach a threshold level of performance (e.g., 75% correct) for discriminating narrowed from widened stimuli is the measure of interest. Panels D and E show relatively small angles of rotation.
One additional aspect needs to be considered before moving on to how this stimulus reveals the development of human contour/edge integration -- the support ratio. The support ratio is calculated from two quantities: the distance between any two pacmen and the radius of the pacmen. In a "high support" Kanizsa square, the pacmen are wide relative to the gaps between them. Wider pacmen and smaller gaps provide more visual support for the perception of a white square overlaid on four black circles, which is the Kanizsa square illusion. Conversely, narrow pacmen with wide gaps provide little support.
In short, from a scientific perspective, the support ratio allows researchers to describe their Kanizsa squares with a single number (support ratio = r/l, so that wider pacmen and smaller gaps yield a higher value) instead of separately reporting the radius of the pacmen (r) and the width of the gap (l).
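As a sketch of the bookkeeping (published studies differ in the exact formula; the version below simply follows the convention that higher numbers mean more support):

```python
def support_ratio(pacman_radius, gap_width):
    """One-number description of a Kanizsa square: the pacman radius r
    (physically drawn contour) relative to the gap l between pacmen.
    Higher values = wider pacmen and smaller gaps = more support.
    Note: this r/l form is an illustration; papers vary in the exact
    definition (e.g., drawn contour over total contour length)."""
    return pacman_radius / gap_width

high = support_ratio(4.0, 1.0)  # wide pacmen, small gaps
low = support_ratio(1.0, 4.0)   # narrow pacmen, wide gaps
print(high > low)  # prints True
```

Whatever the exact formula, the single number lets researchers plot thresholds against "support" directly instead of against two stimulus dimensions.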
Why is this important for integration? Looking at the stimuli in the "Support ratio…" slide, rows a and c, it should be easier to see the contour the larger the support ratio -- that is, if the pacmen are bigger, the contour is easier to see. High support ratios produce better discrimination and smaller angle-of-rotation thresholds. The final stimulus, D, has the contour drawn in and serves as a control.
The slide "Contour integration: results" shows how rotation-angle thresholds change with age for the control stimuli and for the low and high support ratio stimuli. A few features of these data lead us to findings about how the ability to link contours develops. From ages 6 to 12, contour integration improves for all stimuli. Older children (9 and 12 years of age) perform better with high support ratio stimuli than with low support ratio stimuli. Lastly, 6-year-olds are poor at this task and show no difference between high and low support ratio stimuli.
Developmentally, this means that between 6 and 9 years of age, edge-integration processing improves and becomes sensitive to the support ratio of the stimuli.
How does this relate to integrating spatial frequencies -- combining information from the early systems that segregate the visual world and then reassemble it? There is a triangular version of the square we just saw; this classic illusion is shown on the left. If I spatial-frequency filter the stimulus, a low spatial frequency contour emerges. This contour in the low spatial frequencies must be linked to the higher spatial frequency edges of the pacmen. The data from the experiment show that this is difficult for six-year-olds; their visual development requires more experience before they can link contours efficiently.
Integrating information from the two eyes
Resolution acuity depends on whether it is measured binocularly or monocularly. During development, a child’s visual acuity, as measured by Teller cards, will improve. The slide “Binocular summation of grating acuity” shows that monocular acuity for the better eye and binocular acuity are the same (with some variability marked by the dashed lines) until approximately 5 to 6 months of age.
Is a measure of better binocular acuity evidence that the child has begun to coordinate vision between the two eyes? Perhaps, but a quirk of probability leaves us with some doubt. Take the example of flipping a coin. If you flip a fair coin once, the chance of seeing a head is one-half. What if you flip it twice -- what is the probability of seeing at least one head? It works out to 0.75 (exercise: try to work out for yourself why this is true). The mere chance that a child uses both eyes, instead of only their better eye, would predict better binocular acuity without any combining of information from the two eyes.
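The coin-flip logic generalizes to what is often called probability summation: two eyes, each detecting independently with probability p, yield a "binocular" detection rate of 1 − (1 − p)² with no neural combination at all. A minimal sketch (the 0.6 detection rate is an illustrative number, not data from the study):

```python
def p_at_least_one(p_single, n):
    """Probability of at least one success in n independent attempts:
    1 minus the probability that all n attempts fail."""
    return 1.0 - (1.0 - p_single) ** n

# Two fair coin flips: chance of at least one head.
print(p_at_least_one(0.5, 2))  # 0.75
# Two eyes, each detecting a faint grating 60% of the time, show an
# apparent binocular improvement purely by chance:
print(p_at_least_one(0.6, 2))  # 0.84
```

This is why better binocular acuity alone cannot prove that signals from the two eyes are being neurally combined.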
That said, we can use data from developmental research where combining information from the two eyes is necessary (e.g., stereo vision). If the developmental trajectory of stereo vision matches the estimate from the binocular acuity experiment, we can infer that the child is beginning to combine visual information from the two eyes. The following section will help us solve this puzzle.
--- END COMPLEX SPATIAL VISUAL FUNCTION SECTION ---
--- MOTION, DEPTH, AND STEREO SECTION ---
Measuring motion vision behaviorally
To isolate the development of motion perception from spatial vision, researchers have used random-dot stimuli. As the video in "Motion coherence - Random Dots" shows, random-dot stimuli contain dots that move in a coherent pattern and dots that move randomly on the screen. The observer must reliably distinguish the side of the screen where the dots move coherently from the random side. The percentage of coherently moving dots required to see the motion (the coherence threshold) is the measure researchers use.
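A frame of such a stimulus is easy to sketch: assign a fixed direction to a proportion of the dots and random directions to the rest. This is a toy illustration; function and parameter names are my own, and real displays also control dot lifetime, speed, and density.

```python
import random

def dot_directions(n_dots, coherence, signal_dir_deg=0.0, seed=0):
    """Directions (degrees) for one frame of a random-dot motion stimulus:
    a proportion `coherence` of dots move in the signal direction, and the
    remainder move in random directions."""
    rng = random.Random(seed)  # fixed seed for a reproducible example
    n_signal = round(n_dots * coherence)
    dirs = [signal_dir_deg] * n_signal
    dirs += [rng.uniform(0.0, 360.0) for _ in range(n_dots - n_signal)]
    rng.shuffle(dirs)
    return dirs

dirs = dot_directions(100, 0.2)  # 20% coherence: 20 signal dots out of 100
```

Lowering `coherence` toward an observer's threshold is how the experiments above estimate the minimum motion signal a developing visual system can extract.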
Unfortunately, behavioral measures of motion development do not resolve our puzzle. Motion coherence and speed discrimination both have very long developmental trajectories as measured via behavioral tasks. The data show that our ability to extract motion information from the world takes a long time to develop.
Perhaps random-dot stimuli are simply too complex for children? Recall our discussion of crowding, which also had a long developmental time course; the random-dot task's trajectory is very similar. As Atkinson & Braddick showed (see "Atkinson & Braddick - Form and Motion"), using a clever stimulus made of line segments, the development of form perception with random stimuli closely parallels that of motion perception.
As we learned previously, Visual Development experimenters have multiple methods to measure vision. They can use electrical signals (EEG/VEP) or reflexes. Perhaps these methods could address the puzzle of binocular integration?
Depth and Stereo
The slide "Perception of Depth: the visual cliff -- Gibson & Walk (1957)" shows a classic experiment that takes advantage of the reflexive behavior of the crawling infant: infants will avoid danger if they can perceive it. The apparatus consisted of a table with a patterned side and a glass side; underneath the glass is a pattern consistent with a sharp drop. Infants who can use depth cues will not put themselves in danger, while infants who cannot perceive depth will crawl across the glass. The video clip shows an infant who passes the "visual cliff test". Generally, babies with more crawling experience, or those around six months of age, will refuse to cross the cliff.
Is there evidence that can help us determine whether the improvement in binocular acuity at six months reflects integrating spatial information from the two eyes? Béla Julesz created random-dot stereograms. If you are at the correct distance from the display, you can converge or diverge your eyes and fuse the two fields of random dots into a single image. If you try this demo, you ought to see two surfaces in depth: a square and a square with a hole removed.
Random-dot stereograms have been combined with red/green stereo glasses and visual evoked potentials (VEP) to measure an infant's physiological response to a random-dot stereogram. Recall our discussion of how VEP is measured -- electrical signals at the scalp. The slide "VEP measures the onset of binocular function" shows the ages at which a reliable VEP signal emerges for three babies (the different hash marks on the bars). The measure was the age at which a reliable signal (i.e., one distinguishable from the noisy background brain activity) could be detected; the median is 12 weeks (3 months). Recall that, unlike behavioral measures, VEP measures physiological function and not behavior, which means that VEP might underestimate the age at which stereo information becomes available for behavior.
Preferential looking can be used to measure stereo with a clever stimulus configuration. Two screens show gratings to each of the two eyes (see the Figure in the slide). When a horizontal and a vertical grating are shown together, one of two percepts can result, either rivalry or fusion. Using the looking techniques we discussed earlier, we can measure whether the infant looks longer at a stimulus that creates rivalry or fuses into a grid (fusion).
The slide “Early stereo vision” shows preferential looking data from nine different children (one in each graph) at a range of ages. Let’s consider each of the curves. If the curves are low, as is typical before 12 weeks of age, the child prefers the control stimulus versus the binocular stimulus. At approximately 12 weeks, allowing for individual differences among the children, the children switch from preferring rivalry to preferring a stimulus that shows fusion.
One way to think about the experiment shown in the slide "Early stereo vision" is that the “right-screen” column in the diagram is like the “blank side” of a Teller acuity card. I’ll refer to this stimulus as the “control stimulus.” Let’s call the left side of the screen the “rivalry or fusion” stimulus (aka binocular stimulus, but that's a confusing term to use because both stimuli are binocular).
Depending on the infant's age and whether binocularity has emerged, the “rivalry or fusion” stimulus could produce either a rivalrous or a fused percept. We can only measure infant preferences, not their perception. The data in [slide 109] show a preference toward the “control” stimulus versus the “rivalry or fusion” stimulus early in life (the “frequency of preference” measure). The interpretation (shown in the diagram) comes with the assumption that a rivalrous stimulus is aversive (or anti-preferred) relative to the “control”. Once binocularity develops, the “rivalry or fusion” stimulus becomes preferred.
The slide “Stereo and fusion” shows fusion and stereopsis tasks from two different labs plotted as the frequency of infants that prefer fused stimuli or generate a stereoscopic percept. The red dashed line shows the age at which most babies show preference in both the fusion and stereopsis tasks. We see that the estimate is approximately twelve weeks or three months.
The graph in the slide “Fusion, stereo, and Motion” has several informative features. Notice that the red dashed line marks the age at which a reliable percentage of babies prefer fusion over rivalry, respond to the stereo correlogram (as measured by VEP), and respond to a moving stimulus (again measured with VEP). Recall that in random-dot motion tasks, we saw a long time course of development that spanned years.
These data present us with another puzzle -- does the VEP merely show that the signal for motion is present in the brain but cannot yet be used to generate robust percepts at this stage in the developmental time course? The answer is probably yes: the visual world, especially for infants, is dominated by slow motion (e.g., think crawling). At this stage, they have not acquired the robust visual input needed to form the filters for using motion to distinguish and react to (or prefer) more complex motion stimuli.
Ocular dominance columns - Introduction
We will discuss ocular dominance columns in the next section, as researchers have studied their development extensively. On the right of the “Ocular dominance columns?” slide is a diagram produced from optical imaging data that ought to be familiar to you (Exercise questions: What do the colors mean? What is represented by R versus L? What are the blank ovals called in the diagram?)
Does the developmental trajectory and formation of ocular dominance columns in the primary visual cortex explain the consistency in the estimates of development that we saw in the previous set of studies? Translating animal research suggests the developmental milestones we have noted (3 - 4 months) are when one would expect facilitation of the development of the ocular dominance columns, given the change in the infant’s visual world from the development of motor skills (e.g., the ability to self-locomote).
--- END MOTION, DEPTH, AND STEREO SECTION ---
--- THE DEVELOPMENT OF VISUAL FIELDS SECTION ---
The extent of the visual field develops as the child ages. As we will see, the extent of the visual field and its development depend on the stimulus used to measure infant peripheral vision. The different estimates of the extent of the visual field obtained with different stimuli reveal how peripheral processing develops with experience.
The slide "Visual Fields" shows a bar graph. The dependent variable is the extent of the visual field measured for children as a percentage of an adult visual field. The three different stimuli used with the children were a static white on white stimulus (similar to Humphrey's visual field), a hybrid stimulus where the stimulus moved from the periphery to the central visual field, and a White Sphere Kinetic Perimeter.
Researchers analyzed the data from the static perimeter in two ways. They computed either the visual field extent at which 50% of the infants reliably detected the stimuli (in either the nasal or temporal visual field), or the mean of each individual infant's threshold for detecting the spot of light.
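The 50% criterion is just a point read off the detection data. A minimal sketch of how one might interpolate that point from group detection proportions follows; the eccentricities and proportions are made-up illustrative numbers, not the study's data:

```python
def field_extent_50(eccentricities, prop_detected):
    """Linearly interpolate the eccentricity (degrees) at which the
    proportion of infants detecting the peripheral target falls to 0.5.
    Assumes prop_detected decreases with eccentricity."""
    pairs = list(zip(eccentricities, prop_detected))
    for (e0, p0), (e1, p1) in zip(pairs, pairs[1:]):
        if p0 >= 0.5 >= p1:  # 50% point bracketed by these two samples
            if p0 == p1:
                return e0
            return e0 + (p0 - 0.5) * (e1 - e0) / (p0 - p1)
    return None  # 50% point not bracketed by the data
```

For example, if 70% of infants detect a target at 20 degrees and 30% at 30 degrees, the interpolated field extent is 25 degrees.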
If we only knew the static and hybrid data, we would conclude that 3.5-month-old infants cannot use peripheral information whereas 7- and 9-month-olds can. However, perhaps these stimuli are poor inputs to the visual developmental filter used by younger children. The hybrid stimulus (with motion) yields a larger visual field, but one still markedly reduced compared to the 7- and 9-month-olds.
The White Sphere Kinetic Perimeter utilizes bright, high-contrast, flickering lights. On these tests, the 3.5-month-old children show better performance. The difference from the kinetic stimuli shows that the static stimulus is an example of a poor visual input that cannot drive the visual filter of young children.
When we look at the data for “toddlers” (roughly defined as children 11 to 30 months of age), we see a marked improvement in their visual field extents. The measures indicate that they have approximately 80% of the visual field of an adult. These data provide another example of visual development driven by the changing experience as the child moves and interacts with the environment.
The two slides on “Visual Fields and Flickering Stimuli” reveal a difference between the visual fields of adults and children when measured with a flickering stimulus. First, notice that the visual field of adults (black squares) does not depend on whether the target in the periphery flickers. The measured visual field of children is dependent on flicker.
What does this mean for development? Peripheral vision is good at detecting transients, that is, flashes of light in the periphery. The limited ability of children to detect flicker slower and faster than 10 Hz (cycles per second) indicates that their visual filters develop first for flicker around 10 Hz. Physiologically, the flicker dependence could be produced by any temporal mechanism in the visual pathway that responds to changes over time.
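One way to picture such a temporal mechanism is a band-pass filter: a difference of a fast and a slow low-pass stage responds best at intermediate flicker rates. This is a toy sketch, not a model from the course; the time constants are illustrative assumptions chosen so the response peaks in roughly the 10 Hz range:

```python
import math

def bandpass_response(f_hz, tau_fast=0.008, tau_slow=0.05):
    """Amplitude response of a toy band-pass temporal filter built as
    the difference of two exponential low-pass stages.  tau_fast and
    tau_slow (seconds) are illustrative assumptions, not measured
    values from infant vision."""
    w = 2 * math.pi * f_hz                 # angular frequency (rad/s)
    h_fast = 1 / (1 + 1j * w * tau_fast)   # fast low-pass stage
    h_slow = 1 / (1 + 1j * w * tau_slow)   # slow low-pass stage
    return abs(h_fast - h_slow)            # band-pass = difference
```

With these assumed constants the filter responds more strongly at 10 Hz than at either 1 Hz or 40 Hz, mirroring the pattern in the children's data.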
Only in the last five years have researchers attempted to measure the visual fields of infants. While this is technically difficult to accomplish, the benefit comes from translating an easy test into clinical screening for cerebral visual impairment or developmental delays. The infant perimeter uses a camera and algorithms to measure infant peripheral vision. The device has undergone some validation but is a long way from a reliable clinical device.
--- END THE DEVELOPMENT OF VISUAL FIELDS ---
--- EYE MOVEMENT SECTION ---
Not all animals move their eyes. For example, the owl is an example of a vertebrate that cannot move its eyes. For humans, eye movements enable the fixation of the visual scene on the fovea.
Eye movements fall into one of three categories. Fixations hold the eye steady on an object of interest. Saccades are ballistic in that they are quick movements from one point in visual space to another. Smooth pursuit eye movements track objects moving through the field of view.
We will discuss each of these eye movement categories in turn, but this slide lists a few key points about each.
The slide “Fixational eye movements” shows a plot of infant (top, 96 days) and adult (bottom) fixation stability—the graphs plot vertical and horizontal position in degrees of visual angle. Note the greater spread of points in the infant plot versus the adult plot. This result is typical: infant fixation on a target is less accurate and more variable. Note the circles that enclose the points (which represent individual fixations) in the infant graph. The radius of this circle is a standard measure of the variability of eye movements.
To learn about the developmental time course of fixational stability, researchers measured the mean radial distance of fixations for 4- through 15-year-olds. The Y-axis of the plot shows the radial distance of fixation variability. The development of fixational stability has a long time course: fixation becomes less variable from 4 to 15 years of age. Consider the implication of this for measuring vision. If the eyes are less stable, fixational instability will blur objects on the retina. Fixational stability is another example where the development of one visual ability can affect another. Recall that the development of high spatial frequency contrast sensitivity required a long time course. If fixations are unstable, the dark and light bars of a low-contrast grating will blur and become invisible.
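The mean radial distance statistic mentioned above is simple to compute from eye-position samples: find the centroid of the fixation points and average their distances from it. A minimal sketch (the coordinate data here are hypothetical, not from the study):

```python
import math

def mean_radial_distance(fixations):
    """Mean distance (in degrees) of fixation samples from their
    centroid -- a standard summary of fixational stability.
    Larger values = less stable fixation.  `fixations` is a list of
    (x, y) eye positions in degrees of visual angle."""
    xs = [x for x, _ in fixations]
    ys = [y for _, y in fixations]
    cx, cy = sum(xs) / len(xs), sum(ys) / len(ys)  # centroid
    return sum(math.hypot(x - cx, y - cy) for x, y in fixations) / len(fixations)
```

On this measure, an infant's widely scattered fixation points would yield a larger value than an adult's tight cluster.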
The slide, “Saccades – Aslin & Salapatek (1975)” shows a classic experiment with one-month-old infants. On the right is a picture of the experimental apparatus. Experimenters place the infant in a comfortable position with its head secured. A mirror was used to present stimuli that induce eye movements. The experimenters used two types of stimulus presentation. For replacement trials, a fixation point was placed at the center of the screen; a second stimulus, the target, was then presented in the periphery and the fixation point was removed. For addition trials, the fixation point remained in the infant’s field of view when the target was presented.
How often do infants make an eye movement in replacement versus addition trials? The first panel of graphs in the “results” slide shows the probability of an infant eye movement for replacement (filled circles) and addition (open circles) trials as the distance between the fixation point and the target is increased. The four panels show horizontal (left-right), the two diagonals (right-up/left-down and left-up/right-down), and vertical (up-down) directions.
Replacement trials (filled circles) are more likely to produce eye movements. The data reflect the preference for looking at something versus nothing, as we saw with the preferential looking technique. Addition trials (open circles) are less likely to produce eye movements. Finally, note the X-axis, which shows that eye movements are less likely as the target stimulus is presented further out in the periphery. This is consistent with the reduced visual fields of infants that we saw in the visual fields section.
The take-away message is that infants can make saccadic eye movements as long as the stimuli are present in the visual field, and are more likely to move their eyes when no other stimuli are present in the visual field. That saccadic eye movements are present in one-month-old infants does not mean that infant saccades are fully developed. Neuro-typical adults can make saccades quickly and accurately throughout the visual field under a wide range of visual conditions. Recall our developmental filter model: infants have limited eye movement abilities, which improve as they acquire the necessary visual experience during development.
Another feature of saccades is that they take time to initiate. Research with adults shows that once an observer decides to move their eyes, the eye movement must be programmed and its trajectory (recall that saccades are ballistic) calculated before being initiated. The time to program a saccade is called saccadic latency. Saccadic latency is fastest for visually guided saccades (filled circles). Young children can make saccades in approximately 250 ms. By age 15, saccadic latency to visual targets is just over 200 ms for most observers. One type of specialty saccade relies on memory: memory-guided saccades take longer to program and initiate and have a long developmental time course because they rely on higher-level cognitive skills. Anti-saccades, which require response inhibition, are another special type of saccade that we will discuss because they are a useful diagnostic tool for many disorders, such as schizophrenia, traumatic brain injury, and cerebral visual impairment.
The animation in the slide “Anti-saccades (Response Inhibition)” shows an example of how anti-saccades are measured. First, a fixation point appears, then a target stimulus appears. The green dot shows the observer’s fixation. The observer must not make a saccade to this target; instead, once the target disappears, the observer must make a saccade in the opposite direction to complete the trial correctly.
The slide “Anti-Saccade Development - Results” shows how difficult this task is, even for adults. Notice that adults fail the anti-saccade task (black dots labeled AS) approximately 20% of the time. That is, they make an eye movement toward the target on one-fifth of the trials. It is difficult to inhibit saccades to targets in this task. Children find the AS task even more difficult and fail on nearly half the trials at age seven.
The anti-saccade task taps a fundamental visual and cognitive ability -- the ability to inhibit behavior. Inhibitory processes in the brain are vulnerable and take a long time to develop. Their long developmental time course is the result of being built on a large number of visual processes. As such, they are vulnerable and require robust visual experience to exhibit normal development.
Brain imaging, called functional magnetic resonance imaging, is used by neuro-ophthalmologists and researchers to determine what brain areas are activated when doing an anti-saccade task. First, notice the R/L in the top left; this shows where the right and left sides of the brain are located. The top row shows slices of the brain looking down at the top of the head. The second row shows slices viewed from the side. The third row shows slices from top to bottom looking from behind. The bottom row shows a zoomed-in slice similar to the top row.
Each column shows the activation (highest activation in yellow) in children, adolescents, and adults doing an anti-saccade task.
Notice how there is more activation as one looks at the brain scans from older individuals. The increased activation is a marker of the activity required to inhibit the saccade and program the anti-saccade. The labels are not very informative, but they are all visual areas involved in eye movements (Key: FEF = Frontal Eye Fields, PEF = Parietal Eye Field, Sup Coll = Superior Colliculus, Lat Cer = Lateral Cerebellum, DLPFC = dorso-lateral prefrontal cortex). Some of these regions are lower in the visual stream while others are further up the visual stream.
Eye-tracking with children continues to advance as video-based technology that uses image processing algorithms improves. Let us look at a fun video from the Tobii corporation, which develops eye-tracking systems for tasks from research to gaming. Eye-tracking systems are getting smaller, more accurate, and easier to use, increasing their utility as clinical instruments.
We have looked at fixational and saccadic eye movements and their development. We will wrap up our discussion of the development of eye movements with smooth-pursuit eye movements.
Smooth pursuit eye movements involve voluntary foveation of a moving target. Brain circuits distinct from those for saccades produce smooth-pursuit eye movements. Smooth pursuit behavior requires long-range and multi-system connections in the visual brain. Tracking objects in the environment has utility in development. For example, tracking caregivers, friends, or other moving objects allows the child to respond appropriately to their environment and avoid threats.
A classic finding that shows how immature smooth pursuit eye movements are early in development comes from placing babies in a snug cradle and observing how they track moving stimuli in a rotating drum.
Unlike saccades, one-month-old infants do not make smooth-pursuit eye movements. Instead, they make many short saccades to track moving objects, as shown in the left graph on the slide “Smooth Pursuit Eye Movements”. This pattern of behavior is remarkably different from adults, who smoothly move their eyes to track the target. Clinically, the absence of smooth pursuit eye movements can indicate several conditions (schizophrenia, traumatic brain injury, and more). It is easy to observe smooth pursuit abilities in a standard comprehensive eye exam, as no special equipment is necessary.
Experimentally, the ability to make smooth pursuit eye movements depends on the stimulus. If the stimulus moves either too far or too fast across the visual field, it is difficult for younger children to track. In the graph of the data from “Von Hofsten et al. (1997)”, the filled symbols show objects that move across 20 degrees of the visual field, while the open symbols are stimuli that move only 10 degrees. With the 10-degree stimuli, two-month-old infants can track objects, especially if they are moving slowly. Tracking limited by the ability to perceive motion (i.e., for fast-moving stimuli) or by stimuli that cover more of the visual field takes longer to develop -- 5 months or more.
The development of smooth-pursuit eye movements combines what we have learned about motion perception, visual fields, and saccades. These systems must provide robust visual input that allows for reliable stimulus tracking with eye movements.
--- END EYE MOVEMENT SECTION ---
--- GENERAL PRINCIPLES & OVERVIEW OF HUMAN VISUAL DEVELOPMENT ---
General trends of visual development can be established from the body of research on the developmental abilities we have covered.
One framework that uses the organization of the brain divides the systems into dorsal and ventral pathways. Dorsal and ventral pathways are also called “where” and “what” pathways. Notice that in blue for the dorsal pathway, several visual abilities are listed. These abilities map onto our discussions of the development of motion, vernier acuity, and eye movements. The ventral pathway relates to the spatial visual abilities we discussed, such as detailed spatial vision (contrast sensitivity, color, and resolution acuity). The image provides an easy-to-remember general framework. This framework allows us to make the general statement that the ventral visual pathway (e.g., basic spatial vision) supports the development of the dorsal pathway. While this is true to a first approximation, remember that the brain works as a coherent system, and the function and development of both pathways co-occur and influence each other.
The overview slide summarizes many of the abilities we have touched on (and a few we did not). As an exercise, it is worth considering how this chart reflects the visual development work we have talked about and how it shows that the visual system is interconnected during development into adulthood.
--- END OF LECTURE REVIEW & HUMAN DEVELOPMENT ---
Part 2: Anatomical and Functional Development of the Visual System
--- BASIC ANATOMY DEVELOPMENT ---
Thus far, we have discussed methods for measuring behavior and neural correlates of the development of vision. Our focus was on mechanisms influenced by experiential processing, such as attunement and induction. In this section of the course, we will focus on mechanisms that are primarily but not only maturational.
Let us begin with a rough sketch of the basics of the human brain and eyes. The human central nervous system begins with the formation of the neural tube. Central nervous system development begins similarly across vertebrates. The development of anatomical complexity and neural density of the human brain outpaces many animals, but the commonalities across species allow us to link animal developmental research to humans. The human brain increases in size and complexity by increasing the surface area of the cortex. The size of the skull constrains all that complexity. Through evolution, more complex brains address this limit by folding the cortical surface. The “valleys” in the folds are called sulci or fissures and the “hills” gyri. In humans, extensive folding can be observed within the first 20 weeks after conception.
Three anatomical markers are worth tracking, the Sylvian, Parieto-occipital, and Calcarine fissures.
The Sylvian fissure is a prominent brain structure, and we have seen it in the diagram that distinguishes “what” and “where” pathways in the brain. Located ventrally from the Sylvian fissure is the fusiform gyrus. The fusiform gyrus contains neural circuits that underlie the expert visual processing of objects, particularly faces.
The Parieto-occipital fissure is a handy marker for dividing the brain area chiefly concerned with visual processing (occipital lobe) and areas that are not chiefly involved in visual processing.
The Calcarine fissure divides the upper and lower hemifields of the visual world.
Note how early these three fissures emerge in development. In these slides, we are observing maturational processes that highlight the anatomical organization of the human brain. Given the selection pressures from evolution via natural selection, we can infer that this organization is tuned to the inputs it will receive at the onset of experience from the environment.
Focusing on the eye, we can observe a developmental trajectory that moves from a simple organization to a more complex one. We can also observe how folding and reorganization of tissue enables the anatomical organization to move from a simpler to a more complex state.
At 20 days, a structure called the optic stalk begins to extend from the neural tube, marking the beginning of eye development. Then, by 24 days, the optic stalk receives a signal from the epithelium to fold inward, beginning the folding that shapes the eye.
By 28 days, we can observe familiar ocular structures (RPE, retina, lens, cornea). Two types of tissue are involved: the neuro- and surface ectoderm. A simplified view of ocular anatomy development can be had by noting the structures that emerge from these two embryonic tissues. Note that the different tissue categories end up as the back or front of the eye depending on where the cells were located within the optic stalk early in development.
Neural development follows a general pattern that is true not only for the development of the visual system but across the brain. This easy nine-step recipe is as follows.
1. Stem cells multiply rapidly.
2. Stem cells migrate away from where they are generated.
3. Stem cells begin to differentiate into neural stem cells and continue to divide.
4. Proto-neurons grow by sending out axons and dendrites to form connections.
5. Dendrites and axons themselves grow and extend forming connections.
6. Synapses and junctions form between axons and dendrites further shaping the neuron.
7. Synapses start to develop, sending and receiving signals from one another, even before the onset of visual experience.
8. Circuits are established based on the principle that “neurons that fire together, wire together,” but the process is accelerated with visual experience.
9. If circuits are incomplete or are not stimulated with visual experience, they are pruned back.
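Steps 8 and 9 above can be sketched as a toy Hebbian rule followed by activity-dependent pruning. Everything here -- the firing rates, learning rate, decay, and pruning threshold -- is an illustrative assumption, not a model presented in the course:

```python
def hebbian_prune(pre, post, steps=100, lr=0.1, prune_below=0.05):
    """Toy 'fire together, wire together' sketch.  `pre` and `post`
    are lists of firing rates (0..1) for each synapse's pre- and
    post-synaptic neuron.  Weights grow when activity coincides,
    passively decay otherwise, and weak synapses are pruned to 0."""
    w = [0.5] * len(pre)  # over-generated initial connections (step 8 setup)
    for _ in range(steps):
        for i in range(len(w)):
            w[i] += lr * pre[i] * post[i]  # Hebbian strengthening
            w[i] *= 0.95                   # slow passive decay
            w[i] = min(w[i], 1.0)          # cap the weight
    # step 9: prune synapses that never carried correlated activity
    return [wi if wi >= prune_below else 0.0 for wi in w]
```

A synapse whose pre- and post-synaptic neurons fire together ends up strong, while a silent synapse decays below threshold and is pruned, mirroring the overproduce-then-prune strategy described next.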
Evolution has developed a curious solution to the challenges presented by developing a flexible and efficient brain. As a general rule, the system over-generates neurons/axons/synapses and builds circuits overabundantly early on. The system then prunes back connections and circuits that are unused later in development. One hypothesis is that this solution optimizes metabolic energy expenditure and maximizes the brain's flexibility after the onset of visual experience.
The slide “Neural Migration” shows the layers of the primary visual cortex (V1). A horizontal line represents each neuron. The age of each of the cells was determined and is listed along the top. Note how in the inner layers (e.g., layer VI), neurons appear relatively early in development (E45, as estimated by the researchers’ method for dating the cells). Neural migration of stem cells proceeds from the inner to outer layers (e.g., layers I and II), and finally, we find the youngest cells differentiating late in development (E105).
How do neurons move during migration? Pseudopods (or growth cones) accomplish the task of migration by climbing along guiding networks (glial fibers). The growth and migration of neural precursor cells proceed in a structured way and facilitate the columnar organization of the visual cortex (recall: the optical imaging diagram). Listen to Rakic as he describes his research on the role of neural migration in development in two video clips. The final piece of the picture of how neurons migrate comes from nuclear rotation.
Once a neuron migrates, how does it find and connect to neighboring neurons to form the visual circuits that process visual input? The exploration process is structured but contains an element of randomness, as shown by the distributions in the “Making Connections” slide panels A-C.
If a neuron explores only a narrow angle of space, it could miss connections and fail to connect; the cartoon neuron in panel A shows this outcome. However, as shown in panel C, exploring in all directions is metabolically expensive and would slow development. Panel B shows the trade-off: neurons explore a limited range, minimizing the territory explored but sometimes at the expense of missed connections.
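The trade-off in panels A-C can be illustrated with a toy Monte Carlo simulation: a growth cone searches a cone of half-angle theta around a random heading for targets scattered in random directions. Wider search angles raise the chance of connecting but, in the real system, cost more metabolically. The target counts and angles are illustrative assumptions, not data from the slide:

```python
import random

def connection_prob(theta_deg, n_targets=3, trials=10000, seed=0):
    """Estimate the chance that a growth cone exploring a cone of
    half-angle theta (degrees) around a random heading reaches at
    least one of n_targets placed at uniformly random directions."""
    rng = random.Random(seed)  # seeded for reproducibility
    hits = 0
    for _ in range(trials):
        heading = rng.uniform(0, 360)
        targets = [rng.uniform(0, 360) for _ in range(n_targets)]
        # angular separation on a circle, folded into [0, 180]
        def sep(t):
            d = abs(t - heading) % 360
            return min(d, 360 - d)
        if any(sep(t) <= theta_deg for t in targets):
            hits += 1
    return hits / trials
```

Exploring everywhere (theta = 180) guarantees a connection, panel-C style, while a narrow panel-A cone frequently misses; intermediate angles trade a small miss rate for a much smaller search territory.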
Synaptic pruning
Once neurons are differentiated and connected, development switches strategies to editing the circuits formed in the cortex. In the slide “synaptic pruning” we can see two developmental trends. The first trend, shown by the open circles and the red curve, is that the volume of the primary visual cortex increases and then remains roughly constant from approximately four months of age (Q: Where have we seen this age before?) until an advanced age. The second trend is that synaptic density per unit volume increases until about one year and then declines until adulthood.
Not only does the graph in “synaptic pruning” reveal the strategy of making neurons and then pruning back synapses, but it provides a tantalizing clue behind what occurs for tasks that have a relatively short developmental time course (circuits need to be built) and those that have a long developmental time course (the circuits need to be pruned, tuned, and shaped).
Dr. Joan Stiles provides a handy summary in a video of pruning.
The “Pruning is essential” slide lays out key ideas in neural pruning. Note that, as Dr. Stiles states, there are cycles of overproduction and pruning. There is overproduction before neurons are connected and afterward. In normal development, pruning happens constantly. If cells fail to differentiate or migrate, they are pruned. If cells fail to connect, they are pruned. If a circuit fails to transmit useful information from the environment, it is pruned.
--- END BASIC ANATOMY DEVELOPMENT ---
--- DEVELOPMENT OF THE RETINA ---
The development of the retina is an example of the general developmental concepts we have seen thus far. We can trace the development of retinal circuits from migrating stem cells that are precursors to the photoreceptors and other cells in the retina. The slide “Migration in the retina” shows a detailed picture of how the retina follows the pattern we discussed in other areas of the central nervous system. Stem cells are generated in the inner layers and then migrate outward.
The slide “Differentiation in the human eye” shows a snapshot of a 42-day-old embryo. Previously we saw a classic anatomical drawing of the development of the eye at 28 days. Notice that all the structures we noted at 28 days are more adult-like at 42 days via maturation, and the optic nerve is beginning to form.
Detailed studies of the retina have pinpointed when cells differentiate in the macaque monkey. Note that the timeline of macaque development is faster than human development: the gestation period of the macaque is approximately six months versus nine months in the human. Comparing humans and macaques is our first example of a cross-species comparison of milestone ages in visual development. A handy yet imperfect translation is that macaque development is a factor of four faster than human development. Macaque lifespans are approximately 20 years. In the final section of Part 2, we will see how the critical period compares across species.
Keep in mind the final layered organization of the retina and our following discussion of the ordering of the development of retinal cells. (Thought question: How does the development of the retinal layers differ from what we saw in the slide that showed V1 cell “age” and migration? Does the retina show the same timeline of the development of the inner and outer layers?)
Not only has the timeline of retinal cell differentiation been mapped, but so has the role of lateral inhibition and of signals from differentiated cells on retinal progenitor cells, as shown in the slide “guided cell differentiation in the retina”. Consider the retina as an array of receptors, information-processing cells, and support cells. The eye must produce a clear and undistorted retinal image, and the optimal retinal organization to prevent distortions from the retinal layout is an evenly spaced grid. Biology is rarely that exacting, but using signals that inhibit a cell’s neighbors from becoming the same type of cell enables the retina to achieve regular spacing.
The slide provides an example of the role that genetics plays in the structure of amacrine cell spacing. Researchers used a mouse model in which the “wild-type” has no genetic mutations and a mutant mouse whose amacrine cell differentiation mechanisms are altered. The wild-type amacrine cell spacing (shown by the white dots) is much more regular than the mutant's, whose spacing between amacrine cells is more random or disordered.
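Mosaic regularity of this kind is commonly summarized with a nearest-neighbor regularity index: the mean nearest-neighbor distance divided by its standard deviation, where higher values mean more grid-like spacing. The slide's exact analysis is not specified, so the sketch below is only an assumed illustration of the idea:

```python
import math

def regularity_index(points):
    """Nearest-neighbor regularity index for a cell mosaic given as
    (x, y) positions: mean NN distance / SD of NN distances.
    Regular mosaics score high; a random scatter scores low."""
    nn = []
    for i, (x, y) in enumerate(points):
        d = min(math.hypot(x - px, y - py)
                for j, (px, py) in enumerate(points) if j != i)
        nn.append(d)  # distance to this cell's nearest neighbor
    mean = sum(nn) / len(nn)
    var = sum((d - mean) ** 2 for d in nn) / len(nn)
    return mean / math.sqrt(var) if var > 0 else float("inf")
```

A perfectly even grid (all nearest-neighbor distances identical) scores infinitely high, while a disordered mutant-like scatter with very uneven spacing scores low.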
The development of the macula begins before birth and the onset of visual input; however, its development takes a qualitative leap forward in the first months of life. Around one year of life, there is another phase of rapid development. Recall from the human behavior slide “Time Course - Teller Acuity Cards” that at approximately one year, the development of resolution acuity slows from a rapid improvement to a more gradual improvement as the child ages.
Once cells are differentiated, circuits begin to form. The slide “Developing circuits in the retina - after birth” shows a cartoon of retinal circuits at 3 and 5 months of age. There are two differences between the age groups: first, connections form, and then another round of differentiation or specialization occurs.
We can observe retinal specialization in retinal ganglion cells (RGC), which come in two types mono- and bi-stratified. Mono-stratified RGCs are more specialized than bi-stratified RGCs. Visual experience drives the specialization of cells from bi-stratified to mono-stratified cells.
To investigate the role of visual experience on development, researchers have reared animals in the dark, thus disassociating maturation after birth from the role of visual experience. The paradigm is simple -- turn off the lights.
However, before we get into what dark rearing teaches us about RGC specialization, a few words of caution are in order, as we will revisit dark rearing experiments several times in the course. First, the dissociation of maturation and experience is not the only effect on the animal; being born into a world of darkness is a highly abnormal visual environment. Second, dark rearing has systemic effects: it disrupts circadian rhythms (the sleep/wake cycle), and cells in the eye, among other systems, contain circadian "clocks". Because circadian rhythms are disrupted, melatonin production changes, and stress hormone (e.g., cortisol) levels increase. Lastly, if the animal is reared in the dark for long enough, the cortex thins (Question: what neural developmental process might be activated to cause cortical thinning in dark rearing?)
With those three points of caution out of the way, look at the slide, "Dark rearing blocks the maturation of ganglion cells", and the data it presents. First, look at the 10-day-old animals and the percentages of bi-stratified (white bars) and mono-stratified (black bars) cells. These animals have roughly equal numbers of unspecialized (bi-stratified) and specialized (mono-stratified) RGCs. The lack of RGC specialization indicates that they are at an early developmental stage.
Now compare the percentages of mono- and bi-stratified cells for the 30-day-old animals reared in the light (P30L) and the dark (P30D). The P30L animals show about ¾ of their cells having become specialized mono-stratified RGCs, but the P30D animals remain as immature as the P10 animals, with roughly equal proportions of mono- and bi-stratified cells. Clearly, early visual experience is critical for this animal, which happens to be the mouse.
"Retinal waves" are a phenomenon that informs us how the retina matures and prepares for the onset of visual experience. Retinal waves are spontaneous waves of activity in the retina that occur before visual experience begins. By characterizing retinal waves before visual experience, researchers have learned about how maturation prepares the visual system.
First, retinal waves produce spontaneous activity in the retina but only locally. A retinal wave will only activate a small portion of the retina. The activity produced by retinal waves is consistent with that produced by visual experience. Consider a retinal image. There are many regions of the retina that will be similarly active in response to a scene. That is, nearby points in the visual input are correlated and produce correlated retinal activation.
The wave does not consist of action potentials (neural firing) but the slower, more diffuse mechanism mediated by calcium ions. The long time course of retinal waves is shown in the top row of the slide “Retinal wave recording” where you can see snapshots of a single wave that lasts for over four seconds. The bottom row of the slide shows several waves plotted in false color on the images; they show the local characteristics of retinal waves.
A second important point about retinal waves -- the activity they generate travels up the visual pathway and can be measured in the visual cortex. The retinotopic map of the primary visual cortex is revealed by retinal waves, which indicates that the visual system builds itself with the expectation of a visual world with a specific input. The visual system's anatomy and physiology assume that visual experience will take on specific statistical characteristics (e.g., spatial frequencies, orientations).
Late-stage retinal waves play a role in the development of RGC stratification. The diagram in the slide “Retinal waves and ON/OFF development” depicts two points. The first point is that mono-stratified ON and OFF center-surround RGCs take their inputs at different layers in the retina. The second point is that if retinal waves are blocked (here, in the cat), RGC stratification remains immature.
Taken together, dark rearing experiments and investigations into retinal waves demonstrate the role of both maturation and attunement required to produce robust ON/OFF pathways (Recall: an important visual distinction).
--- END DEVELOPMENT OF THE RETINA ---
--- OPTIC TRACT, SUBCORTICAL STRUCTURES, VISUAL CORTEX ---
After visual information is processed in the retina, it proceeds up the optic nerve. Depending on which portion of the retina the information comes from (nasal versus temporal), it may cross at the optic chiasm. How are the fibers of the optic nerve guided? In a process that is chiefly genetically controlled, the genes “slit 1” and “slit 2” must operate to guide and order the optic tract fibers. Two slides show this in cartoon form and with the actual data obtained via optical imaging.
Before moving on to the lateral geniculate nucleus (LGN), we will introduce a manipulation that we will encounter again during our myopia and emmetropization lecture -- optic nerve sectioning. Sectioning the optic nerve cuts off output from the eye to the rest of the visual pathway. If we section the optic nerve of the left eye in a normal cat, then the neural activity in layer A of the LGN will no longer be related to the visual input.
Albinism in cats (and humans) affects the orderly connection of the optic nerve fibers. Notice how the temporal fibers do not all remain ipsilateral at the chiasm but also connect contralaterally. Sectioning the optic nerve in albino cats has shown that the LGN develops abnormally in animals with this genotype.
The LGN is a structure that contains six layers, which carry information from the parvocellular (layers 3 - 6) and magnocellular (layers 1 - 2) pathways. Odd-numbered parvocellular layers (3 and 5) and magnocellular layer 2 carry information from the ipsilateral eye. The remaining layers carry information from the contralateral eye.
The LGN is a structure grouped within the thalamus. The slide “Development of the LGN” shows the embryonic development of the LGN in the monkey from day 68 to 144 (birth is ~180). Both migration and then differentiation shape the layers of the LGN by the time of birth.
In the cat, we can see how the output from the LGN develops. The slide “Development of the LGN in the Cat” shows how two neurons produce an overabundance of dendrites and axonal processes from embryonic day 40 through 53. By day 63 (of the cat’s 67 gestational days), the superfluous terminal processes have been pruned. Cats do not open their eyes at birth. (Question: What sort of input from the eye could be guiding the pruning of these neurons?)
LGN receptive fields have a center-surround organization. ON cells increase their firing/output when stimulated with light in the center and reduce their firing/output when stimulated off-center in the surround. OFF cells do the reverse.
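The ON/OFF center-surround behavior just described is often idealized as a difference of two Gaussians. This is a common modeling convention rather than anything taken from the slides, and the widths and gain below are arbitrary illustrative values.

```python
import math

def dog_weight(r, center_sigma=0.5, surround_sigma=1.5, surround_gain=0.4):
    """Difference-of-Gaussians receptive field weight at distance r from
    the field's center. A positive weight means light at that location
    excites an ON cell; a negative weight means light there suppresses it
    (the antagonistic surround). An OFF cell inverts the sign."""
    center = math.exp(-r**2 / (2 * center_sigma**2))
    surround = surround_gain * math.exp(-r**2 / (2 * surround_sigma**2))
    return center - surround

# Light at the center of the field excites the ON cell...
assert dog_weight(0.0) > 0
# ...while light falling only on the surround suppresses it.
assert dog_weight(2.0) < 0
```

The same sketch makes the ON/OFF relationship concrete: an OFF cell's response profile is simply the negation of the ON cell's.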
The slide “Center-Surround LGN development” shows recordings of light responses in the LGN of the ferret. The figures in the slide show the LGN responses to light shortly after eye-opening (which occurs four weeks after birth) for four cells. Two features are worth noting. First, notice how, as we look across each row, the center-surround nature of the cells becomes more apparent. Also, from figure 2c (postnatal days 47-50) to 2d, a cell can switch its preference from ON- to OFF-center. This switching indicates a notable instability in the cells’ signaling, and clearly, other developmental processes must be at work to stabilize the cells. The following few slides show how the development of inhibition proceeds and explain how cells are prevented from flexibly switching what they signal once adulthood is reached.
What developmental process “freezes” the LGN and prevents this flexibility from remaining throughout life? The brain contains as many, if not more, feedback connections as feedforward connections. Feedback connections can provide descending inhibitory information from higher-level cortical areas (recall: the development of inhibition in anti-saccades). The classic model of development tends to consider feedforward input chiefly. A feedforward model simplifies the study of the development of vision but has been shown to miss important aspects of development.
One such aspect is the stabilization of the receptive field structure in the LGN. The classic model only considers feedforward processes. A model incorporating feedback from layer VI of the primary visual cortex has recently been developed and tested in developing mice. After V1 develops, it sends connections to thalamic neurons (TRN and LIN) that inhibit the LGN center-surround relay neurons. The descending input shapes and stabilizes the LGN; this shaping is signified by the open and filled connections from the retina to the relay neuron.
One experimental test was to section the descending output from V1. After sectioning, the shaping of the LGN relay neurons was disrupted. Mice were also tested at two ages, day 10 and day 20. As in the early development of the ferret, mouse LGN relay cells were more unstable early in development at day 10, but by day 20 their receptive field properties had stabilized.
We return to the optical imaging map, which shows ocular dominance columns, orientation tuning, and cytochrome oxidase blobs. Cytochrome oxidase blobs are drawn in a regular pattern in the cartoon to correspond to the observed anatomy. Let us look at the effect dark-rearing has on the organization of the cytochrome oxidase blobs. It may be difficult to see in the raw data presented in the slide, “Cytochrome Oxidase Blobs - Dark Rearing” but the dark spots are more regular in the light-reared ferret.
Optical imaging maps are created by showing each subject several stimuli (i.e., via their eyes) and measuring the response of the visual cortex. In this case, stimuli of different orientations are presented to the anesthetized ferret’s eyes, and the response of the primary visual cortex is recorded. Each color in the bottom row of the slide, “Ferret dark rearing – optical imaging,” comes from the presentation of a different orientation. A black and white image is recorded, and then the grey levels are converted into different colors to create a map.
In our discussion of mono- and bi-stratified RGC, we learned that dark-rearing keeps the animal in an immature state. It lacks the visual experience to attune or induce visual development. Let us compare the optical imaging map from the normal ferret and the dark-reared ferret. We can see how the optical imaging map does not show the orientation selectivity of the ferret reared with normal environmental experience.
If dark-rearing keeps the system in an immature state, what does lid-suturing do? Lid-suturing provides disordered and abnormal experience (light leaks through the eyelids). We can see the effect of this abnormal experience on the optical imaging orientation map. The signal from the cortex is weaker, and perhaps the input cannot support the activation of circuits that are then pruned back. Also, there is very little selectivity for orientation. Abnormal experience experiments have added to our knowledge of visual development by mimicking disease processes experienced by humans (e.g., dense cataracts).
A classic way of measuring the development of functional orientation processing in the visual cortex is via electrophysiology. In this case, the electrodes are placed within the cortex, and the activity (spiking neurons) is measured. If the orientation selectivity index is high, then a neuron responds only to a narrow range of orientations. Note how the electrical activity and optical imaging measures do not precisely agree on the time course of orientation selectivity development. That said, the estimates of cat orientation selectivity development from the two measures are quite close, at approximately 40 days.
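As a concrete illustration, one common way to define an orientation selectivity index is from the responses at the preferred and orthogonal orientations. The study in the slide may use a different formula, and the firing rates below are invented for illustration.

```python
def orientation_selectivity_index(r_pref, r_orth):
    """(R_pref - R_orth) / (R_pref + R_orth): 0 means no orientation
    preference; values near 1 mean the neuron responds only near its
    preferred orientation."""
    return (r_pref - r_orth) / (r_pref + r_orth)

# A sharply tuned, adult-like neuron (responses in spikes/s, invented):
assert orientation_selectivity_index(100, 5) > 0.9
# An immature, broadly tuned neuron responds almost equally everywhere:
assert orientation_selectivity_index(60, 50) < 0.1
```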
The slide “Daughter Cells” returns us to the idea of differentiation but within the context of orientation tuning in the visual cortex. Researchers were able to label precursor (or mother) cells so that their descendants (or daughters) could be tracked through development. Two cells are clonally related if they share the same mother cell. The top row of graphs shows, in red, cell counts of clonally related cells. The x-axis shows the orientation that the cell prefers. In the bottom row are cells that are not related. Note how the clonally related cells cluster around the same preferred orientation—an example of how genetics before differentiation can influence development.
Recall how we saw that color vision was limited by immature spatial vision mechanisms (i.e., the spacing and sensitivity of the cones) in the human development section. The development of the outer segment of the cones was a key contributing factor in limiting color vision. The same approach can be applied to different stages along the visual pathway. The LGN is more sensitive, by a factor of two, to contrast early in development than V1. Can we use the development trajectories of contrast sensitivity as recorded with electrophysiology and the anatomical development of the retina to see how the development of each area limits behavioral contrast sensitivity?
The slide “Monkey Contrast Sensitivity” shows the CSF measured behaviorally from as early as five days of life to 100 days of life. As with humans, with age, monkeys become more sensitive overall, there is a shift in peak contrast sensitivity to higher spatial frequencies, and the highest resolvable frequency increases.
The slide showing the main result from the paper “Factors limiting the postnatal development of visual acuity in the monkey” demonstrates that the spatial limits imposed by the photoreceptors do develop. At least some of the improvement in monkey spatial vision comes from the development of the photoreceptors, but they are not the key limiting factor, as was the case with infant color vision.
Next, there is the development of contrast sensitivity and spatial resolution in the LGN. The LGN also develops, but if it were the limiting factor, one would predict higher resolution limits than are observed behaviorally. That said, the LGN is immature and imposes information limits early in development. Lastly, note how at approximately 100 days, the behavioral measures are roughly equal to the limits measured electrophysiologically at V1. In the big picture, the graph shows how limits are imposed and the relative contributions of the development of early spatial vision to monkey contrast sensitivity.
--- END OPTIC TRACT, SUBCORTICAL STRUCTURES, VISUAL CORTEX ---
--- CRITICAL PERIODS IN ANIMALS AND HUMANS ---
The critical period was introduced in the first section. The critical period is defined as the period during which an animal is most sensitive to an abnormal experience. We have seen examples of abnormal experiences earlier in this section (e.g., optic nerve sectioning, dark rearing, and lid-suture). Also, recall that the logic that links the critical period to developmental activity is that the system is most sensitive to abnormal experience when visual development occurs rapidly via experience.
What does the critical period entail from a neural perspective? The pre-critical period occurs before cells begin forming visual circuits in an experience-dependent way; this phase of development is characterized by maturation and differentiation. The term closure refers to the time when the critical period ebbs and visual experience does not modify the system. We saw an example of how the LGN develops inhibitory feedback connections to stabilize the receptive field organization of ON and OFF center-surround neurons.
Different visual abilities have different critical period windows, and different species have different critical period windows. If we take a visual ability with a critical period located at approximately 40 months in humans, we can compare the same ability across species.
One way of introducing abnormal visual experience to measure the critical period for visual acuity is monocular deprivation. The methods used are as follows. One eye is patched (monocularly deprived) for a time (the duration depends on the species being studied) while the other is provided with a normal visual experience. The slide, “Example: Visual Acuity and Monocular Deprivation (MD)” shows the data. The solid line shows the visual acuity for the non-deprived eye normalized across species. Visual acuity steadily improves for the non-deprived eye. The critical period is determined by comparing the acuity of the deprived eye to the non-deprived eye. Note that for acuity, there are large effects for humans and monkeys early in life.
One quirk of the plot -- cats and rats do not open their eyes at birth. This accounts for the lack of an MD effect early in their lives (note how the dashed line representing the critical period is low).
Patching ought to be familiar to you from the perspective of amblyopia treatment. Natural estimates of the critical period can emerge from the careful longitudinal observation of emerging amblyopia.
A table makes the cross-species comparison for acuity but adds a marker that is useful for considering when critical periods may close: puberty.
Because the critical period is defined experimentally, how deleterious an abnormal visual experience is depends on both the task that measures a visual ability (e.g., the ability to judge motion) and the method that interferes with visual experience (e.g., lenses worn to create blur). This complexity speaks to how difficult it is to isolate a single period during the lifespan when development is occurring.
A classic study looked at ocular dominance columns in the cat visual cortex. The cats received one week of monocular deprivation at either ten days, one month, or three months of age. The relative number of cells that encode input from the deprived/non-deprived eye is on the Y-axis. If this measure is 0.5, there is a balance between the two eyes; if it is close to 1, then the number of cells responding is weighted heavily to the non-deprived eye.
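The ocular dominance measure described above can be sketched as a simple proportion; the cell counts here are hypothetical, chosen only to illustrate the two extremes named in the text.

```python
def dominance_index(n_nondeprived, n_deprived):
    """Fraction of recorded cells driven by the non-deprived eye.
    0.5 -> balanced input from the two eyes;
    near 1 -> responses weighted heavily toward the non-deprived eye."""
    return n_nondeprived / (n_nondeprived + n_deprived)

# Balanced cortex, as in normal development:
assert dominance_index(50, 50) == 0.5
# After early monocular deprivation, most cells follow the open eye:
assert dominance_index(90, 10) == 0.9
```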
Note that in all conditions (normal development is plotted in filled circles as a control), there is a shift in ocular dominance toward the non-deprived eye. Deprivation at both ten days and one month has a substantial effect, but the recovery from MD is more rapid for the younger animals. There is very little shift to the non-deprived eye for the three-month-old cats, presumably because the system has undergone normal development, and the MD intervention has little effect on ocular dominance.
From a treatment perspective, if we want to nudge the development of a disease process through experience, it is most beneficial to intervene early, while the critical period window is open.
The slide, “Direction deprivation v. Ocular Deprivation” provides an example of two deprivation experiments. The manipulation of direction deprivation (squares) consists of showing the animal motion in only one direction. After this abnormal experience, cells quickly begin to respond only to the direction that the animal is experiencing (e.g., rightward motion). The abnormal experience is then switched from the first direction, in our example, rightward motion, to a second direction leftward. The age of reversal is marked on the plot with a horizontal line.
The second point of this graph shows that one can alter the development of both ocular dominance and motion selectivity. A later change to the abnormal visual experience (in this case, a switch of the eye of deprivation at seven weeks) takes longer to affect ocular dominance. We can see this in the length of time it takes for the open circles representing monocular deprivation to drop below 20%.
A variation of the monocular deprivation paradigm was used to look at the development of layers IVC alpha and IVC beta in the primary visual cortex, which receive input from the LGN. The layer IV sublayers receive their input from either the magnocellular (LGN layers 1-2) or parvocellular (LGN layers 3-6) pathways. The deprivation paradigm had one eye (represented in white) open for three weeks, after which the deprived and non-deprived eyes were switched. The ocular dominance of the two sublayers was then examined via a staining technique. The figure shows that the eye opened first dominates layer IVC alpha (white), while the second eye dominates layer IVC beta. Experience arriving via the magnocellular pathway of the LGN initially drives the development of the input layers of the primary visual cortex.
Recall that we looked at the sequence of infant development of scotopic light sensitivity, photopic increment sensitivity, and then contrast sensitivity. A clever experiment that compared these abilities of deprived and non-deprived eyes in primates shows the different critical periods for these abilities.
The X-axis is the age at which deprivation began. The Y-axis is the ratio of the sensitivity of the non-deprived eye to that of the deprived eye. This axis is a log scale; high values indicate that the non-deprived eye is more sensitive, while values near zero indicate no difference between the eyes. Scotopic sensitivity is decreased by monocular deprivation up to four months of age, after which there is little effect. Recall that scotopic light sensitivity did not show improvement in humans until the age range of 12 to 18 weeks. This age estimate is within our rule of 4 for the relative rate of human/monkey development.
Photopic increment sensitivity can be altered via MD up to six weeks of age, giving an estimate of the critical period to around 24 weeks for photopic increment sensitivity.
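The "rule of 4" conversion used in the last two paragraphs is just a multiplication. The sketch below treats it as a heuristic scale factor, with the six-week example taken from the text.

```python
# Heuristic: human visual development runs roughly 4x slower than monkey.
MONKEY_TO_HUMAN = 4

def human_age_weeks(monkey_age_weeks):
    """Convert a monkey developmental age to a rough human equivalent."""
    return monkey_age_weeks * MONKEY_TO_HUMAN

# Photopic increment sensitivity: alterable by MD up to ~6 weeks in the
# monkey, mapping onto roughly 24 weeks in the human infant.
assert human_age_weeks(6) == 24
```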
Finally, for spatial resolution and contrast sensitivity function, monocular deprivation has a substantial effect until eight to twelve months in the monkey. Question: How does this compare with human infant spatial resolution and CSF development?
Our discussion has assumed that dark-rearing is an abnormal experience that maintains the system in an immature state. If the system is immature and not developing, then we expect the critical period to shift later in the animal’s life. The data from dark-reared cats support this hypothesis. If we look at the effect dark-rearing has on how ocular dominance shifts (expressed as the percentage of ipsilaterally responding cells), we see that MD can affect ocular dominance in dark-reared cats at twelve and sixteen weeks, ages at which light-reared cats are not very susceptible to the MD manipulation.
In the previous slide, we saw that MD is not very effective at shifting ocular dominance in the cat at 12 weeks. However, peripheral development is delayed relative to foveal development. If, instead of measuring ocular dominance in the areas of the visual cortex that represent the fovea, we measure it in the areas that represent the periphery, we can still observe shifts in ocular dominance.
What about humans? We can learn not from monocular deprivation but from unfortunate traumatic injuries to the eye that lead to the formation of a cataract. The data are messy, as patient data can be, but note that there are substantial losses in distance visual acuity if the injury occurs before ten years of age. The losses in decimal acuity are larger the earlier the injury occurs.
Congenital bilateral cataracts lead to poor visual outcomes, and the acuity loss is not strongly associated with the time of deprivation in months. However, for unilateral cataracts treated early in life and treated with patching therapy, the outcomes are better (green dots).
We will end on amblyopia and how its treatment has informed the visual development literature. Patching therapy is a standard treatment known to have success in improving the acuity of the amblyopic eye. However, this first slide shows that patching therapy provided later in life, outside the critical developmental period for visual acuity, is less successful. This graph shows subjects who, from the age of approximately eight, were provided with vision therapy. The data are not conclusive, but this was an early suggestion that enhanced or enriched visual experience might improve outcomes by providing robust visual experience to recover function.
Does the age of presentation of amblyopia predict the final best-corrected visual acuity? If it did, younger presenters (40 or 50 months of age) should have better visual acuity (6/6 or 6/5). However, amblyopia presenting anywhere from 40 through 90 months of age can show improvement, and the age of presentation does not predict how much improvement will be seen.
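Decimal acuity, used in the cataract data above, is simply the Snellen fraction evaluated as a number; a minimal sketch:

```python
def snellen_to_decimal(test_distance, smallest_line_distance):
    """Convert a Snellen fraction to decimal acuity. The numerator is the
    test distance; the denominator is the distance at which a normal eye
    could just read the smallest line the patient read."""
    return test_distance / smallest_line_distance

assert snellen_to_decimal(6, 6) == 1.0   # 6/6 vision
assert snellen_to_decimal(6, 5) == 1.2   # 6/5 vision, better than 6/6
```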
--- END CRITICAL PERIODS IN ANIMALS AND HUMANS ---
Part 3: Special Topics
--- DEVELOPMENT AND ATYPICAL VISUAL INPUT ---
At the end of Part 2, we concluded by looking at examples of how disease processes such as cataracts and amblyopia inform our understanding of the critical period. The following material is presented in the same manner as the first two sections. We will discuss examples of atypical input to visual mechanisms along the feedforward stream of the visual pathway. The special topics that will be covered are amblyopia, myopia, and aging.
First, to connect this section to our discussion of the critical period, we will look at development with atypical input. Atypical input is defined within the filter theory of visual development. Recall that the filter theory asserts that any input that does not provide robust stimulation to a developing visual mechanism will promote development poorly. Thus, in developmental filter theory, any input that does not activate the developmental filters optimally, or that over-stimulates those mechanisms, is, by definition, atypical.
As the next paragraphs will describe, we often see a pattern with atypical experience that mirrors normal development: too many connections are added, which are then pruned back to stabilize the response. Keep in mind that the generalized neural developmental mechanisms (over-production and pruning) still operate under atypical experience; however, the developmental filters will not be provided optimal input, and the system will develop atypically. From the pattern of atypical development, we can infer how development proceeds.
Consider clinically common conditions such as astigmatism or amblyopia. Recall the claim that it is challenging to produce accurate critical period estimates from atypical development. Therefore, it is informative to look at the developmental literature derived from animal models alongside atypical visual input from clinical conditions.
Astigmatism is an orientation-dependent blur. The astigmatism slide shows simulated examples of astigmatism. Curiously, infants tend to be born with astigmatism that resolves via maturation in the first year of life.
The slide "Astigmatism - normal development" shows the classic finding -- each line segment represents the astigmatism of an individual child measured longitudinally via streak retinoscopy. A general trend appears in astigmatism to decline from the earliest measures to later measures at two years of life.
Two findings emerged roughly twenty years ago, when researchers from China and the US investigated ethnic subgroups in large samples: Chinese children were less likely to present with early astigmatism, while Native American and First Nations children were more likely to be affected. The reason for the difference remains unclear in the literature.
What about the development of those individuals whose astigmatism does not resolve? Of course, astigmatism can be addressed via optical correction, but does the atypical experience it produces affect the endpoint of development and visual performance? Measures of the contrast sensitivity function show that it does. The plots in "Astigmatism – After correcting optics" show carefully measured contrast sensitivity functions for two observers, both fully corrected via subjective refraction.
CSFs from the astigmatic axis (i.e., the orientation of the gratings used to measure the CSF are aligned with the angle of astigmatism) showed reduced sensitivity and a lower high-frequency cutoff. Note that the difference between the CSFs measured is found in adults. That a difference persists into adulthood reveals the effect of atypical visual experience produced by astigmatism even when it is corrected. (Thought question: What does this suggest about how one ought to proceed in providing appropriate visual correction for astigmatism during development?)
A manipulation that will play a prominent role in our discussion of emmetropization and myopia development is the introduction of monocular defocus. An animal (in this case, the cat) wears a lens (of either positive or negative power) to introduce myopic or hyperopic defocus. Careful manipulation of lens power has revealed the role defocus plays as the eye grows and develops.
Defocus, both hyperopic and myopic, can (as in the example in the slide "Monocular defocus – animal & amblyope") induce large changes in contrast sensitivity between the two eyes. If the image is focused in front of the retina, eye growth is slowed; if the image is focused behind the retina, eye growth is promoted. (Thought Question: what would be the effect of the atypical experience caused by anisometropia?)
The atypical experience produced by anisometropia can lead to strabismus if not addressed. If anisometropia persists, it leads to abnormal development of the ocular dominance columns in the primary visual cortex. The effect is similar (but less pronounced) to monocular deprivation, where one eye is over-represented in the cortex.
Atropine affects the mechanisms that control pupil constriction/dilation. Clinically, at low dosages, atropine is used as a treatment for myopia: the atypical experience delivered by atropine-induced blur is used to slow the progression of myopia. Monkeys were reared with low (panel A) and progressively higher doses of atropine delivered to one eye (panels B-D, filled circles). As the dosage of atropine increases from A to D, the contrast sensitivity functions of the treated eye become less sensitive. In the clinic, atropine is used at low dosages to slow myopia progression, but it is used with caution because the atypical experience it provides via increased pupil size does not agree with work on animal models of emmetropization.
Infants are born hyperopic, and their eyes will grow to match their optics. However, even with typical experience, some individuals remain hyperopic for longer while others' eyes emmetropize normally.
Localized retinal lesions are a relatively rare phenomenon in humans. While rare, neuroscience methods using primates have provided us with a detailed picture of how the retina develops to compensate for this atypical visual experience. The retina retains the plasticity to develop new connections should a portion become damaged.
In primates, lesioning the retina leads to changes in the visual cortex. After a retinal lesion, the corresponding portion of the visual cortex's retinotopic map is deprived of input, but a period of reorganization (or re-development) follows. As a result, cortical cells come to map to new locations on the retina.
When a circuit within the retina that processes a portion of the visual field is damaged, we can see how developmental processes proceed with atypical experience at the axonal level. This is shown in the slide "Retinal lesions - Axon change (monkey)" using two-photon microscopy (high-powered microscopy that can image axons). The five images at the top show axons in a small portion of the retina, from prior to lesioning through 0.25 to 28 days after the lesion. The times of the images correspond to the values on the plot below the images.
The image on the left shows the axonal structure before a lesion is induced (-7 days before lesion). The next images show axonal connections 0.25, 14, and 28 days after the lesion. Finally, the rightmost image shows the final state of the axonal process after the experiment.
Beneath the images, there is a plot of axonal density on the Y-axis against days since the lesion. In less than a day, many new axons proliferate; in both the image and the plot, the added axons are indicated in yellow. The grey line in the plot at day 0.25 is lower than the baseline measure, indicating the number of axons removed via lesioning with a laser. After two weeks of visual experience, fewer axons are added, and pruning of axons (shown in red) begins. Pruning continues to increase at 28 days, even as axons continue to be added.
Strabismus, while a relatively common condition, provides atypical experience during development. If the eyes are not aligned, double vision or suppression of one eye occurs. The absence of binocular correspondence provides an atypical experience, and the visual system copes by pruning connections. The pruning has an outsized impact in the periphery because connections are relatively sparse compared to the fovea.
Recall the notion that there is a pre-critical period predominantly guided by maturation mechanisms rather than experience. For example, the atypical experience induced by strabismus does not affect stereo vision until we enter the critical period for stereo.
In the slide, "Stereopsis: normal & strabismus" we see two groups of infants typically developing infants in green and the percentage of babies that pass a stereo test using preferential looking. Next, compare the typically developing infants (green) to the infants with intermittent strabismus (red). The line in red follows the development of typically developing children until four to six months of age. Then the atypical experience of the anomalous binocular correspondence begins to take effect, and most infants fail stereo tests.
Albinism, a pigmentation disorder, comes in two forms: tyrosinase-negative (the usual cultural depiction of albinism) and the less obvious tyrosinase-positive form (see the two pictures). Individuals with albinism have a higher incidence of strabismus. In addition, as was mentioned with Siamese cats, there is abnormal development of the LGN, which is also true for humans with albinism.
Optokinetic nystagmus (OKN) is a measure we discussed in the infant vision section. We saw that with astigmatism, behavioral markers of atypical experience persist into adulthood even when an individual is corrected to normal with lenses. Motion OKN is a more specialized term for OKN elicited by moving patterns.
In adults, OKN is observed in both the nasal and temporal directions; that is, an OKN response can be observed irrespective of the direction of motion. In infants younger than about 3 to 6 months, the OKN response shows an immature pattern. If OKN is measured monocularly and the motion is in the temporal-to-nasal direction, the OKN response matches the adult pattern; for nasal-to-temporal motion, the OKN response is usually absent. If a child has strabismus early in life, this immature pattern of nystagmus is observed, even into adulthood.
The slide, “OKN asymmetry & the development of strabismus”, shows that the effect of the abnormal experience of strabismus (esotropia) depends on the age at which it is observed. The chart shows the frequency of OKN asymmetry (Y-axis), measured in adults, plotted as a function of the age when strabismus was observed (X-axis). Note that as intermittent strabismus is observed later in development, OKN asymmetry becomes less frequent. By the age of 12, OKN asymmetry is observed no more frequently than one would expect in those without strabismus.
Note: the decline in OKN asymmetry with later onset agrees with the idea that critical periods close around puberty. That is, most visual abilities begin to show closure at the onset of puberty.
Electrophysiology has revealed a critical neural correlate, in adults, of the atypical experience of infantile esotropia/strabismus. Flickering stimuli can be used to produce EEG signals, for example, a grating where the contrast of the white/black bars reverses. Unlike a transient VEP, where a single stimulus presentation is used, this method uses stimuli that repeat over time; the repetition rate is expressed in Hertz (Hz, on the X-axis). The rate at which the contrast flips from white to black (or vice versa) is the F1 on the graph. Typical visual development creates mechanisms that respond to flicker at a rate (F2) double that of the stimulus, which is thought to be generated by a dorsal (where) pathway mechanism that encodes motion.
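The frequency-doubled (F2) response follows from simple signal processing: a mechanism that responds to unsigned contrast (a rectifying nonlinearity) driven by counterphase flicker at F1 produces energy at 2 × F1. A minimal sketch, with the 2.5 Hz flicker rate taken from the slides and the rectifying mechanism as a simplifying assumption:

```python
import numpy as np

fs = 1000                        # sampling rate (Hz)
t = np.arange(0, 1, 1 / fs)      # one second of time samples
f1 = 2.5                         # counterphase flicker rate (Hz), as in the slides

stim = np.sin(2 * np.pi * f1 * t)   # signed contrast of the flickering grating
response = np.abs(stim)             # rectifying mechanism: responds to unsigned contrast

spectrum = np.abs(np.fft.rfft(response))
freqs = np.fft.rfftfreq(len(response), 1 / fs)

peak = freqs[1:][np.argmax(spectrum[1:])]  # largest component, ignoring DC
print(peak)  # 5.0 -- energy concentrates at F2 = 2 * F1
```

A mechanism like this would show little energy at F1 itself, which matches the adult pattern described below: a robust F2 peak with the F1 response no larger than noise.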
In the slide, “Motion mechanisms & Esotropia”, in the top right corner (labeled INFANTILE ET) we see that there is a response at F1 (the counter-phase flicker frequency of the stimulus). The F1 response is no larger than the background EEG noise in adults with typical visual experience (bottom graph, labeled NORMAL ADULT). Both groups show a sizeable F2 response, indicating that the where pathway has a broad developmental filter that develops despite the atypical visual experience of esotropia.
If we move back to young children, we can see the same pattern of responses at F1 versus F2 at seven years of age. The slide, “Binocular: VEP critical period (from post-op esotropia, robust)” shows that even if esotropia is corrected via surgery before seven years of age, the electrophysiology is abnormal. The observed EEG implies that the critical period for motion mechanisms is still open at seven years of age. The top and bottom graphs on the left show the EEG responses to flickering gratings for esotropes (top) and children with typical visual experience (bottom). One difference from the previous slide is the top left panel, which shows F1 (2.5 Hz) and F2 (5 Hz). Note that for the corrected esotropes, no F2 is present. We can infer from the absence of the F2 response that the motion mechanisms remain only partially developed at seven years of age.
As we saw in Section 2, monocular deprivation is a method for delivering an atypical visual experience. We can explore this classic method in more detail. As we saw previously, MD affects both ocular dominance and orientation selectivity, indicating that it disrupts cortical development. MD has proved useful as a model for both amblyopia and emmetropization (myopia).
The slide “Monocular deprivation” states a few facts about MD. A new concept we will consider is how atypical experience with MD occurs in a fixed sequence described by Stryker and colleagues. MD does not abolish the responses to the deprived eye, one way in which it is similar to the atypical experience of amblyopia.
The effect and time course of MD depend on which portion of the brain we are observing. Critically, MD delivered early in an animal’s life does not affect connections from V1 to V2. However, as we will see, this does not mean that visual areas downstream of V1 are unaffected.
What is known about how connections are changed with monocular deprivation? The slide “Deprivation & VEP” shows an experiment that discovered how the atypical experience affects the brain at the synaptic level.
Researchers simultaneously recorded VEPs (via responses to flashing lights delivered to each of the deprived and non-deprived eyes) and traced the axonal connections along the feed-forward visual pathway in the cat. In panel a, we see the circuit that researchers investigated. Note that the optic fibers from the blue eye cross at the chiasm, while the fibers from the other (yellow) eye do not. Researchers recorded VEPs in the cortex. In the first column of row b, we see the pre-MD response; this area of the brain responds to the contralateral eye (the blue bar for normalized VEP is larger than the yellow). Row c shows the connections in layer 4 (IV) of the cortex. Given that VEPs are generated by axonal firing, the greater number of terminals for the contralateral (blue) eye generates a larger VEP response when stimuli are presented to the blue eye than to the yellow.
After three days of monocular closure (represented by coloring the blue eye grey in the figure), the response to visual stimuli, as measured by VEP, decreases because axons are pruned back quickly. Finally, after seven days of monocular closure, the VEP response flips to the ipsilateral eye (yellow), the first stage of anatomical reorganization from the atypical visual experience of MD.
Can the effects of short-term MD be observed? In the anatomical and functional development section, we saw that long-duration deprivation shifted cell counts toward the non-deprived eye. Our focus up to this point was on the input layer (layer 4). However, modern versions of the classic MD experiments can distinguish changes in cell responses to the deprived versus non-deprived eye across the layers of the primary visual cortex with only 24 hours of monocular deprivation. In the slide, “Cats 24hr monocular deprivation,” we see the effects of short-term deprivation on the layers of the primary visual cortex.
In these four panels (A through D), the ocular dominance categories run from 1 (cells that respond only to the deprived eye, marked with the filled eye symbol) to 7 (cells that respond only to the non-deprived eye). The grey and black bars represent the percentage of cells counted that respond to both eyes to varying degrees (categories 2 - 6). Given the longer-duration experiments and cell counts we have already examined, the results in panel C (layer IV) ought not to be surprising; 24 hours of deprivation does not alter the ocular dominance of cells in layer IV.
In the other layers, we see a different pattern: in layers I, II, V, and VI, within 24 hours a larger percentage of cells responds to the non-deprived eye. For the deprived eye, there is little change among the six layers. The increased cell counts in layers other than IV are consistent with the developmental strategy of creating connections first. This leads us back to the general developmental concept of creating and then pruning. In the previous slide, we saw pruning of axons after three days of monocular deprivation (labeled MC in the slide; MD in our class), but remember that those data were measured with VEP, which does not distinguish among the layers of the primary visual cortex, and the deprived eye was compared to the non-deprived eye.
Thus, we now have a more detailed picture of how MD works. First, within the first day, there is an overproduction of connections/cells in layers other than IV. Then, as the eye remains deprived for three days to a week, the connections that represent the deprived eye are pruned back.
Thought question: the effects of MD happen quickly in animal experiments, but patching therapy requires much more time to improve the vision of those with amblyopia. What might be driving the difference in the time course? If one could do so ethically, how would you measure the effect of short-term deprivation in a sample of amblyopic children?
Up to this point, we have discussed how MD affects visual development. An advance on the method was to provide atypical experience to one eye for a period of time and then switch the deprived eye; this technique is called reverse occlusion. As we saw in our discussion of the critical period, the time at which an atypical visual experience is encountered can alter the system. However, one question left unanswered was the flexibility of the system to reorganize after a period of atypical experience. Recall that the critical period ends with closure. If flexibility in the development of vision persists, closure is not complete.
The panels A-D in the slide “Reverse occlusion” show changes in cell counts on the left and corresponding diagrams of the MD paradigms on the right. The data in graph A, from the standard MD paradigm, ought to be familiar. The paradigm is plotted on the right of A: from top to bottom is the developmental age of a cat with one eye deprived, shown in black, from eye-opening until four weeks of age. We see the typical shift of cells toward representing the open eye (a shift to categories 1-2, cells that represent the open eye).
The diagram for B depicts a reverse occlusion paradigm. The initial phase of monocular deprivation is the same, but the deprived eye is switched at four weeks. Note that the initial atypical visual experience is the same as in A. The data in graph B show a dramatic shift from the eye initially open to the eye that received normal visual experience from four to six weeks of age. This indicates that the developmental filter that corresponds to the critical period is still open and very sensitive. Instead of the pattern in A, the reverse pattern is shown, and cells respond to the initially deprived eye (categories six and seven).
Note that the duration of atypical experience differs between paradigms A and B. Graphs C and D and their MD paradigms answer two questions. Does the duration of deprivation before the reverse occlusion affect visual development? If the occlusion is again reversed (i.e., the deprivation is switched back), do the cell counts switch again? The answer to both questions is yes, but with one caveat. C shows that reversing the MD at six weeks (compare to graph A) can still shift cells to the eye with typical visual experience. Note that the shift is not as extreme: some cells still respond preferentially to the occluded eye. D shows the same caveat; despite the reverse occlusion being applied within the critical period, as in B, the shift is less complete.
What have we learned from reverse occlusion? Previously, we saw that researchers could map out the critical period by comparing the effect of MD across different animals. Reverse occlusion (and other variants of delivering reversed atypical visual experience) can map the critical period by changing when the atypical experience is delivered within the context of other atypical visual experiences.
Also, atypical visual experiences for humans that present in the clinic are rarely as clean and controlled as MD or other experiments that use atypical visual experience. Multiple issues will present within the same clinical case. Atypical visual experience in humans can be unstable as the developing patient experiences the changes that come with the reorganization of the brain. Thus, knowing the history of a patient as they develop with abnormal experience can be crucial for reasoning through visual complaints.
Before we leave our discussion of variations on monocular deprivation, we can discuss the effect of part-time reverse occlusion. That is, reversing occlusion within a single day. Researchers used this paradigm to investigate the effect of delivering periods of atypical visual experience for short durations to collect evidence on short-duration patching as a potential treatment for amblyopia.
The Y-axis shows cat visual acuity (measured behaviorally) versus the hours per day that cats experienced reverse occlusion on the X-axis. All animals began the experiment at six weeks of age, within the critical period for visual acuity. In the context of patching therapy, which is a penalization treatment, success is measured by reduced acuity for the penalized eye, but the penalization ought not to be so extreme as to cause the reorganization we saw in the previous slide. From the graph, for 6 to 7 hours of occlusion, we see significant reductions in acuity for both eyes. This paradigm would be an inadequate treatment for the patient; it would reduce visual acuity overall. Surprisingly, short durations of occlusion, three hours or fewer, show little influence on visual acuity. This treatment would be ineffective as it does not penalize the eye. Zero hours of occlusion serves as a control condition for the experiment. Four to five hours of penalization seem promising, as the penalization shows an effect, yet the open circles lie within the normal range of visual acuity. Taken together, this shows promise for short-duration patching influencing the development of amblyopia without affecting the fellow (or fixing) eye.
Restricted rearing is a general term for raising animals in an environment that provides an atypical experience. One example is rearing with restricted orientations. This classic experiment was performed to look at the effect restricted rearing has on orientation-tuned cells in the brain. Recall our discussion of the effect of astigmatism; orientation-restricted rearing is similar to a very severe astigmatism.
Restricted rearing can be done with any visual dimension (e.g., the color of the lights). In this classic experiment, however, researchers looked at orientation coding in rearing environments that contained only horizontal or only vertical orientations. The two plots show the orientation tuning of cells in the cat visual cortex. On the left, the cat reared in an environment of horizontal orientations develops cells tuned to horizontal orientations. The reverse is found if the cat is reared with vertical orientations.
What if an animal is reared in a motion restricted environment? Given the data from orientation restricted rearing, what would we likely observe with an atypical experience of motion? The results shown in this slide are consistent with what we observed for orientation. If a cat is reared viewing only rightward motion, then most of the cells in the visual cortex will encode rightward motion, as shown in the diagram on the right, which is unlike the pattern observed in control animals.
What if we provide an enriched environment? Promising vision therapies focus on enriching visual experience to promote visual development. Here we see evidence of how cells tuned for orientation in the ferret can be induced with enriched visual stimulation of motion early in life. First, the ferret’s cortex is measured, and cells that respond to vertical orientations but not motion are found. Then the ferret receives motion training in the rightward direction, and the cells are measured again. With a short duration of rightward motion training, this enriched experience is enough to promote orientation-tuned cells to respond to rightward motion before they would otherwise become motion selective. In the context of the filter theory of visual development, this input enhances the developmental filters for motion and induces the development of motion mechanisms.
This section on atypical visual experience will close with two examples that relate to amblyopia development. The first comes from an animal model where strabismus is induced via surgery. The graphs are presented in a familiar way: cell counts are plotted on the Y-axis and ocular dominance on the X-axis. After surgery, the animal receives an otherwise normal visual experience. For typically developing control animals (those without surgery), we see an abundance of cells that do not have a strong ocular dominance preference. These are the cells that will tend to develop into cells that signal binocular correspondences. On the other hand, induced strabismus breaks binocular correspondence, and cells develop a strong ocular dominance preference.
Induced strabismus, with a horizontal eye turn, also disrupts orientation tuning and particularly for binocular cells. The reasoning is as follows: a horizontal bar in the visual field will not be aligned between the two eyes. This atypical experience disrupts the development of orientation tuning. Furthermore, if the edges in an object do not fall on the expected corresponding points of the two eyes, then binocular cells will not signal depth appropriately. (Thought question: how would orientation tuning of horizontally tuned cells in the visual cortex be affected if a muscle section induced an upward eye turn?)
We learned that connections from V1 to V2 are not affected by monocular deprivation. Do developed amblyopes show the same pattern when their brains are scanned? Researchers used patterns that stimulate V1 (the circular patterns A and D on the left of the slide), a pattern (B) that stimulates areas of the brain that perform contour integration, and a control pattern (C). (Note: this task taps the development of the same brain areas that the Kanizsa squares with different support ratios stimulated in the child visual development section.)
We can see the difference in brain activity between the amblyopic eye and the fixing (or fellow) eye. The brain scan on the right (A) is a heat map; note how there is little activity overall, and almost none when compared to the fixing (fellow) eye in brain image (B). However, when the fixing eye is stimulated with pattern B, there is high activation (shown in red) in areas outside of V1. As with monocular deprivation, the circuits that correspond to the V1-to-V2 connection of the amblyopic eye are unchanged. However, there are substantial differences when we look beyond V2, even when the amblyope is wearing their best correction.
The atypical experience of amblyopia can be addressed in the clinic if the outcome measure of visual health is taken to be visual acuity. However, the abnormal experience during development that comes with amblyopia disrupts the developmental progress of the brain.
--- END DEVELOPMENT AND ATYPICAL VISUAL INPUT ---
--- AMBLYOPIA RESEARCH ---
When a suspected amblyope presents in the clinic, the goal is to treat them and recover visual function. However, recovery from the atypical experience associated with amblyopia is rarely complete. The visual function recovered depends on the measure used and the group norms for that measure.
Amblyopia research has followed two tracks. First, researchers have characterized the effect that atypical visual development has on individuals. Second, using that knowledge, researchers have devised novel, evidence-based treatment strategies.
As we have discussed, amblyopia has parallels with monocular deprivation experiments in that the two eyes do not receive the standard binocular input. Researchers take the patients they can get and are in a bind. Generally, the duration of atypical experience, presentation of the disorder, and treatment effects are intermingled. Amblyopia is a “natural experiment” -- the disorder provides an atypical visual experience. Despite the relative lack of control versus animal experiments, research into amblyopia provides insights into the development of human vision.
The “types of amblyopia” slide is a basic breakdown of amblyopia. Note that, like any clinical population, there is heterogeneity in the population, and sometimes the type of amblyopia is not discernable (idiopathic amblyopia).
When one encounters an amblyope in the clinic, one might run into several visual complaints. They are listed here. These complaints become the basis of research questions.
The slide “Amblyopia - questions” lists questions that experiments have investigated. This section of the lecture will follow these questions, and we will see how the evidence stacks up.
The slide, “Acuity in amblyopia - Patching” shows how, with age (in years), patching therapy promotes gains in acuity (in green) in the amblyopic eye (Y-axis). This finding ought to be familiar, as we have seen that patching therapy can improve acuity for amblyopes. However, researchers asked whether the atypical input to the patched fellow eye affects its resolution acuity. The plot in red shows the loss of visual acuity in the patched fellow eye. Consistent with our discussions of critical periods and atypical input, the deprived (patched) eye shows a reduction in visual acuity, particularly if the atypical visual input from patching is delivered early in life.
Another question concerns whether different measures of visual acuity agree in amblyopia. Visual acuity measures may differ marginally among those with typical vision, but in a developing amblyope, one observes substantial differences depending on the method used to measure acuity.
In the graph, two measures of visual acuity are plotted: Teller Card acuity and acuity measured with Landolt C stimuli. If the two measures agreed, the points would fall along the diagonal line in the graph. However, as highlighted by the red ellipse, Teller acuity overestimates the acuity of strabismic amblyopes relative to Landolt C acuity. Our measures of acuity ought to be consistent; here we have an example where they are not, further emphasizing the point that acuity may not be the best measure to track the development and treatment of amblyopia.
In the human infant vision section, we discussed that fixational stability plays a role in developing visual acuity. If fixation is unstable, then the motion of the eye itself will impair visual acuity. An observer with normal fixation will have some fixational instability, as shown in the left picture in the slide, “Amblyopia: Resolution Acuity loss from unstable fixation,” where the red circle shows the variability of fixation; the instability is unbiased and randomly distributed across all directions. In amblyopes, this is not the case, as shown in the middle and right images. In each case, fixation instability is greater and is biased along the horizontal meridian. The amblyopic fixational instability in these images would impair visual acuity, especially along the horizontal meridian. (Thought questions: how would this affect the contrast of the gratings used in Teller Cards? Which of the Landolt C stimuli would it affect the most?)
As we saw in the development of children’s vision, Vernier acuity is a key marker of visual development in that it measures the ability to judge position. Again we see a limitation of visual acuity. On the Y-axes of the two graphs are two measures of acuity, Snellen acuity and grating acuity. For Snellen acuity, the ability to judge position (Vernier acuity) relates lawfully to the measured Snellen acuity: in general, Vernier acuity is about four times finer (when both are measured in minutes of arc) than Snellen acuity. However, for amblyopes this relationship does not hold when the stimulus is changed; the difference is highlighted by the red oval. Vernier acuity is worse than one would expect from the measured grating acuities. For both research and establishing norms in the clinic, this is a problem: our measures of acuity relate to visual function in a stimulus-dependent way.
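Comparing acuity measures requires putting them in common angular units. A minimal sketch of the standard conversion from a Snellen fraction to the minimum angle of resolution (MAR, in arcmin); the helper name is ours:

```python
def snellen_to_mar(numerator: float, denominator: float) -> float:
    """Minimum angle of resolution in arcmin from a Snellen fraction.

    By definition, 20/20 vision resolves letter detail subtending 1 arcmin.
    """
    return denominator / numerator

print(snellen_to_mar(20, 20))   # 1.0 arcmin
print(snellen_to_mar(20, 40))   # 2.0 arcmin (each letter detail twice as large)
```

With both measures expressed in arcmin, a lawful ratio between Vernier and Snellen acuity can be plotted and departures from it, as in the amblyopic data, become easy to see.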
The position acuity judgments measured by Vernier acuity depend upon localizing the relative positions of at least two stimuli. If an observer is uncertain of the position of one stimulus, this will impair their ability to judge relative position; this is referred to as “spatial uncertainty”. Given the disagreement between resolution acuity measures and position acuity measures, Hess & Pointer asked whether spatial uncertainty impairs amblyopes' position judgments -- they used a stimulus in which both the spatial frequency and the relative spacing of the grating elements could reveal deficits in amblyopes. Larger gratings have a smaller spacing (shown in A), while smaller gratings (B) have a larger spacing.
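The logic that positional uncertainty propagates into relative-position judgments can be checked with a quick simulation: if each element's perceived position carries independent Gaussian noise, the noise in the perceived offset is the two sources combined in quadrature. A sketch under that independence assumption (the noise values are arbitrary):

```python
import random
import statistics

random.seed(0)

sd_a, sd_b = 1.0, 1.0   # positional uncertainty of each grating element (arbitrary units)

# Perceived offset = noisy position of element A minus noisy position of element B.
offsets = [random.gauss(0, sd_a) - random.gauss(0, sd_b) for _ in range(100_000)]

sd_offset = statistics.stdev(offsets)
print(round(sd_offset, 2))  # ~1.41, i.e. sqrt(sd_a**2 + sd_b**2)
```

On this account, increasing the positional uncertainty of either element inflates alignment thresholds, which is the pattern the next slide tests in anisometropic versus strabismic amblyopes.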
The graphs (Y-axis is alignment accuracy and X-axis is spatial frequency) in the slide “Spatial uncertainty – Aniso versus Strabismus” show the surprising result that anisometropic amblyopes do not show increased spatial uncertainty (note how the symbols for anisometropic amblyopes overlap). In contrast, strabismic amblyopes’ alignment accuracy is reduced (higher values represent a reduced ability to align the gratings) when the spacing (as shown in B in the previous slide) is increased. Also, note that the effect of spacing did not depend upon spatial scale (spatial frequency). The lack of spatial frequency dependence indicates that the amblyopes did not have increased uncertainty in locating the bars within a grating, but rather in judging the relative position between gratings.
This slide is interesting because it shows there are differences among the types of atypical experience an amblyope receives as their vision develops. Strabismic amblyopes, because of the misalignment of their eyes, develop with greater spatial uncertainty; that is, corresponding stimulation on their retinas does not reliably correspond with locations in the visual scene.
The periphery is less sensitive than the fovea in normal observers. In the far periphery, there is a monocular region encoded by one eye. Given that amblyopia is a binocular disorder, could the monocular periphery have received typical visual experience during development?
The slide “Amblyopia - no differences in monocular visual field” shows CSFs for the fellow eye (open squares) and amblyopic eye (filled squares) of an individual with anisometropia. The individual shows the typical pattern -- sensitivity is reduced when measured in the periphery. Consistent with the idea that amblyopes receive atypical binocular experience but more typical monocular experience, sensitivity does not differ in the monocular periphery (right panel: open and filled squares overlap).
An open question we have not yet addressed: can training improve contrast sensitivity in amblyopes? The two graphs show two amblyopes who have had the atypical visual experience of amblyopia and patching therapy; to be specific, their visual development is complete. Perceptual training can be thought of as an enriched visual experience. Importantly for research into amblyopia treatments, we can see (circles versus triangles) that contrast sensitivity improves in the amblyopic eye during training. Recall from the infant vision section that the contrast sensitivity function continues to develop long into adolescence. It is promising that enriched experience may reopen the window of development for the amblyopic eye long after typical development is complete.
The atypical experience of amblyopia affects the development of spatial uncertainty for strabismic amblyopes, and the sensitivity of the binocular periphery is reduced for anisometropic amblyopes because of their atypical experience during development. Spatial summation refers to the ability of the visual system to integrate information over space. In the example shown in the slide’s animation, we see that the size of a spot of light in the periphery can be varied to obtain a measure of sensitivity. In typical development, spatial summation is tracked via the development of the size of the visual field (recall the development of the visual field in children). What does the atypical experience of amblyopia change in the development of the periphery?
In the slide, “Spatial summation impaired in amblyopia,” on the left we see a graph that tracks the minimum size of the center of the stimuli (those shown in the last slide) needed for a large number of observers (both typical and amblyopic) to detect a spot of light in the periphery (X-axis). From this graph, researchers track the typical observers’ ability to summate information from the periphery (expressed as a visual acuity on the Y-axis). Amblyopes (both anisometropic and strabismic) are worse than typically developed observers (open diamonds). One interpretation of these results is that the atypical visual experience of amblyopia affects ON center-surround organization early in the visual system (e.g., retina or LGN), impairing summation and causing it to deviate from a known psychophysical law (Ricco’s law).
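Ricco's law, the psychophysical law mentioned above, says that within a critical summation area, the threshold intensity for detection falls in inverse proportion to stimulus area (intensity × area = constant). A toy sketch; the constant and critical area are hypothetical values, and behavior beyond the critical area is simplified to no further summation:

```python
def ricco_threshold(area: float, critical_area: float = 1.0, k: float = 1.0) -> float:
    """Detection threshold intensity for a spot of the given area.

    Within the critical area, complete spatial summation holds: I * A = k.
    Beyond it (a simplification), no further summation occurs.
    """
    if area <= critical_area:
        return k / area
    return k / critical_area

# Doubling the area within the critical region halves the threshold:
print(ricco_threshold(0.5), ricco_threshold(1.0))   # 2.0 1.0
# Beyond the critical area, enlarging the spot no longer helps:
print(ricco_threshold(2.0), ricco_threshold(4.0))   # 1.0 1.0
```

An observer whose thresholds do not fall with area in this way, as in the amblyopic data, is deviating from complete summation within the critical area.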
Generally, vision and its development are tracked via performance (e.g., acuity tasks, contrast sensitivity), but a pair of classic experiments investigated whether amblyopes see the world differently. If we look at the slide, “Classic: Visual distortions, normal CSF,” we can see contrast sensitivity functions that overlap for the amblyopic and fellow eyes. Contrast sensitivity was measured with gratings (both horizontal and vertical) through both the amblyopic and fellow eyes. From these data, we could conclude from performance that these amblyopes have been treated successfully, but is their vision typical?
From the drawings the amblyopes make (shown above the contrast sensitivity functions for each frequency), we can conclude that despite the lack of performance differences, the vision of this observer’s amblyopic eye is atypical. Furthermore, the drawings this observer makes become more scrambled as the spatial frequency increases. Another amblyope (shown in the following slide) also shows normal contrast sensitivity but draws their perception of gratings with portions missing. Both of these observers show normal performance yet have non-veridical perception, leading to complaints of poor vision in the clinic.
A later study followed many more observers, some with impaired performance, and tracked whether the amblyopes drew gratings veridically or non-veridically. Several intriguing findings emerged. The slide “Non-veridical drawings” shows CSFs in the right column. Note that, unlike the previous study, each of these observers has impaired contrast sensitivity. Again, we see that the drawings show that vision is impaired in a way that depends on spatial frequency. One observer includes regions where the grating is missing. Two others show the introduction of inconsistent orientations and jagged lines in the grating.
The slide “Non-veridical Percept Types” provides examples of the classes of non-veridical perceptions drawn. Using a large sample of observers, researchers characterized how the atypical visual experience that drives amblyopia leads to abnormal vision. The world can be distorted, which could lead to anomalous binocular correspondence. Abrupt onsets in the grating can be introduced, leading to degraded Vernier acuity or increased spatial uncertainty. Inconsistent orientations can be introduced, a perceptual marker like the orientation anomalies we saw in animals with induced strabismus. Finally, fragmented distortions and scotomas could lead to impaired spatial summation.
Also, this slide and the following slide show drawings made with gratings at high contrast (very visible, at levels an amblyope would encounter during their daily visual experience). However, the amount of distortion decreases as the contrast decreases. This highlights something that is difficult to measure in both the lab and the clinic: how observers perform in daily life at suprathreshold levels, well above the limits of performance.
Amblyopia is development with atypical binocular vision. The slide “Amblyopic interactions” shows the testing setup a research group used with children (around ten years of age) to determine whether children with amblyopia experienced more interaction between the two eyes than anisometropic children without amblyopia or typical children. The tasks tested were tumbling-E letter acuity, contrast sensitivity, and alignment (Vernier) acuity. The non-tested eye was presented with a black square.
The stimuli were presented stereoscopically to allow testing of the amblyopic/worse/dominant eye and the other eye while input to one eye could interfere with the other. Each task thus yielded two measures of performance, one with each eye receiving the appropriate input for the task while the other eye received an inappropriate input. The data are presented as an interaction index, defined as the difference in performance between the “good” eye and the atypical (or non-dominant) eye.
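The interaction index described above can be sketched as a simple difference score. The exact formula Lai et al. used may differ (e.g., it could be normalized or log-scaled), so treat this as a hypothetical illustration with invented numbers:

```python
# Hypothetical sketch of an interaction index: the difference in
# performance between the "good" (fellow/dominant) eye and the
# atypical (non-dominant) eye on the same task. A larger index means
# a stronger interocular interaction. Numbers are invented.

def interaction_index(good_eye_score, other_eye_score):
    """Difference score between the two eyes' performance."""
    return good_eye_score - other_eye_score

amblyopic_child = interaction_index(good_eye_score=1.2, other_eye_score=0.6)
typical_child = interaction_index(good_eye_score=1.0, other_eye_score=0.95)

# The amblyopic pattern shows a larger eye difference than the typical one.
assert amblyopic_child > typical_child
```

Note that because the index is a within-observer difference, overall performance differences between patient groups and controls cancel out, which is why the results slide cannot show them.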
In the results slide, “Lai et al., 2011 – Results,” in panel (a), we see the acuity interaction index. For acuity, we see that the amblyopic eye is worse than the good eye (higher interaction index) for amblyopic children than for either anisometropic children without amblyopia or normal children. For contrast sensitivity, both normal and amblyopic children performed better with their fellow/dominant eye. However, anisometropic children showed no interaction, which researchers attributed to a lack of development of eye dominance. Finally, for alignment acuity, normal children, but neither group of patients, showed an improvement depending on the eye tested (note: because these data are expressed as an interaction, overall differences between patients and controls are not shown).
Taken together, what do these data say about binocular interactions and development? For acuity, occluding the amblyopic eye with a black square improves performance. That is, the amblyopic eye was not interfering in the acuity task. However, for alignment or Vernier acuity, occluding the amblyopic eye does not improve the performance of the fellow eye; this fits with the acuity losses we saw with patching (the fellow eye shows deficits at this stage). Patching appears to reduce the alignment sensitivity of amblyopes.
Developing the ability to perceive stimuli under crowding was introduced in the first lecture as a key skill associated with the development of reading. Crowding is a pervasive phenomenon and can be shown with various stimuli, from letters to objects and drawings. As we saw with the non-veridical drawings at (especially) high contrast, and will see in subsequent slides, the fovea of the amblyopic eye can be impaired, an atypical visual experience that has consequences for development.
Amblyopia often presents during the years just before or during when children are introduced to reading. The data presented in “Amblyopia - Crowding and Reading” show reading speed (Y-axis, in words per minute). The experimenters manipulated the letter spacing of a text (X-axis); they tested individuals who had undergone successful patching therapy against a typical control group. First, let us consider the typical pattern of the control group. In the fovea (black open symbols), for both their dominant and non-dominant eyes, reading speed is relatively fast, from 200 to 800 words per minute, save for when a small spacing is presented to the non-dominant eye. For stimuli in the periphery, reading speed is slow and requires a large spacing (> two or three degrees of visual angle).
How do typical controls compare to amblyopes? The first difference appears when we examine reading speed for the non-amblyopic (fellow) eye. Note the similarity between the curve (open circles) for amblyopes and the non-dominant eye of the controls. Recall the slide that showed that patching therapy induced a visual acuity loss. The decreased reading speed of the fellow eye is a marker of that loss.
The most striking effect appears when reading speed with the amblyopic eye is compared to that of controls (marked by the arrow). To read at a speed close to normal controls, the letter spacing of the text must be over five times that of the fellow eye. Reading with the amblyopic eye is only marginally better than reading in the periphery. This change in reading has an impact on educational outcomes. From the perspective of amblyopia research, it pointed to a novel question, “Does the fovea of the amblyopic eye function like the periphery of controls?”
The hypothesis that the fovea of the amblyopic eye reaches a developmental endpoint similar to the periphery of typical observers was tested by Hussain et al. (2012) in a crowding experiment similar to the one we saw with children in the first part of this course. Panel A in the slide “Amblyopia: Foveal crowding?” shows a letter crowding experiment where an uncrowded letter (top) is easier to identify than a crowded letter (bottom). The experimenters then compared resolution acuity when the letter was presented uncrowded or crowded (i.e., with flanking letters). Graph B compares crowding in the amblyopes’ foveae for their fellow eyes (open circles) and their amblyopic eyes. The diagonal line shows what one would predict if the flankers did not affect resolution acuity. In the fovea of the fellow eye, the expected result of crowding having very little effect holds. However, flanked acuity is on average five times worse than unflanked acuity in the fovea of the amblyopic eye.
The graph in panel C shows the results from a group of normal controls tested with the same stimuli but placed in the periphery. The filled and open triangles represent the upper and lower visual fields, which provide approximately the same result: when the letter is flanked, the acuity threshold roughly doubles.
To further explore the similarities between the amblyopic fovea and the periphery, a perceptual training paradigm was used. Of the four graphs in the slide “Crowding: training the amblyopic fovea,” the two in the left column show improvement as a function of “bin,” where each bin is 200 trials. Extensive training is needed to show an effect, but notice that both the amblyopic fovea and the periphery can be trained to become more robust in the presence of crowding. As we will see in the research on amblyopia treatments, improving amblyopic vision takes lengthy, intensive training to obtain good outcomes, especially after the critical period has ended.
We saw that distortions appear in grating stimuli for amblyopes. Distortions pose another challenge to the development of vision under the atypical experience of amblyopia -- matching corresponding points in visual space on the retinas of the two eyes. An example of how flawed the amblyopic visual system can be at matching points in visual space is shown in the slide “Subjective binocular distortions in Amblyopia”. The experiment that reveals amblyopic binocular distortions is relatively simple. Red-green anaglyph stereo glasses are used, and a grid of points is presented to one eye while the observer fixates on the center of the screen. The observer’s task is to click on where a point is perceived.
The graph shows where one amblyopic observer clicked in visual space (the center of the graph is the center of the visual field) when viewing the stimuli with their fellow eye (open circles) and amblyopic eye (filled circles). The filled circles, representing the localization of points in visual space by the amblyopic (esotropic) eye, are shifted, generally to the left, compared to the fellow eye. (Thought question: which eye might be esotropic in this individual?) The question mark symbols indicate where the observer reported that they could not make a match (or localize a stimulus) with their amblyopic eye.
Distortions in localization (both monocular and binocular) are the result of the development of vision with the atypical visual experience associated with amblyopia. The drawing experiments establish their existence, but how the distortions relate to basic visual properties remains unclear.
The remainder of this section on amblyopia research will describe an experiment looking at whether distortions can be observed in performance (recall they were observed in perception in the drawing task) and in more complex tasks (i.e., dorsal-stream tasks).
In previous slides, we saw examples of observers with normal CSFs drawing distorted gratings. There are two ways that distortions can be introduced, either via mechanisms related to orientation or to position. Given the research showing binocular cell count changes arising from muscle sectioning, and the possibility of increased position uncertainty, either mechanism could be a candidate. Which mechanism might limit performance?
The experimental stimuli are shown in the slide, “Amblyopia – why distortions?”. Instead of measuring performance in a single task, researchers measured performance in a dual-task paradigm and compared the performance of the amblyopic and fellow eyes. Observers had to detect a grating, thus equating the sensitivity of the two tasks. In the second task of the dual-task paradigm, observers also discriminated either its orientation (Experiment 1, top panel) or its phase, i.e., relative position (Experiment 2, bottom panel).
There is no difference between the contrast sensitivity for gratings at either orientation for the fellow eye, a result that is found in observers with normal developmental experience. Surprisingly, given the misperceived orientations introduced in drawings, there is no difference in detection or discrimination at different orientations (filled symbols) for the amblyopic eye either, even in the presence of reduced overall sensitivity (i.e., the filled symbols are lower than the open symbols). Thus, the developmental mechanism that limits the amblyopic eye does not appear to be linked to mechanisms that encode orientation.
For Experiment 2, we see a different picture when we look at the amblyopic eyes. The detection of gratings with either a light or a dark bar in the center does not differ (marked with the arrow), but discriminating gratings that differ in phase is difficult for the amblyopic eye.
Suppose phase or position encoding mechanisms are the limiting factors for amblyopic vision. In that case, we can infer that the developmental mechanism underlying position encoding is more vulnerable to atypical experience during development. Also, phase/position encoding limitations would increase spatial uncertainty. Finally, failing to encode phase leads to anomalous binocular correspondences.
Given that phase encoding differs in amblyopes, we can predict that they will perform worse at contour integration (which we saw in the work with brain scans of V1 and higher visual areas). Another simple task that requires precise position or phase encoding is the shape discrimination task shown in the slide, “Amblyopes – distortions in shape discrimination”. Amblyopes, because of anomalies in their phase encoding mechanisms, can have difficulty detecting distortions in shape.
Phase encoding is a basic spatial vision ability; because it is compromised, it can have downstream effects on the spatial vision mechanisms that develop atop it. Children with amblyopia are known to have difficulty using structure to create a motion percept. Two examples of structure from motion are shown in videos in the slide deck. The pattern of motion of the dots (black and white dots moving in a pattern consistent with a cylinder) should be apparent. The stimulus can be varied to make it easier or more difficult to discern the depth of the cylinder.
The graph in the slide “Amblyopia & Structure from Motion” shows that amblyopes have difficulty with structure from motion tasks, as predicted by their atypical phase discrimination development. Also, note that amblyopes are less sensitive on this task with both their amblyopic eye and their fellow eye. That the difficulty extends to both eyes indicates that the impaired development of phase perception deprives the complex visual mechanisms that extract structure from motion of the robust visual input they require to develop typically.
Amblyopia can also affect hand-eye coordination. In a reaching and grasping task, we can see the dramatic effect of amblyopia on grasping an object. In the data for the control group, we can see a trace of the size of the grasp aperture (e.g., the gap between the thumb and the index finger when picking up a pen) over time. After a reaching motion is initiated, the size of the grasp is increased to be wider than the object (Phase 1), then the grasp width is adjusted to match the size of the object (Phase 2) until the final phase, when the grasp encloses the object (Phase 3). The dashed lines mark the time each phase of the grasp begins. Note that the time it takes for normal controls to grasp an object is rapid (~500 ms from when a reach is initiated).
When we look at the time course of grasping for amblyopes, we see a dramatic difference in the time course for grasping an object. All three phases of adjusting the grasp aperture take substantially longer, particularly Phase 3, where the final grasp occurs. Lastly, the entire time course is much longer, up to two seconds versus 500 ms, despite the initiation of the reach beginning at approximately the same time.
Finally, research reveals a wide array of developmental endpoints for individuals diagnosed with amblyopia. One notable difference is between those amblyopes who can recover some stereo vision (as revealed by standard stereo vision screening tests) and those who fail stereo vision tests. Compare the yellow symbols versus the grey symbols in the graph on the slide, “Amblyopia: more stereo better acuity”. At all levels of optotype acuity, we see better grating acuity for amblyopes with some stereo vision function.
This observation, that patients with some stereo vision show better acuity, emphasizes a goal of research into the treatment of amblyopia: finding methods that lead to the recovery of stereo vision function. Moreover, despite the recovery of visual acuity from patching therapy, patching is a monocular treatment that can have deleterious consequences for the fellow eye. Developing and translating research into treatments that go beyond patching is an active field that has led to insights into development from treating patients.
--- END AMBLYOPIA RESEARCH ---
--- AMBLYOPIA TREATMENT RESEARCH AND DEVELOPMENT ---
Let us contextualize amblyopia treatment research. What is the scope of the problem? From the table across many countries (US, UK, Netherlands, Sweden, and Australia), we can see that the prevalence of amblyopia is approximately 2.5%. In our discussion of Amblyopia Research, we have seen a pervasive impact on the development of the visual system that extends to crucial daily tasks such as reading and hand-eye coordination. Given the prevalence of 2.5% of the population, this represents a substantial burden to society.
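To put the ~2.5% prevalence figure in perspective, here is a back-of-the-envelope calculation of the expected number of amblyopes in a large population. The population figure is an illustrative assumption (roughly the US), not a number from the course material:

```python
# Rough scale of the burden, assuming the ~2.5% prevalence cited in
# the slide. Integer arithmetic avoids floating-point rounding.
prevalence_per_1000 = 25            # 2.5% expressed per 1,000 people
population = 330_000_000            # illustrative, roughly the US

expected_amblyopes = population * prevalence_per_1000 // 1000
assert expected_amblyopes == 8_250_000
```

Even under this crude estimate, millions of individuals in a single large country would carry the reading and hand-eye coordination consequences described above, which is why prevalence figures translate into a substantial societal burden.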
Given the number of patients in the studies, we can partition the numbers based on the age of diagnosis for strabismic versus anisometropic amblyopia (or their combination). Strabismic amblyopia is found earlier in life than anisometropic amblyopia. Part of this may be that strabismic amblyopia has a more straightforward presentation, or perhaps that anisometropic amblyopia emerges later, once the emmetropization process begins.
The possibility that anisometropic amblyopia may be associated with atypical emmetropization comes from data showing that its prevalence increases with age, toward the ages when myopia, in general, is diagnosed.
We have discussed monocular deprivation and its effect on development in detail. However, depriving one eye was used as an amblyopia treatment long before monocular deprivation was studied in detail. As we have seen, MD has a dramatic effect on the development of the visual system. Applying this treatment can dramatically alter the visual system by penalizing the fellow eye, shifting the development toward improving the functioning of the amblyopic eye. Patching can be paired with atropine treatment (dilating the pupil).
This portion of the lecture focuses on how standard patching therapy falls short of restoring normal visual function. One issue is that as many as 20% of patients who undergo patching therapy fail to recover. Also, consistent with our discussion of the critical period, patching therapy applied later in development (after approximately seven years of age) leads to poorer outcomes. All told, this has led researchers to explore what other treatments could be applied to restore function, either instead of or in addition to patching therapy.
Also, patching therapy, even when successful, cannot prevent recurrence when an amblyogenic factor presents again. Recurrence rates are particularly concerning for esotropes treated early in life, where more than half were found to require follow-up patching.
OCT (Optical Coherence Tomography) has been used to measure the retinas of amblyopic and fellow eyes and to compare them with those of control groups. Structurally, the retinas of those who develop with an amblyogenic factor do not differ.
An inspiration for amblyopia treatment has come from restricted rearing experiments. One example used restricted rearing in cats: one kitten was allowed to move freely while another was passively moved in a way yoked to the active animal's movements. The active animal showed a reduced effect of restricted rearing compared to the passive animal. The question for amblyopia treatment then becomes whether active vision therapy will be more successful than passive patching therapy.
We saw that with patching therapy, gains in resolution acuity of the amblyopic eye came with losses in resolution acuity in the fellow eye. One alternative method is to use binocular tasks where one eye is penalized with red-filtered glasses. The benefit of this approach is that it is easy to do outside of a clinical setting and takes advantage of children's natural active play.
Unfortunately, to date, the evidence from large randomized controlled trials for active vision therapy being a valuable addition to patching therapy is weak. However, the inclusion criteria for these studies (i.e., that patients must show some stereopsis before beginning treatment) make their generalizability more limited than it otherwise could be.
In amblyopic observers, we saw that research showed they could be trained to improve their ability to recognize crowded letters. One promising avenue of amblyopia treatment was revealed by training amblyopic adults to perform a stereo vision task. In this task, observers had to fuse two stimuli to create a binocular percept in which one of two gratings was perceived as closer in depth than the other. The crucial difference between this study and others was that the contrast of the stimulus delivered to the fellow eye (or dominant eye, DE) was reduced. Similar to patching therapy, this penalizes the fellow eye, but not as severely as removing visual input. Also, both eyes receive stimulation, which is more promising than monocular patching therapy.
Surprisingly, adults with amblyopia were able to learn how to do this task. Recall that the spatial phase mechanisms required for the stereo matching in this task (aligning the light/dark bars) are poorly developed in amblyopes. The improvement in this task is shown with the stereo threshold (Y-axis, how much difference there must be between the closer grating and the other grating) versus the number of trials. Amblyopes start worse and gradually improve over a large number (thousands) of trials.
While an extended treatment duration may seem daunting to apply in the clinic, keep in mind that this training was done with adult amblyopes who have developed beyond the critical period. More daunting is the inability of the stereo training to transfer to a Vernier acuity task. The Vernier task was used because it relies on the spatial phase or position mechanisms that this task trains.
After this result was introduced to the literature, work began transferring it to the clinic, as shown in the stimulus in the slide “Dichoptic Motion Training”. This task is similar to the random dot motion and structure from motion tasks that we have seen in this course. While more work needs to be done, this task, restructured to suit children who are still within the critical period for binocular vision, may translate to improved outcomes for amblyopia.
Along with the research that showed impaired hand-eye coordination, active treatments that exercise hand-eye coordination may be worthwhile. One device called the Wayne Saccadic Fixator simultaneously trains hand movements and saccadic fixation.
Given advances in the technology of 3D movies and virtual reality systems, researchers interested in developing new amblyopia treatments have turned to these devices as possible ways to deliver content that encourages the development of the amblyopic visual system toward more typical outcomes. However, these systems remain in their infancy and have not yet produced treatments that reliably recover stereo vision for amblyopes.
Can we take inspiration from animal experimentation rather than technology to consider novel treatments for amblyopia? For example, one ongoing line of research comes from the dark-rearing literature; perhaps dark-rearing can promote plasticity in the amblyopic visual system and improve patient outcomes. However, in one small clinical trial, the evidence was weak, thus making it difficult to justify using dark-rearing therapy.
Serendipity often plays a role in discovering new therapeutic approaches: observational data from children treated for epilepsy and depression are of interest because some of these treatments are known to change the plasticity of the central nervous system. Whether this is an avenue of research that ought to be pursued, or is worth pursuing, is a separate question.
The final slide of this section, “Amblyopia treatment aims - difficult,” outlines the challenges faced by any research program that aims to develop improved treatments for amblyopia. For example, traditional patching therapy is inexpensive and relatively easy, but it has limits, as the research in this part of the course has shown. Also, as we saw with monocular deprivation, patching therapy provides its own source of atypical input, altering the developmental path of vision.
--- END AMBLYOPIA TREATMENT RESEARCH AND DEVELOPMENT ---
--- EMMETROPIZATION ---
Emmetropization is defined as the development, or growth, of the eye to a length that matches its optical power. As shown in the graph, hyperopic refractive error is common, and expected, in the early stages of development.
However, in recent decades the failure to emmetropize is becoming more common. Myopia, commonly caused by an eye that is too long for its optics, is becoming more common worldwide. The term “myopia boom” has been used to describe the increasing prevalence of atypical emmetropization.
While heredity influences the failure to emmetropize, the rapid increase in prevalence and estimates of heritability (approximately 25% if one parent is myopic) do not explain the rise in myopia. Thus, there must exist myopigenic factors in the visual experience of modern life that guide the eye away from emmetropia.
The rapid rise in the prevalence of myopia presents us with a puzzle. What is atypical about the visual experience provided by modern life that leads to the failure to emmetropize for a greater number of people? To provide a spoiler: while the prevalence of myopia is well documented, the atypical visual experience that causes myopia has not been firmly established. There are candidates and theories, but there are inconsistencies between animal models using restricted rearing, and human work often highlights failures of treatments to control the progression of myopia.
Let us look at how the prevalence of myopia has increased in the past and at predictions for the future. In the slide “The prevalence of myopia in the past,” we see data for 7- through 18-year-olds from 1983 through 2000. Note that the researchers used cross-sectional estimates of myopia. Myopia prevalence is lower for the earlier cohorts in the 1980s. Still, as we move toward 2000, myopia prevalence is higher at earlier ages and becomes more common among the patients observed in the study.
In the slide, “The distribution of refractive error,” the data from the last plot are broken down into the twelve-year-old group (recall puberty and pre-critical period closure) and eighteen-year-olds (a group that has reached stable refraction). In the 1980s, forty to fifty percent of the sample (Y-axis) was emmetropic, whereas in the 1995 and 2000 cohorts, 0 to -3 diopters of myopia (X-axis) is increasingly common. For eighteen-year-olds, 0 to -3 diopters is not uncommon (note the difference in Y-axis scale from the twelve-year-olds), but moderate and high myopia also become common.
What would we expect to see if these trends continue? The data presented in the slide “Prevalence and predictions for the future” ought to be viewed somewhat skeptically, because predictions are difficult (especially about the future). This graph has two Y-axes. Because the absolute number of myopes depends on the population studied, the numbers are meaningless without context (left Y-axis). However, the right Y-axis (prevalence) agrees with the data presented in the previous slide. The data on the X-axis are categorized by age in bins of four years. The black bars represent the cohort of myopes in the year 2000; the black line rises until age 20 and then levels off at a high prevalence. The grey bars embody the assumption that the rate of myopia will increase (note how the grey bars are higher than the black bars), and we see an increased number of myopes as the data extend to 2050. The grey line depicts the corresponding prevalence under an assumed population increase.
Two assumptions underlie the 2050 prediction. The first is that the rate of myopia will continue to increase, which is not a given. The second is that the population will increase.
Why is an aged population of myopes an issue? Myopia, particularly high myopia (6D or greater), increases the likelihood of developing eye diseases such as glaucoma or age-related macular degeneration later in life. Both of these diseases are costly to society and resistant to treatment. The increased risk of glaucoma is roughly comparable to a smoker's increased risk of lung cancer.
Many explanations and theories have been proposed from animal models and from developmental work with children with myopia. The slide, “Why the boom? Ideas?” lists some promising ideas and findings. As mentioned previously, genetic factors alone are unlikely; however, epigenetics, the activation of genes in the presence of environmental factors, cannot currently be ruled out as a cause.
Many studies have measured the time children spend outdoors today relative to the past. Children across the world spend less time outdoors today than in the past. This observation has led to the idea that reduced exposure to bright light may be behind the myopia boom. A bright indoor environment is approximately 1,000 lux, whereas the outdoor environment is commonly 10,000 lux. The lower indoor light levels reduce the activation of the photoreceptors, which may not provide the environmental input required to maintain emmetropization mechanisms; alternatively, high light levels may be required to induce emmetropization mechanisms that slow eye growth.
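The indoor/outdoor difference above is easier to appreciate on a logarithmic scale, as light adaptation in the visual system is often described in log units. A quick illustrative calculation, using the approximate lux values from the text:

```python
import math

# Approximate illuminance values mentioned in the text: ~1,000 lux for
# a bright indoor space vs ~10,000 lux outdoors (direct sunlight can
# be far higher still; these are illustrative round numbers).
indoor_lux, outdoor_lux = 1_000, 10_000

ratio = outdoor_lux / indoor_lux      # outdoors is 10x brighter
log_units = math.log10(ratio)         # i.e., one full log unit

assert ratio == 10.0
assert log_units == 1.0
```

An order-of-magnitude (one log unit) difference in retinal illumination is plausibly large enough to change photoreceptor and downstream activity, which is the premise of the bright-light account of the myopia boom.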
One recent randomized controlled study from China (myopia progression is of particular concern in China and South Korea) placed children either in glass classrooms or in a typical classroom environment. While this is robust evidence, this one trial cannot specify which environmental factor of outdoor lighting is the ultimate cause of the increased prevalence of myopia. However, the study points to natural light as a robust developmental input to the emmetropization system.
Lastly, the indoor environment can be considered an atypical visual experience akin to a restricted rearing experiment. Indoor scenes lack the varied detail of natural outdoor scenes, and the range of motion indoors is limited.
Puzzlingly, time outdoors at very low light levels (e.g., under a dark sky with only starlight), like dark rearing, can also disrupt eye growth and promote myopia.
Another theory is that increased near-work (e.g., reading and using computers) provides an atypical experience that promotes myopia. Increased near-work places atypical accommodative demands on the developing eye. One population study (particularly a sample of males at a rabbinical school in Israel) supports the near-work theory. However, accommodation theories have the weakness that screen time has increased substantially since the beginning of the 21st century. While accommodation likely plays a role in the failure to emmetropize, the evidence does not appear strong enough for it to be the sole factor. Also, late-onset myopes tend to have reduced tonic accommodation, which is inconsistent with the near-work theory.
The modern world is atypical in that there is often light pollution at night, which can disrupt circadian rhythms. The eye contains mechanisms, particularly in the choroid, that track the day/night cycle. The choroid thickens and thins depending on the time of day. Disrupting circadian rhythms with light at night disrupts the choroidal rhythm.
Scleral remodeling is associated with myopia. However, whether remodeling is a cause of myopia or a consequence of dysregulated eye growth remains poorly understood.
The slide “Eye growth and visual guidance” summarizes the challenges faced by the emmetropization system during development. Given the role of the environment and difficulty controlling and tracking visual input in humans, we will discuss several animal models and methods.
Given the available data, there is one consistent recommendation that both researchers and clinicians can make: children vulnerable to dysregulated emmetropization, particularly those with at least one myopic parent, ought to spend more time outdoors.
The graph in the slide “Sunlight” shows the normalized power spectra of fluorescent light, tungsten light, and sunlight. The data are normalized such that the radiant power of sunlight is set at 100, so the sunlight spectrum appears constant with wavelength. Fluorescent bulbs (like those in our classrooms and offices) vary in their output across the visible spectrum; note the peaks and troughs in the solid line. Spending time under fluorescent light deprives the eye of robust input wherever the solid line falls below the dotted sunlight line. The dashed line shows the spectrum of a tungsten bulb; note the lack of short wavelengths and abundance of long wavelengths. Despite their pleasant yellowish hue, tungsten bulbs deprive the eye of short wavelengths and over-represent long wavelengths.
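The "deficit" idea in the slide can be sketched as a toy comparison. The spectral values below are invented for illustration only (the real curves are in the slide); the point is simply that an artificial source is "deficient" at any wavelength where its normalized power falls below the sunlight line.

```python
# Hypothetical normalized radiant power (sunlight fixed at 100) at a few wavelengths (nm).
# These numbers are illustrative assumptions, not measured spectra.
sunlight    = {450: 100, 550: 100, 650: 100}
tungsten    = {450: 20,  550: 60,  650: 110}  # deficient in short wavelengths
fluorescent = {450: 80,  550: 120, 650: 40}   # peaks and troughs across the spectrum

def deficient_bands(source, reference):
    """Return the wavelengths at which the source falls below the reference spectrum."""
    return [wl for wl in sorted(reference) if source[wl] < reference[wl]]

print(deficient_bands(tungsten, sunlight))     # short and middle wavelengths
print(deficient_bands(fluorescent, sunlight))  # the troughs of the fluorescent spectrum
```

With these made-up values, the tungsten source is deficient at the short and middle wavelengths, while the fluorescent source is deficient wherever its spectrum dips into a trough.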
Another difference between the indoor and outdoor environments is the light spectrum: artificial light differs from natural sunlight. Sunlight contains a broad range of wavelengths, from short to long, providing a robust environmental input to the developmental mechanism that controls eye growth. Indoor lighting, particularly soft indoor lighting, is deficient in short wavelengths, a deficiency that gives soft light its yellow hue.
The light levels indoors and outdoors differ by orders of magnitude, and the visual system must adapt. For example, if one used a light meter to compare most white surfaces indoors with a black surface outdoors, the black surface outdoors would reflect many more photons. The difference in light levels creates different patterns of retinal activation, and the developing visual system must account for them. In the context of emmetropization, the robust signal from outdoor light is known to activate biochemical mechanisms (e.g., dopamine signaling) more efficiently.
Effectively, spending time indoors is similar to a natural experiment in restricted rearing: the indoor environment does not provide a robust input to the developmental filters that modulate eye growth.
In the chicken, a classic experiment demonstrated the influence of degraded visual input on eye growth. The chickens wore a diffuser (a frosted lens) over one eye; a diffuser reduces the contrast and sharpness of the retinal image, so the eye is monocularly deprived. The effect of the diffuser can be compared against the control eye, which does not wear one. This classic result, replicated across vertebrates, shows that the eye receiving the atypical, monocularly diffused visual experience grows too long for its optics.
We have seen how monocular deprivation (MD) experiments have been a building block for knowledge about the visual system's development. A classic MD experiment stumbled upon the finding that the monocularly deprived eye, in a lid-sutured paradigm, grew much longer than the non-deprived eye. Nobel prize winner David Hubel was a careful scientist whose primary interest was the primary visual cortex; nevertheless, he took note of this unexpected finding, which touched off work on emmetropization.
For MD monkeys, the deprived eye was longer, and the growth rate of the deprived eye remained elevated even after the deprivation was removed. Thus, the atypical visual experience of lid-suturing induces eye growth, and the induction persists even after the atypical experience is removed. The MD eye has been set on an altered developmental trajectory.
Hubel and his co-workers then set out to determine if the changes in growth rate in the sutured eye resulted from having typical experience in one eye and atypical deprived experience in the other. When suturing both eyes, the increase in the rate of eye growth persisted, which was the first hint that the developmental mechanisms that underlie emmetropization are functionally and anatomically early in the visual pathway.
Finally, to test whether it was the atypical visual experience through the sutured eyelid that altered the growth rate, lid-sutured monkeys were reared in the dark. These results touched off a research subfield studying the developmental process of emmetropization in animals.
We have already encountered the chicken model of emmetropization. There are advantages and disadvantages to any animal model system. For example, aside from ethical concerns, rearing primates is an expensive and time-intensive process even before the cost of experimentation is taken into account. Small mammals such as mice, rats, and cats develop quickly, but they have limitations: poor optics (mice), eye anatomy very unlike humans' (e.g., the tapetum in cats), or dichromatic vision with only two cone types (versus three for humans). As we saw with other developmental research in animals, it is a balancing act to translate findings from an animal model to humans.
Chickens are an animal model system that researchers at NECO use. They are friendly, easy to work with, and fundamental work on emmetropization has been done with chickens. Also, unlike most mammals, chickens are not dichromats. They are tetrachromatic (they have a cone sensitive to UV light), which is important for work investigating the differences among visual inputs to determine why sunlight guides emmetropization effectively.
A lens experiment places either a lens of positive power (myopic defocus) or negative power (hyperopic defocus) in front of one eye while the other eye receives a control lens with 0D power. The slide, “Lens Experiments: Compensation for defocus,” shows a cartoon summary of the effects of positive and negative lenses. The positive lens is drawn in blue and the negative in orange-red. In panel A of the cartoon, we see an object in the visual world is focused in front of the retina with a positive lens and a negative lens focuses the image behind the retina. Note that in the cartoon, three structures are listed: sclera (blue), choroid (purple), and retina (orange).
In response to positive lenses, the eye slows its growth, as shown in panel B, and the choroid thickens (note the increased thickness of the purple layer in the cartoon eye). The choroid is an essential, highly vascularized support tissue for the retina. It also shows an intrinsic circadian rhythm, which may be relevant to the increasing prevalence of myopia. Conversely, note in panel B that the choroid thins in response to hyperopic defocus from negative lenses (defocus behind the retina).
Thought questions: You are prescribing the first pair of glasses to a child who presents with myopia in the clinic. Their eyes are too long for their optics. Is it reasonable to expect that changing the developmental input from defocused to clear focus will have no effect on development? What if a focused image is interpreted by the developmental filters of a myope's emmetropization mechanisms as myopic defocus? In this thought experiment, would one expect myopia to progress?
The slide “Monocular lens paradigm” shows the canonical result for lens experiments. On the X-axis, we have the power of the lens applied to the treated eye, from -10D to 30D. The two types of symbols (open versus filled) are not relevant for our purposes; the experimenters measured the axial length difference (Y-axis) between the treated and control eyes with two techniques (ultrasound, and calipers on eyes enucleated after the animal was sacrificed). The two measures largely agree, so we will focus on the general trends.
Note that the Y-axis is the axial length difference between the treated and control eyes. If the Y-axis value is positive, the treated eye is longer than the control eye. If the value is negative, the treated eye is shorter than the control eye. Thus, if the treatment lens has a negative optical power, then the treated eye will have a longer axial length, but if the power of the lens is positive, the treated eye will be shorter than the control.
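The sign convention above can be summarized in a small sketch. The helper function below is hypothetical (not from the slide); it simply encodes the compensation rule that the treated-minus-control axial length difference takes the opposite sign of the treatment lens power.

```python
def predicted_axial_difference_sign(lens_power_d):
    """Predict the sign of (treated - control) axial length after lens rearing.

    Negative lens (hyperopic defocus) -> eye grows longer  -> positive difference.
    Positive lens (myopic defocus)    -> eye growth slows  -> negative difference.
    Plano (0 D)                       -> no expected difference.
    """
    if lens_power_d < 0:
        return +1  # treated eye longer than control
    elif lens_power_d > 0:
        return -1  # treated eye shorter than control
    return 0

# A -10D lens should drive elongation; a +10D lens should slow it.
print(predicted_axial_difference_sign(-10))
print(predicted_axial_difference_sign(+10))
```

The rule is deliberately simplified: the real data show graded, nonlinear compensation across lens powers, but the sign of the response follows this pattern.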
The slide “Lenses - Monkey” shows the data for the monocular lens paradigm in the monkey model rather than the chicken. Note how both refractive error (first column of graphs) and axial length (second column) are measured, allowing us to relate the two.
The first row (panels A and B) shows the data for animals in the control group. The open and filled symbols are data from each of the two eyes. As the eye emmetropizes, the refractive error measurements decrease (Y-axis, left column), and axial length increases (Y-axis, right column, in millimeters). The second row shows the results from a monkey wearing a 3D lens on one eye for most of the first year of life. Note how the treated eye (open symbols) is more hyperopic and shorter than the control eye. Also, the positive lens affects both eyes (thought question: is this consistent with our discussion of local control of eye growth in later slides of this lecture? If not, how is it inconsistent?), causing both eyes to be more hyperopic and shorter than controls (+4D and 10 mm versus +2D and 10 mm). With the negative lens (panels E and F), treating one eye with a negative lens causes the treated eye to become myopic in both refractive error and axial length.
The slide “Marmosets show the same pattern with -/+ lenses” makes the same point but in a different primate animal model. The left graph shows vitreous chamber depth (a measure correlated with axial length) and refractive error. The open symbols show data consistent with what we saw in the previous slide. This experiment used a novel manipulation: instead of placing a positive or negative lens on one eye, one eye received a positive lens and the other a negative lens. Note how these animals developed longer eyes and more myopia than the negative lens and 0D (plano) lens groups. The atypical visual experience delivered by restricted rearing with +/- lenses is akin to development with anisometropia. (Thought question: if we looked at the brains of these developing monkeys, what might we expect to find changed in the ocular dominance columns?)
Taking a step back, we can think about how lens experiments fit with evolutionary theory. Given that, in general, all vertebrates show similar effects from lens rearing experiments, we can draw some inferences about the mechanisms that underlie emmetropization. Given the consistent results across species, we can infer that emmetropization mechanisms are evolutionarily old and seem developmentally tuned to the expected visual diet/experience that the animal will typically receive.
With the idea that evolution has preserved emmetropization mechanisms across vertebrates, researchers were nudged toward looking for mechanisms within the eye that guide emmetropization. The thinking goes, if the emmetropization mechanisms are old, let us look at the systems of the eye itself given that many functional and anatomical structures are preserved across vertebrates.
The slide “Local growth” shows a key result establishing that mechanisms local to (that is, within) the eye guide its growth. The experiment again used a lens-rearing procedure. The middle column of panels A and B shows the standard positive (green) and negative (red) lens-rearing results: positive lenses produce a more hyperopic refractive error.
Now compare the graphs where the lenses affected either the nasal or temporal retina. We see that the blur created by the lens only affects the deprived portion of the eye. We see the classic effects on refractive error and axial length in the deprived portion of the eye. In contrast, the non-deprived portion of the eye is relatively unaffected.
Local deprivation has a local effect, which is the first of two results establishing the local control of emmetropization. The second again used a partial deprivation paradigm (shown in the slide “Local control of eye growth after sectioning”) but also sectioned (lesioned) the optic nerve. These animals present as blind in the sectioned eye, yet the sectioned eye continues to show visually guided emmetropization: if either the temporal or nasal retina receives atypical experience, the corresponding portion of the eye still responds despite the sectioned nerve.
These two experiments demonstrate that visual experience acts on an emmetropization mechanism within the eye rather than through signals from the central nervous system.
The development of the eye can be disrupted, but what about recovery from disruption? If emmetropization could not act to correct eye growth once typical visual input is restored, then the enterprise of clinically controlling myopia progression would be hopeless. Emmetropization is an active process, and the rate of eye growth is adjustable by the developmental system. The slide “Active emmetropization - Recovery” shows control eyes (dashed line), which are emmetropizing typically. The solid lines show the eyes of animals exposed to visual form deprivation via diffusers (frosted lenses). All treated eyes became myopic after less than two weeks of form deprivation. However, once the diffuser was removed, the emmetropization system used the visual input to signal the eye to slow its growth. In the graph, this can be seen as the solid lines shifting from approximately -10D refractive error to values close to the control group (approximately 2D).
The tree shrew is another animal model system used to study the development of the eye. We have seen in this lecture that dark rearing can prevent the development of myopia in animals with bilaterally sutured eyes. Also, in general, we have focused on how dark rearing affects development by keeping the visual system in an immature state. However, in the slide “Dark rearing and myopia,” we can see that dark rearing does not affect the development of the eye in a way similar to what we have seen for the primary visual cortex.
In the graphs in panels A through F, we see the refractions of tree shrews from the day of the onset of visual experience through day 60. The greyed regions, corresponding to days 12-22 in panels A, C, and E, days 6 through 16 in panel B, and days 20 through 28 in panel D, represent days when the animals were dark-reared. Notice that in all of these animals, negative refractions resulted, indicating that dark rearing disrupted development and led to increased eye growth; however, the age at which the atypical visual input is delivered matters. To see how it matters, we will return to the concept of the critical period.
The researchers used data from this paradigm to establish a critical period for emmetropization in the tree shrew. In the slide “Tree shrew - emmetropization,” we see the age of the animals plotted in weeks versus the elongation rate. The open circles show the normal elongation rate of a tree shrew. Note that early in life, the eyes are elongating (growing) rapidly, but the rate slows to near zero after sixty days of visual experience.
The black dots in this slide show the susceptibility of the elongation rate (and refractive error) to alteration by dark rearing. If dark rearing is applied early in development (before two weeks of visual experience), the development of myopic refractive error will be minimal. As we have learned, this pattern of data corresponds to the pre-critical period. Likewise, if the animal is placed in a dark-rearing environment after approximately ten weeks of age, there is again only a modest effect of the atypical visual input, because the emmetropization mechanisms have begun the period of closure. However, between 15 and 45 days of visual experience, dark rearing can disrupt emmetropization, leading to myopic refractions. When the ages of the tree shrews in this experiment are translated into human ages, the result is a critical period well within the window when patients present with myopia in the clinic (approximately 7 - 15 years).
Recall that one of the effects of dark-rearing is that it disrupts circadian rhythms. That dark rearing disrupts emmetropization provides evidence that circadian rhythms are required for normal emmetropization.
Accommodation was on our list of possible factors that influence emmetropization. Near-work is challenging for the developing accommodative system, and the accommodative changes that arise from increased near-work may induce myopia. The role of accommodation has been explored in those who present with myopia relatively late in life, as shown in the slide “Low Tonic accommodation – late-onset myopia.”
Tonic accommodation, the resting state of accommodation, was measured in a sample of individuals (N = 62). The graph on the left shows the observed frequency of tonic accommodation measures; the distribution is centered around one diopter. To reveal the role that tonic accommodation might play in the development of myopia, the researchers grouped their sample into late-onset myopes, early-onset myopes, emmetropes, and hyperopes. Early-onset (childhood) myopes did not differ in their tonic accommodation. Hyperopes have a high level of tonic accommodation, presumably to compensate for their reduced axial length. Late-onset myopes showed the reverse pattern: tonic accommodation was relatively low. Thus, for late-onset myopia, accommodation plays a role in the developmental mechanisms that underlie myopia.
The choroid is a highly vascularized tissue that supports the functioning of the retina. The slide “Changes in the chick model” shows a cross-section of the chicken eye in panel A and the retina in B. In the cross-sectional images, notice how much thicker the choroid is in the eye and retina images on the right. This difference is the typical response to positive lens wear and is known as choroidal compensation. In response to focus in front of the retina (myopic defocus), the chicken's choroid compensates for the defocus by thickening. The reverse is the case for an image focused behind the retina.
Choroidal compensation may be an intermediate step that accelerates eye growth during emmetropization. First, the choroid compensates for defocus by thickening or thinning; then, the choroid will return to baseline thickness after the sclera changes.
It is worth noting that accommodation, choroidal compensation, and altering the rate of eye growth are all methods of compensation for retinal defocus. Accommodation is a fast-acting mechanism, whereas choroidal compensation operates over longer time scales. Changing the eye growth rate is a longer-term compensatory mechanism.
The slide “The effect of the choroid on compensation?” shows a model within which we can think about the role of choroidal responses during development. As the organism develops (the X-axis is time in arbitrary days), defocus could be abruptly induced by lens wear, and periods of defocus could also occur if the eye fails to grow toward emmetropia. That the choroid can thicken or thin produces tolerance to such periods of defocus.
Let us look at how choroidal compensation could guide development. In the slide “Integration of defocus signals?” we see two alternatives represented in graphs A and B. As an animal develops over time, there may be periods of defocus. Graph A shows the blur signal in an eye without defocus compensation: the signal from the atypical experience of blur accumulates over time. Graph B shows the blur signal with a choroidal compensation mechanism added: the accumulated blur signal is present for brief periods and then lessens with the compensation. One hypothesis is that the changes to the choroid activate a signaling mechanism; thus, with each round of choroidal compensation, the eye receives instructions on whether to increase or decrease its growth rate.
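The two alternatives in graphs A and B can be pictured as a simple accumulator. This is a toy model with made-up parameters, not a fit to the slide's data: without compensation the blur signal sums over time, while with choroidal compensation a fraction of the accumulated signal is cancelled each day, keeping it bounded.

```python
def accumulated_blur(blur_episodes, compensation_rate=0.0):
    """Integrate a blur signal over development (arbitrary units).

    blur_episodes: per-day blur magnitudes.
    compensation_rate: fraction of the accumulated signal the choroid
    cancels each day (0 = no compensation, as in graph A).
    """
    signal = 0.0
    history = []
    for blur in blur_episodes:
        signal += blur                        # each episode of defocus adds to the signal
        signal *= (1.0 - compensation_rate)   # choroidal compensation decays the signal
        history.append(signal)
    return history

episodes = [1, 0, 0, 1, 0, 0, 1, 0, 0]  # brief periods of defocus during development
no_comp = accumulated_blur(episodes, compensation_rate=0.0)
with_comp = accumulated_blur(episodes, compensation_rate=0.5)
print(no_comp[-1], with_comp[-1])  # compensation keeps the accumulated signal small
```

With no compensation, the final signal is simply the sum of all blur episodes; with compensation, each episode produces a brief bump that then lessens, matching the qualitative shape of graph B.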
Our discussion of the choroid leads us back to the idea that circadian rhythms are involved in the development of the eye as it emmetropizes. In the graph, we see the change in choroid thickness during six-hour periods throughout the day. The dark bars show how the choroid has a normal rhythm of thickening and thinning: it is thickest during the evening and thinnest in the early morning hours (12 am - 6 am). The white bars represent a simple manipulation that the experimenters introduced; they added a period of light in the evening hours to simulate the artificial lighting we experience during the evening (especially in the winter). The addition of light at night changes the choroidal rhythm. Focus on the changes in the white bars in the 6 pm - 12 am and 12 am - 6 am groups: with light at night, the pattern of thickening and then thinning is reversed. The choroids are thinning in the evening and thickening in the early morning hours. The circadian rhythms of the eye are dysregulated with light at night.
Before we move on to exploring mechanisms of emmetropization other than the choroid, it is worth noting a few facts gleaned from lens-rearing experiments. These are listed in the slide “Non-linear effects of lens rearing”. The first is that if we rear a chicken with alternating positive and negative lens wear (of the same power), the effects of the blur signals do not cancel. Even if the negative lens is as much as five times the power of the positive lens (e.g., a +2D lens versus a -10D lens), the eye will still be guided to slow its growth. (Thought question: Why might this be the case? Could accommodation play a role?)
Another key finding is that if the positive/negative powered lens is periodically removed in a lens-rearing paradigm, the pattern of development changes. Even brief periods of typical visual experience reduce the effect of lens wear on eye growth. (Recall the effects of intermittent patching in monocular deprivation.) One possible explanation is that during typical visual experience the eye expects a wide range of focus; providing clear vision for even a few hours or minutes allows the system to develop in a more typical way.
Lastly, positive (myopic) defocus is a less natural stimulus than negative (hyperopic) defocus. During development, hyperopic defocus is expected by the emmetropization mechanism, perhaps because the young eye is typically too short for its optics. Thus, the eye expects to receive some input that is hyperopically defocused.
Following up on this last point, we see how a few hours of unrestricted vision can reduce myopia across several vertebrate species. With two hours of unrestricted, typical visual input, the effect on eye growth from rearing with positive lenses is abolished.
What is the mechanism within the retina that detects defocus and provides the developmental signals to guide emmetropization? The detection and discrimination of blur rely on cortical mechanisms that encode low spatial frequencies; however, given that emmetropization mechanisms are present within the eye, an anatomical/functional mechanism within the eye must also exist that can signal blurred vision. Surprisingly, the retinal cells that are the best candidates for blur detection were discovered only in the last five years.
A cell called the ON-delayed retinal ganglion cell (OND RGC) was discovered to have the characteristics of a defocus detector. The researchers used retinal electrophysiology to measure the firing of OND RGCs and compare it with the signaling of known RGC types. The slide “On-delay firing and spatial frequency” shows the stimuli used to discover OND RGCs and their firing pattern.
The stimuli they used were textures that contained either very fine detail (mostly high spatial frequencies, pattern in row A on the far left) or were manipulated to appear blurred (far right pattern in row A). Row B shows the firing pattern of OND RGCs for each of the patterns. In row B, the first dashed line represents when the stimulus was presented and the second when the stimulus was turned off.
There are two aspects of the firing of OND RGCs that allow them to signal blur on the retina. The first is that their response grows as patterns become coarser (dominated by low spatial frequencies). This can be seen in row B, where the finest pattern produces no response; as the texture is made coarser, the response becomes more robust. Note the delay between the pattern onset and peak firing (circled in red on the slide). Such delayed firing is atypical of RGCs, as they typically respond quickly to the onset of a stimulus.
Also, OND RGCs have a preference for spatial frequencies unlike that of classic center-surround ganglion cells. As we see in the slide “On-delayed compared w/other RGCs,” most ganglion cells have similar spatial frequency preferences to one another, while the OND RGC has a significantly different tuning (bar OND in the graph). Thus, the OND RGC satisfies the two properties researchers were looking for in a retinal blur-detection mechanism that could complete the picture of the local control of eye growth in response to defocus: first, it changes its signaling in response to blur, and second, it does so in a way that is distinct from other ganglion cells known to be involved in spatial vision.
The final slide on OND RGC shows the ON-delayed firing pattern compared with a classical ON center RGC in A. Panels C and D show how natural and blurred patterns differ in their appearance and spatial frequency content. Lastly, Panel B shows how the OND RGC circuit is designed to produce the properties required.
Let us walk through each panel. Panel A represents the stimulus presentation with the shaded yellow region. Note again that there is a considerable delay in the OND RGC firing, which changes based on the spatial frequency content of the pattern (as represented by each row of spike traces). Panel C shows a rainbow plot (or heat map) of the natural image, called a Fourier amplitude spectrum. It is created using Fourier analysis (recall the image of Alan Turing's face broken into spatial frequencies in the first section). Note the difference between the Fourier amplitude spectra in C and D: C contains a wide range of spatial frequencies, as can be seen by a large amount of the plot being red (or hot, in heat-map terms), while the rainbow plot in D contains only low spatial frequencies.
Now turn your attention to the curves in the two rainbow plots. Spatial frequencies outside this range will cause the OND RGC not to fire, but frequencies within this range will cause firing and alter the delay.
Lastly, panel B shows how the OND RGC operates. The OND RGC takes input from an array of spatially distributed cells (here labeled AC1 and AC2). The spatial distribution of the input creates these cells' preference for low spatial frequencies. The input cells also inhibit the OND RGC, and this inhibitory input causes the delay in its firing.
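The delayed-firing idea can be sketched with a toy model: sustained excitation arrives together with an inhibitory input that starts equal to the excitation and decays away, so the net firing starts near zero and rises with a delay. The parameters and the exponential decay are illustrative assumptions, not the published circuit model.

```python
def ond_response(excitation, inhibition_decay=0.7, steps=10):
    """Toy OND RGC: net firing = sustained excitation minus decaying inhibition.

    Inhibition from the amacrine-like inputs (AC1/AC2 in the slide) starts
    equal to the excitation and decays each time step, so the net response
    peaks only after a delay (assumed dynamics, for illustration only).
    """
    inhibition = excitation
    rates = []
    for _ in range(steps):
        rates.append(max(0.0, excitation - inhibition))  # net firing rate
        inhibition *= inhibition_decay                   # inhibition decays over time
    return rates

rates = ond_response(excitation=1.0)
print(rates[0], rates[-1])  # firing is near zero at onset and rises as inhibition decays
```

In this sketch, the response is zero at stimulus onset and climbs toward the excitation level, reproducing the delayed peak that distinguishes OND RGCs from classic fast-onset ganglion cells.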
For an emmetropization mechanism to guide development, it must discern whether the eye is too short or too long for its optics. Defocus from positive and negative lenses can create the same blur circle on the retina, so the emmetropization system must utilize additional cues. Our developmental question then becomes, “How does the eye determine the sign of defocus during development?”
One candidate cue is longitudinal chromatic aberration (LCA). The slide “LCA” walks us through how the retina could use the LCA cue to determine whether the eye is too short or too long for its optics. In the cartoon eyes labeled “Hyperopic defocus” and “Myopic defocus,” we can see the effect of LCA on which wavelength of light is in focus on the retina. With hyperopic defocus, short-wavelength (blue) light is focused while long-wavelength light is defocused, and the reverse is true for myopic defocus.
How could the differential focus of wavelengths in white light allow emmetropization to signal whether it should increase or decrease the growth rate of the eye during development? If we add the concept of cone contrast, that is, the contrast signaled by each of the short- (S), medium- (M), or long-wavelength (L) sensitive cones, then we can see how LCA affects the signal on the retina when light is defocused.
In the graph, the X-axis is defocus in diopters, and the Y-axis is the relative contrast signal, given LCA, for each of the S-, M-, and L-cones. At 0D of defocus, we have a pattern of S-, M-, and L-cone contrast that could serve as a target for the developmental filters guiding eye growth.
Now consider the defocus at the points labeled 1 and 2, and compare the cone contrasts for the S- and L-cones to that of the M-cones. The blue curve, representing S-cone contrast, is higher than the M-cone contrast, while the L-cone contrast is lower than the M-cone contrast. Lens rearing with a negative lens would then produce a pattern of cone contrast signaling that emmetropization should increase the eye's growth rate. The converse is the case at the points labeled 3 and 4: the S-cone contrast is much lower than the M-cone contrast, and the L-cone contrast is higher.
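A minimal sketch of how these relative cone contrasts could be read out as a sign-of-defocus signal is shown below. The decision rule and the example contrast values are illustrative assumptions, not the published model; the point is only that the ordering of S-, M-, and L-cone contrasts carries the sign information.

```python
def sign_of_defocus(s_contrast, m_contrast, l_contrast):
    """Infer the sign of defocus from relative cone contrasts under LCA (toy rule).

    Hyperopic defocus (image behind retina, e.g., negative lens):
      short wavelengths best focused -> S-cone contrast high, L-cone low -> grow faster (+1).
    Myopic defocus (image in front of retina, e.g., positive lens):
      long wavelengths best focused -> L-cone contrast high, S-cone low -> slow growth (-1).
    """
    if s_contrast > m_contrast > l_contrast:
        return +1  # hyperopic-defocus pattern: increase growth rate
    if l_contrast > m_contrast > s_contrast:
        return -1  # myopic-defocus pattern: decrease growth rate
    return 0       # in focus / ambiguous: no change

# Hypothetical contrast values for points 1-2 and 3-4 on the graph.
print(sign_of_defocus(0.9, 0.6, 0.3))  # negative-lens-like pattern
print(sign_of_defocus(0.3, 0.6, 0.9))  # positive-lens-like pattern
```

This also makes the next point concrete: a light source that inflates L-cone contrast (such as a tungsten bulb) would push the input toward the myopic-defocus pattern regardless of the eye's actual focus.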
Returning to our discussion of sunlight, we can see how light sources with spectra unlike sunlight could disrupt the LCA mechanism. For example, a tungsten light will produce more L-cone contrast via its increased power at long wavelengths.
One piece of evidence (shown in the slide “LCA Evidence: Methods”) that supports the idea that LCA could be used to discern the sign of defocus comes from restricted rearing experiments that used white and colored light. A simple experiment compared rearing with flickering white lights (the flicker produces a cone contrast signal) to lights restricted in wavelength, flickering from red to green or from blue to yellow. These are the rearing conditions experienced by groups 1 - 3. The other groups served as control conditions.
The prediction from the LCA cue is that both of the flickering colored lights ought to provide an atypical experience to a mechanism that detects the LCA signal described in the previous slide. The results from groups 1 - 3 are shown in the slide “LCA: Results”, and the prediction is borne out: birds reared under white light show less eye growth than those reared with restricted-wavelength experience. The key point is that if we disrupt the emmetropization mechanism by providing the atypical environment of rearing under colored light, the LCA cue is not useful and eye growth cannot be guided. Thus, the birds reared with the flickering red/green (R/G) and blue/yellow (B/Y) lights have longer eyes because the emmetropization system has been disrupted.
Insights into the development of myopia have come from attempts to treat and control myopia progression. One promising myopia control treatment comes from lenses that have varying optical power. The idea is that traditional spectacle/contact correction only corrects central vision and does not provide optimal correction to the periphery. The cartoon eyes in the slide “Peripheral focus lenses / contacts : scleral remodeling” show peripheral correction. One hypothesis is that the peripheral defocus from correction could cue the sclera to remodel and change. Newly developed lenses counteract this cue. Animal and human experiments have been promising, and one lens has been approved for use in myopia control thus far.
We will end with how myopia control treatments relate to the research we have looked at in this section. Progressive lenses (those usually used for presbyopes) were the inspiration for the novel lenses shown in the last slide.
Atropine has been used topically to help control myopia and slow the growth rate of the eye. We saw in the amblyopia section that monkeys with one eye treated with atropine developed atypically because of the blur the drug induced. However, providing atropine for brief periods may instead nudge the emmetropization system in a corrective direction, because the patient experiencing atropine blur may undergo choroidal compensation during the treatment. This is still an active area of research.
Finally, we will end with why the myopia boom remains an ongoing concern. Unfortunately, despite decades of research, no candidate treatment has been effective at abolishing the progression of myopia. Often a treatment will slow growth during development, but the eye will then continue to show progressive growth. Thus, while the treatments are promising and help guide the eye away from a state of high myopia, current treatments are insufficient. Perhaps the answer is as simple as spending more time outdoors?
--- END EMMETROPIZATION ---
--- AGING AS DEVELOPMENT ---
The development of vision continues through to adulthood. However, vision changes as we age as well. Teller and Movshon (1986) provide us with the pithy quote, “[T]hings start out badly [infants], then they get better; then, after a long time, they get worse again.” The quote reflects how visual development research has grown to include the development that occurs during the later stages of life over the last few decades.
We focused on the question, “How does vision typically develop from birth through adulthood?” in the first section. To conclude this section of the course, we will focus on what is typical visual development in later years. We will focus on how mechanisms change during aging and follow the same sequence of looking at the various stages along the visual pathway.
Demographics, especially the demographics of the western world, have spurred research in the development of aged vision. The proportion of the population 65 years or older is rising across the world.
Let us begin with the pupil. When an aging eye is examined, it will likely present with senile miosis. Compared to younger observers, the difference between the size of the dark-adapted and light-adapted pupil is reduced. With advanced age, there is generalized muscle atrophy, and the dilator fibers are especially vulnerable to atrophy. Senile miosis reduces the amount of light reaching the back of the eye and leaves the circle of blur mismatched for the light level.
There are also severe changes to the lens. The ability to accommodate to near stimuli is affected, and the thickness of the lens is increased. The lens yellows (absorbs blue light) and scatters light with advanced age. The slide “Increase in lens thickness” shows the difference between a young lens (A) and an older lens (B). The thicker and older lens in B also shows up more intensely in this image due to the scattering of light by the aged lens. Two slides entitled “changes in the lens” summarize facts about how the lens changes.
The key takeaway from the changes to the iris and the lens is that changes to the optics (size of the aperture and crystalline lens) reduce the amount of light available to the retina. The reduced retinal illumination has consequences for daily living and is a crucial variable to consider for researchers who study aging. When testing the vision of aged and young observers, a common control condition is to use neutral density filters to simulate the reduced light available at the retina.
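The light loss an ND filter simulates can be expressed as a simple power of ten. A minimal numerical sketch (with illustrative values, not data from any study discussed here):

```python
# Transmittance of a neutral density (ND) filter of optical density D:
# the filter attenuates light by a factor of 10**(-D), i.e., by D log units.
# The density value below is illustrative only.
def nd_transmittance(density):
    return 10 ** (-density)

# A 0.5 log-unit ND filter passes ~32% of the incident light, one way a
# researcher might simulate the aged eye's reduced retinal illuminance.
print(round(nd_transmittance(0.5), 3))  # ~0.316
```

This is why aging studies report light levels in log units: each additional 0.1 of filter density trims another fixed proportion of the light.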
When we move on from the optics and consider the retina, there is good news. Cone density does not change from age 27 through 90, and the number of foveal cones is unchanged. There is a decrease in rod density, but the aging visual system has developed a compensatory mechanism that enlarges the rod aperture, maintaining coverage across the retina.
Visual acuity can be corrected to normal until late in life, but the proportion of patients that can be corrected to normal halves past age 70. Even in corrected observers, resolution acuity decreases at low illumination or with low-contrast stimuli.
Recall that we saw a developmental time course when we looked at measures of the contrast sensitivity function. The slide “Contrast Sensitivity and Aging” compares CSFs measured during development with those across the decades of life. Note that changes in the CSF only begin to emerge in the 60-year-old cohort, but from 60 through 80 we see changes that are the reverse of those that occur during development: the high-frequency cutoff is reduced, and the peak spatial frequency (the spatial frequency observers are most sensitive to) is lowered. However, these changes are not as dramatic as the gains made early in the development of vision.
A common complaint of older observers is that they have difficulty seeing at night. These difficulties are observable when the scotopic (dark-adapted) contrast sensitivity function is measured. Measurable changes in the scotopic CSF appear in the oldest cohort (61-88 years old, open triangles in the plot). The cause of this decrease may be the changes in the rods that we noted earlier: rods become less able to collect light and detect stimuli efficiently. However, the reduction in the CSF is relatively modest (~0.2 log units) compared to the changes we saw early in life.
The remainder of the slides explore how research with aged monkeys compares with that of humans (for both spatial and temporal vision). The theme here is that work on the anatomy and function of the aged monkey visual system can guide researchers who study humans to look for deficits in behavioral tasks.
We learned that orientation tuning requires visual experience to develop. (Thought questions: What methods showed this? How is orientation tuning affected by atypical experience?) The key signature of orientation tuning that we saw with optical imaging is that cells respond preferentially to specific orientations; cells in the primary visual cortex will show a bias to respond to either horizontal or vertical orientations.
Recordings from macaque V1 show that more cells were untuned to orientation (low orientation bias, the solid curve in the graph) in aged versus young monkeys (grey curve in the graph). More than 80% of old monkey cells showed a selectivity (or bias) of less than 0.2, meaning they fired only 20% more to their preferred orientation than to non-preferred orientations. Young monkeys show a much larger percentage of tuned cells: 50% of their cells had a selectivity of 0.5 or above.
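One simple way to express the bias index described here is the proportional increase in firing to the preferred over the non-preferred orientation (a hypothetical sketch with invented firing rates; the original study's exact formula may differ):

```python
# Orientation bias as the proportional increase in firing rate for the
# preferred orientation relative to the non-preferred orientation.
# A bias of 0.2 means the cell fires only 20% more to its preferred
# orientation. The firing rates below are illustrative, not recorded data.
def orientation_bias(rate_preferred, rate_nonpreferred):
    return (rate_preferred - rate_nonpreferred) / rate_nonpreferred

weakly_tuned = orientation_bias(12.0, 10.0)  # bias 0.2, like many aged cells
well_tuned = orientation_bias(15.0, 10.0)    # bias 0.5, like many young cells
print(weakly_tuned, well_tuned)
```

On this reading, the aged cortex is dominated by cells like the first example, whose preferred-orientation advantage is small.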
These data lead to the prediction that perhaps aged humans would show deficits in discriminating different orientations. However, an alternative hypothesis could be that the visual system could maintain orientation discrimination abilities with only the subset of tuned cells (i.e., 20% of cells in the aged monkey that show orientation selectivity).
The slide “Aging & Monkey Orientation Tuning” digs a little deeper into orientation selectivity and how it changes in older monkeys. All of the cells in panels a - d show orientation selectivities from aged macaque monkeys. The cells in panels a and c retain a large response to vertical orientations and fire robustly when a vertical orientation is placed within the cell’s receptive field. The cells depicted in b and d are visual cortical cells that would be expected to be orientation-selective in younger monkeys; with age, however, they no longer preferentially respond to vertical orientations.
We have two competing hypotheses. The first hypothesis is that the decrease in orientation tuned cells will impact orientation discrimination in humans. The second is that the cells that maintain orientation selectivity are sufficient to maintain functional spatial vision for orientation.
The ability of aged observers to perform an orientation discrimination task was measured by placing a grating in a background of orientation noise. The slide “Orientation Tuning Task” shows examples of the stimuli presented to younger and older observers. Notice that the grating in A is much less visible than the one in C. The lack of interference in C is due to its orientation noise being concentrated around vertical orientations, which would not affect cells with a strong preference for horizontal (the orientation of the grating to be detected).
The slide “Orientation tuning curves” shows the logic behind using these stimuli, in a way similar to what we saw in the monkey orientation tuning curves. In each of the four panels A - D, we have two hypothetical orientation tuning curves. The solid curve is more selective for horizontal orientations than the dashed curve (i.e., the dashed curve responds more to orientations away from horizontal). The masking orientation noise will have a greater effect on a cell that lacks orientation tuning (or bias). How this works is shown in row C: the dashed curve is stimulated by the noise, whereas the solid curve does not respond to its presence. The method thus provides a behavioral metric of orientation tuning. The gap in orientation in the masking stimuli is referred to as a notch, and the notch width at which the noise begins to affect performance on an orientation task provides a measure of how selectively a younger or older observer processes orientation.
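The notch logic can be sketched numerically. Assuming (purely for illustration) Gaussian orientation tuning and noise spread evenly over all orientations outside the notch, a narrowly tuned mechanism is barely driven by the noise once the notch exceeds its bandwidth, while a broadly tuned mechanism is still strongly driven:

```python
import math

def tuning(theta_deg, sigma_deg):
    """Hypothetical Gaussian orientation tuning curve centered on 0 deg."""
    return math.exp(-theta_deg ** 2 / (2 * sigma_deg ** 2))

def noise_drive(sigma_deg, notch_halfwidth_deg):
    """Total response to notched noise: noise is present at every
    orientation except within +/- notch_halfwidth of the signal."""
    return sum(tuning(t, sigma_deg)
               for t in range(-90, 91)
               if abs(t) > notch_halfwidth_deg)

narrow = noise_drive(10, 30)  # sharply tuned mechanism (sigma = 10 deg)
broad = noise_drive(40, 30)   # weakly tuned mechanism (sigma = 40 deg)
print(narrow < broad)  # True: the broad mechanism is still masked
```

If older observers' tuning resembled the broad mechanism, they would need wider notches before the noise stopped hurting detection; that is the behavioral signature the experiment looks for.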
The slide “Aged humans orientation tuning results” shows the data from the experiment. The X-axis is the width of the noise notch, and the Y-axis the ability of the observer to detect the grating. The filled circles represent the data from older observers, and the open triangles the data from younger observers. Note the overlap of the data from the two age groups, demonstrating that older observers maintain a measure of orientation tuning. The lack of an age effect supports our second hypothesis: the remaining orientation-tuned cells in the aging visual cortex are sufficient to maintain spatial vision.
We saw that CSFs changed for older people above the age of 60. Two bar graphs in the slide “Monkey V1 – Spatial Frequency Tuning” show a result similar to what we saw for orientation. The older monkeys have fewer cells in their primary visual cortex tuned to high spatial frequencies. Also, the largest proportion of cells in older monkeys is tuned to lower frequencies than in young monkeys (marked by the two arrows).
The notched-noise paradigm used for orientation, in which a signal grating is embedded in noise, was also used with older and younger observers. The logic behind the experiment remains the same -- if older observers’ spatial frequency tuning changes, we should see noise with wider notches impacting the ability of older observers to detect gratings.
Again, the results show that the second hypothesis holds. The Y-axis is the same as in the previous graph, while the X-axis now represents the spatial frequency notch in the noise. Despite changes in the visual cortex with aging, the remaining tuned cells are able to maintain spatial vision. Thus, despite changes to the aging brain, the evidence shows the brain compensates and develops to maintain spatial vision, perhaps because it is fundamental to all other visual abilities (e.g., motion, stereo, object, and face recognition).
Given that we saw differences between spatial vision in the fovea and the periphery (in development with monocular deprivation experiments and in amblyopia), we can now ask whether aging differentially affects the detection of stimuli in the periphery. The slide “Aging and perimetry” shows the results of a study using a standard clinical tool, the Humphrey visual field analyzer (HVF). Note that all the participants in this study were screened to be typical, with no glaucoma suspects or patients with a glaucoma diagnosis. One measure of visual function that combines sensitivity across the visual field is average sensitivity. This coarse measure of visual function provided by the HVF is plotted on the Y-axis; the X-axis is the age of the patient in years. On this measure, there are modest declines in visual function from 20 through 80 years of age.
However, if the type of perimetry is changed to short-wavelength automated perimetry (SWAP) instead of the white-on-white stimuli used by the HVF, we see a substantial (approximately 15 decibel) decrease in sensitivity. What might be driving this change? As we saw in our discussion of the lens, it yellows and absorbs short-wavelength light, so one might suspect the deficit is purely optical. However, the SWAP data plotted in this graph have been corrected for these known lens-based reductions, which indicates that the reduced peripheral sensitivity arises from mechanisms beyond the optics.
The slide “Visual field sensitivity loss” partitions the overall changes that we saw in the last two slides into regions of the visual field. The darkest bars represent the central 0 - 10 degrees, the second grey bar 10 - 20 degrees, and the lightest bar 20 - 30 degrees in the periphery. Four different perimetry techniques are shown, but focus on STD (a standard HVF) and SWAP cor, the two measures we discussed in the previous slides. The Y-axis is now the slope (the downward trend) we saw in the last two measures; that is, the slope tracks sensitivity losses per decade of life (how rapidly sensitivity declines with age). The HVF sensitivity declines as we move from the central (0 - 10 deg) to the more peripheral measures, but not as dramatically as SWAP cor. Note the difference between SWAP unc (the SWAP measure without correction for the yellowing of the lens) and SWAP cor: the uncorrected version shows much more substantial decreases with each decade of life, but this is due to the known changes to the lens.
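The slope plotted here is essentially a linear fit of sensitivity against age, rescaled to decades. A minimal sketch with invented numbers (not the study's measurements):

```python
# Sensitivity loss per decade as the slope of a least-squares line fit to
# (age, sensitivity) pairs. All numbers below are invented for illustration.
def loss_per_decade(ages, sensitivities_db):
    n = len(ages)
    mean_age = sum(ages) / n
    mean_s = sum(sensitivities_db) / n
    slope_per_year = (
        sum((a - mean_age) * (s - mean_s)
            for a, s in zip(ages, sensitivities_db))
        / sum((a - mean_age) ** 2 for a in ages)
    )
    return slope_per_year * 10  # convert dB/year to dB/decade

ages = [20, 40, 60, 80]
sens = [32.0, 31.2, 30.4, 29.6]  # a steady 0.8 dB drop every 20 years
print(loss_per_decade(ages, sens))  # -0.4 dB per decade
```

A more negative slope on the bar graph simply means this fitted line falls faster with age for that region and technique.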
One clinically significant measure (used as a driving screening test in some states, such as Michigan) attempts to identify individuals with reduced spatial vision functioning in the periphery. The observer (or aged person attempting to renew their license) must distinguish between a cartoon truck and a car. Two objects are shown, one in the central visual field and the other in the peripheral visual field. This test is known as a useful field of view (UFOV) task. The UFOV is defined as the ability to reliably distinguish objects in both central vision and the periphery simultaneously. If the observer has preserved visual functioning in the periphery, they will easily distinguish objects in the center and periphery.
Generally, we have seen that spatial vision is preserved into old age. As we saw in children's development, spatial vision (in the ventral or “what” pathway) tends to develop first, and temporal vision (in the dorsal or “where” pathway) develops later. The remainder of the material in this topic will focus on temporal vision and how it changes with aging.
Recall that in the first section we saw that motion sensitivity is often measured with random dot patterns. A still frame of a random dot motion stimulus is shown on the left of the slide “Aging and Motion Sensitivity”. On the right are data from different age cohorts in their twenties, thirties, forties, fifties, sixties, and seventies. The graph shows motion sensitivity (higher is better) versus the duration (presentation time) of the random-dot stimuli. Note that all the cohorts below 70 years of age are approximately equally sensitive to motion. However, for those 70 years and older, motion sensitivity is reduced.
The slide “Monkey Motion: Results” returns us to the single-cell data for area MT (an area of the brain in the dorsal pathway known to process motion). Graphs A and D show motion tuning curves for young and old monkeys for cells tuned to leftward motion. As we saw with orientation tuning, the young monkey cells are more selective for motion direction. Tuning arises from inhibitory processes: in the young monkey, inhibition reduces the response of motion-tuned cells to directions other than leftward. The broader tuning of cells in older monkeys arises from reduced inhibition to motion directions other than the preferred direction.
For motion, unlike spatial vision, we see a consistency between the human data (reduced motion sensitivity in observers aged 70+) and motion tuning.
The perception of the speed of moving stimuli is a crucial ability for older observers to maintain. Older people who remain active are known to have a better quality of life and longer health spans (the portion of the life span during which a person remains healthy and active). The slide “Aging and speed perception” shows a frame of a task used to measure the speed perception of older observers. The random dots move at a base or standard speed (e.g., 5 degrees per second), and either the top or bottom set of dots moves slightly faster. The speed increase needed to reliably detect a change in speed is expressed as a Weber fraction (the change in speed required divided by the standard speed).
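The Weber fraction is just a proportional threshold. A minimal sketch with hypothetical speeds (not measured thresholds from the study):

```python
# Weber fraction for speed discrimination: the just-detectable change in
# speed expressed as a proportion of the standard speed. The speeds below
# are hypothetical examples.
def weber_fraction(standard_speed, just_detectable_speed):
    return (just_detectable_speed - standard_speed) / standard_speed

# If a 5 deg/s standard must rise to 5.5 deg/s before the observer reliably
# notices the difference, the Weber fraction is 0.10 (a 10% change).
print(weber_fraction(5.0, 5.5))
```

A constant Weber fraction across standard speeds is what the next slide tests: it would mean any age-related deficit is proportional rather than specific to fast or slow motion.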
The slide “Aging & Speed Results” shows data from older observers (open circles) and younger observers (filled circles) at three different standard speeds from slow to fast. Two aspects of the data are worth noting. First, older observers require a larger increase in speed at all standard speeds (a higher Weber fraction). However, the increase required does not vary with the standard speed, indicating no speed-selective deficit. That is, healthy older observers are not particularly impaired at fast speeds.
The slides “Monkey MT – speed tuning (random dots)” and “Monkey MT w/Age” return us to single-cell recordings of speed-tuned cells in the monkey. Again, unlike spatial vision, there is consistency between the work with single cells and human behavior. One hypothesis is that temporal vision is more vulnerable to the changes that occur with aging because abilities that emerge late in development are the first to show declines. This hypothesis is referred to as the “last in, first out” hypothesis. The rationale is that visual abilities with a long developmental time course require more visual processing, and thus more compensation than the aged visual system can maintain with visual experience.
We noted that tuning for motion relies upon inhibitory processing. (Thought questions: What other visual abilities require inhibition? Do inhibitory processes emerge early or later in development? What processes in emmetropization require inhibition?) The slide “Aging in Monkey V1 and inhibition” shows a key finding in aging that demonstrates the crucial role inhibition plays in spatial vision. Note the panels labeled G and H. These are orientation tuning responses of young monkeys that are very selective for vertical orientations. In the graph labeled A, we see the response of an untuned cell in an aged monkey. The neurotransmitter gamma-aminobutyric acid (GABA for short) is an inhibitory neurotransmitter. If the untuned older cell is treated to increase inhibition via increased GABA activity, the formerly untuned cell becomes tuned, resembling the tuned cells of the young monkeys.
The following slides describe the behavioral evidence in humans that implicates a decrease in inhibition in older observers. The slide “Motion Size/Contrast interaction as a proxy for inhibition” shows still frames of a task thought to tap inhibitory processes. The amount of time observers require (the duration threshold) to distinguish between a leftward- and a rightward-moving grating was measured at a range of sizes (note the difference between the top and bottom rows of pictures) and contrasts (the left and right columns).
As the data show, when contrast is increased for small gratings, the time observers require to judge leftward versus rightward motion decreases. However, as we increase the contrast of large gratings, the time younger observers require to discriminate motion increases. The idea is that large, high-contrast gratings reveal inhibition because the stimulus exceeds the spatial area over which motion signals are summed. Thus, given that older monkeys show reduced inhibition and this is a task where inhibitory processes impair visual performance, we hypothesize that older observers ought to be better at this task than younger observers.
The slide “Motion, inhibition and Aging” shows that this hypothesis holds. Stimulus contrast is shown on the X-axis and the duration threshold on the Y-axis. Note the pattern of results for younger (open triangles) and older (filled circles) observers in the top left panel (A) for the small stimuli. For all contrast levels, older participants require more time to discriminate between leftward and rightward motion. In panel D, which represents the data from the large gratings, we see that the open triangles are above the filled circles, indicating that older participants require less time than younger participants to process motion for large, high-contrast gratings.
Does this indicate a benefit of aging? Not really. The cartoon in the slide “Age, inhibition, and motion perception – a problem? Is better performance always good? Inhibition decay?” shows that while older observers may perform better in the size/contrast motion task, the loss of inhibition will impair their performance in more complex tasks, for example those in which they need to segregate moving objects from the background of a visual scene.
Motion perception comes in a variety of forms; the slide “Types of motion stimuli” shows three simple random-dot stimuli that tap distinct visual abilities. We have already discussed translational motion (e.g., left/right motion). Radial flow refers to the pattern of motion that occurs as a person moves through the visual world. Finally, biological motion, such as the information present in each of our walking patterns (known as gait), is another source of information that can be used to determine the actions others are performing or even the identity of people known to us. The movie shows an example of a point-light walker used to study biological motion. The next slide shows how each of these abilities changes with age. Row B shows the data for translational motion, where the X-axis is age and the Y-axis is detection threshold (larger values represent worse performance). As we have seen in previous slides, translational motion is impaired with age, particularly for the oldest observers. Radial flow and biological motion are more complex visual tasks, yet they show reduced impairment relative to translational motion. The preservation of radial flow performance raises the following question: are these abilities maintained with age because they are crucial for healthy aging? All of the observers in this research were healthy aged people; thus one question comes to mind: would we find that less mobile aged people are impaired on radial flow tasks?
As we close the book on aging, let us return to the active questions researchers are tackling that link aging to the development of vision. We learned about the critical period for the development of vision. Is there a comparable “sustainable period” for the impairments that develop with age? Establishing a sustainable period would require determining the visual experience required to maintain visual abilities at a given age. The logic is similar to that used to establish the critical period.
While we have seen some evidence that early development can help us predict when visual abilities will decline with age, a principle such as “last-in, first-out” is vague. More detailed research is required to compare the relative rate of decline observed in aging. Another complicating factor is that the aging visual system could recalibrate through learning to preferentially preserve specific abilities, effectively shifting resources from some tasks to others.
Finally, can we find behavioral interventions that slow the developmental decline that comes with aging? Developing interventions could make life better for the large population of aged observers that will appear this century.
--- END AGING AS DEVELOPMENT ---
--- COURSE SUMMARY ---
In the three sections of this course, we first introduced the developmental filter theory to give us a conceptual framework for thinking about development. Then we saw how humans develop from birth through adulthood; the theme was that basic visual abilities (e.g., spatial vision) emerge early, while more complex ones have a longer developmental time course. The second section followed the same structure as the first but focused on anatomy and function: how development is implemented in the central nervous system. The third section allowed us to look at three special topics: amblyopia, emmetropization, and aging, each of which informs us how visual experience and performance change with atypical experience, and even with the typical experience that occurs in healthy aging.