
Tuesday, November 30, 2010

Spaced Repetition for Learning Concepts


Spaced Repetition for Learning Concepts:
A new neurobiological foundation for research and a computer-aided means of performing said research.
“Practice makes perfect.” “Use it or lose it.” These are expressions students often hear from parents and teachers trying to persuade them to do their homework or practice the piano regularly. It is common knowledge that reading or studying a topic once and then putting it away until test time is a recipe for failure. This is why teachers assign homework. Many even claim the test itself is a learning tool. But are these notions myths, based on centuries-old traditions, or do they really work? If so, under what conditions? And how can study time be optimized so that students learn as much as possible in as little time as possible?
Ever since the late 1800s, researchers have been trying to answer these questions. Since that time, literally hundreds of studies have verified and reverified a principle that has come to be known variously as “spaced repetition,” “distributed practice,” the “spacing effect,” and other similar terms. In this paper, the term “spaced repetition” (SR) will be used to name the phenomenon wherein study “items with repetitions that are separated by time or other events are remembered better than items with repetitions that are massed, occurring in immediate succession” (Toppino & Schneider, 1999, p. 1071). For a phenomenon that “many researchers would consider […] to be among the best established phenomena in the area of learning and memory (e.g., Dempster, 1988)” (Toppino & Schneider, 1999, p. 1071), it is interesting that “neither American classrooms nor American textbooks appear to implement spaced reviews in any systematic way” (Dempster, 1988, p. 627).
By ignoring this research, I believe American educators are missing out on an important learning tool. Further, I claim that SR can be applied to the learning of complicated concepts – what Sarah D. Mackay Austin (1921) called “logical memory” and Danielle Mazur (2003) called “abstract learning” – in addition to the simple rote memorization of what I call “factoids”: simple word-pair or question-answer associations. Though extensive research over the past 130 years has confirmed again and again that SR works, researchers have had difficulty developing a reliable theory as to why or how it works (Dempster, 1988, p. 633; Mazur, 2003, pp. 3, 5). In addition, there are several legitimate criticisms of past and current research methods as well as of the practical application of spaced repetition in the classroom (Dempster, 1988, p. 627). One such criticism is that – despite the vast volume of research – very little of it has involved much more than the memorization of text. Only a very few studies have examined the potential of SR for learning concepts. As memory is very likely an evolved trait (Nairne, Thompson, & Pandeirada, 2007, p. 271), it makes little sense that there would have been evolutionary pressure to develop memory for word pairs but not for general concepts. In fact, it is reasonable to assume just the opposite. Therefore, I believe – as does Mazur (2003, p. 22) – that more research needs to be done on the application of SR to learning concepts.
By examining recent – and not so recent – research revealing how neurons in the brain actually form memories, I hope to provide a new foundation for SR research. Finally, by introducing a new computer-based system which can facilitate the learning of complex concepts while, at the same time, collecting the research data necessary to fine-tune the theory and its application, I hope to finally bring 130 years of research to fruition and usher in a new era of education.
Spaced repetition (SR) has – unfortunately, as we shall see – been given many different definitions by many different researchers. Generally, it means that study time is spread out among two or more discrete periods as opposed to doing all of the studying at one time, often called “massed” study (Cepeda, Pashler, Vul, Wixted, & Rohrer, 2006, pp. 354-355; Dempster, 1988, p. 627; Mazur, 2003, p. iii; Toppino & Schneider, 1999, p. 1071). However, the intervals which are considered massed or spaced vary greatly from one experiment to another and often overlap. Cepeda et al. (2006) chose to define massed study as less than one second elapsing between instances of studying the same factoid and spaced repetition as more than one second elapsing. On the other hand, Mazur gives as an example: “If a student solves five long division problems on one day and five others one week later, the strategy is distributed. If the student solved all 10 problems on the same day, it would be considered massed practice” (2003, p. 1), regardless of how much time passed between working each problem during the “massed” study. Many researchers have considered the reading of long lists of word pairs over and over again to be massed repetition even though quite some time passed between returns to any specific word pair. Nellie Perkins considered one day between study sessions to be massed repetition and an interval of two to four days to be spaced (1914, cited in: Mazur, 2003, p. 6). As you can see, the time frames have been all over the map. On top of that, some researchers have used the term “spaced repetition” in experiments where the test consisted of providing multiple instances of the stimulus separated by a time interval and then looking for the response (Roberts, 1974). In effect, the experimenters were asking the question twice rather than teaching the behavior twice. It is actually quite amazing that – despite the wide range of time frames used – the results of all these experiments have been so consistently positive (Cepeda et al., 2006, p. 358).
The first experiments were carried out by Hermann Ebbinghaus, starting in 1879, using nonsense syllables (1885). Ebbinghaus demonstrated what many of us already know: that memory falls off quickly at first and then at an ever-decreasing rate. Though he may never have drawn it himself, Ebbinghaus is famous for the “Ebbinghaus Forgetting Curve” – sometimes called a “retention curve” – which looks like this:
[Figure: The Ebbinghaus forgetting curve (Robertson, 2008)]
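To make the shape of that curve concrete, here is a minimal sketch in Python, assuming the simple exponential-decay form that is often used to illustrate Ebbinghaus-style forgetting; the stability constant below is an arbitrary illustrative value, not a figure Ebbinghaus reported.

```python
import math

def retention(t_hours, stability=24.0):
    """Approximate fraction of material retained after t_hours.

    Uses the simple exponential form R = exp(-t / S) often used to
    illustrate Ebbinghaus-style forgetting curves. The stability value
    is purely illustrative, not a measured constant.
    """
    return math.exp(-t_hours / stability)

if __name__ == "__main__":
    # Retention drops steeply at first, then levels off.
    for hours in (0, 1, 6, 24, 72, 168):
        print(f"after {hours:3d} h: {retention(hours):.0%} retained")
```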
Ebbinghaus was originally only interested in how long it took to forget something but, with some additional experimentation, he found that he could remember things better if he distributed his practice over several days rather than studying it all at once. This work was relatively quickly replicated by Jost (1897), who “found that the number of syllables rightly named increased progressively with the extent of the distribution, being greatest where the 24 repetitions were spread over 12 days” (W. G. Smith, 1897, p. 683), and by Thorndike (1912, cited in Cepeda et al., 2006, p. 354), followed quickly by Darwin Lyon (1914a, 1914b, 1914c). Skipping ahead to more recent work, “Hellyer (1962) found that repetition of consonant trigrams raised the level of the short-term retention curve and decreased the rate at which the curve fell” (Roberts, 1972, p. 74). This means the memory was stronger after the repetitions than it was originally and that it then took longer to forget. Joel Zimmerman (1975) tested "spaced repetition" by having subjects memorize a list with 87 total occurrences of 42 words, arranged such that some words were repeated two or three times in a row (massed repetition) and other words were repeated two or three times but with either 3 or 14 other words between them, for an average total list study time of about 10 minutes. Though this only produced a "spacing" of 28 or 98 seconds, it still resulted in a significant difference in recall over the massed repetitions. Toppino and DeMesquita (1984) found that “spacing repetitions facilitated recall, and the function relating recall of repeated items to the spacing between repetitions was the same throughout the age range.” Experiments performed by Kitao and Inoue (1998) “showed significant spacing effects on implicit memory as well as on explicit memory.” In fact, there has been so much research on spaced repetition that even a cursory review of the literature is well beyond the scope of this paper. For an extensive list of citations, please refer to the reference lists of Mazur (2003) as well as Cepeda et al. (2006), who reviewed 427 articles, chose “a total of 317 experiments in 184 articles” on which to perform a meta-analysis, and found that “the average observed benefit from distributed practice (over massed practice) in these studies was 15%.” (Note: the articles were not chosen to influence the results; rather, they were chosen because they all followed similar protocols and were therefore easy to compare to one another.)
With all this support, you can imagine there are few criticisms as to the veracity of the claims made about spaced repetition. Winz (1931) did perform experiments wherein massed learning outperformed spaced repetition. However, I believe Winz's assumptions were incorrect. He could not have known it at the time, but the interval he allowed between spaced repetitions was far too long – long enough for the memories to have faded almost completely. In addition, the amount of material memorized was so small and the time between study and evaluation was so short that the "massed" learning results could be completely accounted for simply by the information still being in the subjects' short-term memory. It almost seems as if Winz specifically devised this test to contradict the work of Jost and Lyon.
Another criticism leveled against SR is that it has not been shown to work consistently in the classroom. Almost all of the positive results listed above were laboratory studies, where subjects memorized lists of either nonsense syllables, words, or word pairs. These are not the kinds of things teachers spend most of their time teaching in class. As Mazur wrote, “While studies such as these show the merits of spaced practice, these findings do not provide information useful to teachers who have entire lesson plans to prepare, often with too much material as it is” (2003, p. 1). Dempster writes, “Obviously, issues regarding the utilization of findings from basic research are complicated, and there are many potential impediments to the implementation of research findings in the classroom” (1988, p. 627). That said, there are at least some studies which have given positive results in the classroom (Bloom & Shuell, 1981; S. M. Smith & Rothkopf, 1984 cited in: Dempster, 1988, p. 630). In fact, most teachers do implement a rudimentary form of spaced repetition. They assign readings, go over the material in class, assign homework, sometimes give quizzes or go over homework, then administer a test. While I do not believe this is ideal, it is still better than nothing as shown by dozens of successful experiments with very long spacing intervals (Cepeda et al., 2006).
Yet another reason spaced repetition may not have been fully implemented in the classroom is that there is still no sound theoretical basis for why and how SR works. Not that psychology theorists haven’t tried; several different theories have been proposed. Deficient-processing theory (Challis, 1993; Jacoby, 1978; Rose & Rowe, 1976; Shaughnessy, Zimmerman, & Underwood, 1972 cited in: Toppino & Schneider, 1999, p. 1071), sometimes called the voluntary attention hypothesis (Dempster, 1986 cited in: Dempster, 1988; Mazur, 2003, pp. 3-4), claims that students do not give 100% of their energies to studying anything but the first presentation in a massed study session. They pay attention to the material the first time they see it but do not attend to it as carefully when they are forced to go over it immediately thereafter. On the other hand, students are assumed to pay more attention to the repeated material if some time has passed since they studied it; thus, the actual processing time is presumed to be longer for the spaced material. There are many different forms of this theory, but they all boil down to the same basic hypothesis. However, as Mazur writes:
Although the voluntary attention hypothesis sounds promising, there are some findings that are inconsistent with this theory. First, the spacing effect has been found with pre-school children who have limited voluntary control over their thought processes (Rea & Modigliani, 1985 cited in: Dempster, 1989). Second, researchers have manipulated conditions to make participants pay more attention to massed presentations, but these studies failed to eliminate the spacing effect (Hintzman, 1976 cited in: Dempster, 1989). Third, the effects of spacing have been observed in incidental learning tasks, where little attention was paid to the task at hand (Rowe & Rose, 1974, cited in: Dempster, 1989). Hence, these findings are not consistent with the voluntary attention hypothesis (2003).
In other words, the deficient processing theory is not supported by the evidence.
Toppino and Schneider give us an excellent description of another theory, called “encoding-variability theory”:
According to encoding-variability theories (e.g. Bower, 1972; Glenberg, 1976, 1979; Madigan, 1969), massed repetitions are likely to be encoded similarly, whereas spaced repetitions are likely to be encoded differently, enabling a greater number of effective retrieval cues. Thus, proponents of these theories attribute the superiority of spaced repetitions to the greater accessibility of differentially encoded information. (1999, p. 1071)
However, as Mazur explains:
This theory is also unable to explain all previous findings. Many studies have purposely manipulated changes in context and have resulted in declined recall rather than the predicted increase (Dempster, 1989). Therefore, the encoding variability theory cannot fully explain the presence of the spacing effect. (2003, pp. 4-5)
Yet another theory is called the “study-phase retrieval theory.” “In this theory, the second (restudy) presentation serves as a cue to recall the memory trace of the first presentation” (Cepeda et al., 2006, pp. 369-370). This theory is backed up by empirical evidence (Braun & Rubin, 1998; Murray, 1983; Thios & D’Agostino, 1976, cited in: Cepeda et al., 2006, pp. 369-370). As we shall see later on – though it doesn’t appear that the theorists quite realized it at the time – there may be a sound, biological basis for this particular theory.
What is a little disturbing to me is that, through all of this research and theorizing, it doesn’t seem as if any of the psychology researchers ever thought to find out what might actually be going on inside the brain, at the cellular level. They continuously use terms such as “memory trace,” “consolidation,” “retrieval,” “short-term memory,” and “long-term memory” as if they are quasi-magical phenomena taking place in a black box, the insides of which can never be examined. But the insides of that box have been thoroughly examined since the late 1950s, first by neurobiologists such as Eric Kandel (2001, p. 1030) and then by many others (e.g. Beardsley, 1999; Fields, 2005; Harvey & Svoboda, 2007; Swaminathan, 2007), who have revealed much about how memories are actually formed. This is information that memory and learning researchers in the field of psychology have had almost 50 years to draw on, yet they have failed to do so.
In order to fully appreciate how neurobiology should play a major role in spaced-repetition research, it is important to understand how memories are formed. What follows is a somewhat brief explanation of what these researchers have discovered. In 1949, Donald Hebb speculated "that an association could not be localized to a single synapse. Instead, neurons were grouped in ‘cell assemblies,’ and an association was distributed over their synaptic connections" (Seung, 2000); researchers have since determined that this is actually the case.
As many readers may know, a nerve cell – called a neuron – consists of a cell body containing the nucleus, which holds the DNA. Extending out from the cell body are two different types of fibers: many dendrites – the input fibers – with thousands of branches, plus one long axon – the output fiber – with a comparatively small number of branches at its end. The axon of one neuron will connect to one or more dendrites on one or more other neurons at points called synapses, each of which is just a small gap between the cell membranes. Nerve signals travel down an axon, across a synapse, and into the dendrite of the next neuron. We were all taught this in basic biology. What most of us don't know is how those synapses are formed, how the neurons decide where to form them and which other nerve fibers to connect to, and how this causes memories to be formed. For this explanation we will focus primarily on the dendrites.
We have been taught that dendrites are "fibers" but, according to research done by Mirjana Maletic-Savatic and her colleagues at the Cold Spring Harbor Laboratory in New York, they actually have
countless tiny fingerlike projections extending from [them] like tentacles. These projections, called filopodia, continually appear[…], change[…] shape and disappear[…] on a timescale of minutes. [When a small electrical stimulus,] similar to what a nearby neuron might do when excited by a thought or a sight or a touch, [was applied near a dendrite it] caused more filopodia to emerge close to the site of the stimulus and made existing ones grow longer. Some eventually generated bulbous heads, suggesting they were turning into dendritic "spines" – permanent structures that can link a dendrite to another neuron via a synapse. "It is very likely these are real synapses being formed," Maletic-Savatic says. (Beardsley, 1999)
Once the synapses are formed at the ends of the "spines" on the dendrites, they can then receive a signal from the other neuron's axon. Research done from the late 1950s to the present by Eric R. Kandel (2001) of the Howard Hughes Medical Institute, Center for Neurobiology and Behavior, College of Physicians and Surgeons at Columbia University and his colleagues, as reported in the November 2, 2001 issue of the journal Science, as well as the more recent work of R. Douglas Fields (2005) and his colleagues at the Neurosciences and Cognitive Science Program at the University of Maryland, as reported by Fields in the February 2005 issue of Scientific American, plus the work of many others, has filled in an incredible picture of what happens next. Kandel chose to use sea snails because they have very large neurons that are easy to study (2001, p. 1030). He later moved on to studying neurons from the hippocampus of mice, both to show that what he had learned in the sea snail translated to mammals and because the mouse hippocampus is very similar to that of humans (2001, p. 1035). Fields studied only the hippocampal neurons of mice and rats.
Contrary to popular belief, electricity does not flow down a nerve fiber and jump across the synapse. Scientists just use a small electrical signal because it often simulates and/or triggers the chemical reactions that do transmit the signals. Because these chemical reactions involve ions – charged molecules – the reactions can also often be sensed by using a very sensitive electrical probe. The chemicals that transmit the signal across the synapse are called neurotransmitters. There are many different kinds, but Kandel and his team determined that the neurotransmitter used here is one called serotonin (2001, pp. 1032-1033). When a signal traveling down an axon hits a synapse, the axon side of the synapse releases a tiny bit of serotonin. The amount released depends on the strength of the original signal. That serotonin is detected by the other side of the synapse on the next neuron's dendrite. This is how the dendritic synapse is actually "stimulated" in the brain.
When a synapse is stimulated by serotonin above a certain threshold, a small voltage potential is created on the cell membrane of the spine and the spine grows slightly larger (Fields, 2005). This is called early long-term potentiation (LTP) and can last one to three hours (Kandel, 2001, p. 1035). During this period the spine is more sensitive to additional stimulus (Fields, 2005; Kandel, 2001, pp. 1032-1033), and a similar stimulus produces twice the additional voltage potential on the cell membrane of the spine as did the original signal (Fields, 2005). Scientists believe this temporary sensitivity is the basis for short-term memory. The presence of the serotonin also causes a reaction inside the spine which allows it to react to a specific protein which will cause the synapse to grow and become more permanently sensitized to stimulus (Kandel, 2001, p. 1033). The exact protein is unknown, but scientists do know that it is only created when it is needed and only lasts for a certain length of time. As with all proteins, it can only be created if a certain gene in the nucleus of the cell is activated. So, when a signal is received on a synapse, that synapse and its spine are now sensitized in two ways: the synapse is more reactive to a subsequent signal coming in, and the spine is now looking for that special protein that will tell it to become more permanent (Fields, 2005; Kandel, 2001, pp. 1032-1033).
As Fields reports, not all signals strong enough to cause this sensitization (early LTP) are necessarily strong enough to cause the neuron as a whole to fire off a signal down its axon – the output fiber. However, if that same synapse receives a burst of additional signals within a very short timeframe (on the order of microseconds to half of a second), then it can build up enough voltage potential across the cell membrane to cause the neuron to fire. This is called an "action potential," presumably because it is enough potential to cause an action. An action potential can also be created when multiple synapses near each other receive a signal at the same time, or even when one synapse receives a much stronger stimulus. Any combination of signal strength, rapid repetition, and number of synapses simultaneously stimulated which creates enough of a voltage potential – that action potential – on a small area of the cell membrane will then cause the neuron to fire (2005). Interestingly, work done by Christopher D. Harvey and Karel Svoboda has also shown that when the synapse on one spine has been stimulated enough to cause an action potential, other spines within approximately 10 µm are also more sensitive to stimulus for a period of about ten minutes (2007, p. 1199).
Creating an action potential, causing the neuron to fire, is important because without it no long-term memories can be formed. When an action potential is created on the cell membrane – in addition to causing the neuron to fire – it also causes a sequence of chemical reactions to occur throughout a web of molecules that stretch from the cell membrane to the nucleus (Fields, 2005; Kandel, 2001, pp. 1032-1033). Fields discovered that this web of molecules is set up in such a way that it reacts differently to action potentials that are created at different intervals. A rapid set of action potentials, occurring too closely together, will cause the same reaction as if it were one single action potential. In other words, the chemicals are set up to ignore multiple redundant action potentials occurring too closely together. Action potentials occurring at various other intervals cause different sequences, or chains, of chemicals to react. This web of molecules and their various chains of chemical reactions act as a kind of filter and routing system. Each different type of chemical-chain reacts to a different pattern in the timing of action potentials and causes the activation of a different gene in the neuron's nucleus (2005).
Kandel's team used puffs of serotonin to stimulate the sea snail synapses in the same way they are stimulated normally. Using this method, he determined that it required five spaced puffs of serotonin to start the chain reaction which produces the needed protein (2001, p. 1033). Fields, on the other hand, used electrical stimulus applied directly to specific mouse synapses and determined that the correct pattern of stimulation to activate the gene needed for long-term memory formation is at least three action potentials, at least ten minutes apart (2005). This pattern of action potentials will cause the correct chain of chemicals to react, which will activate the gene in the nucleus which produces the protein which causes the sensitized spine to become more permanent, a state called "late LTP" (Fields, 2005; Kandel, 2001, p. 1035). Though each research team used different – but acceptably equivalent – stimulation methods and arrived at a different number of stimulations, it is clear that a pattern of multiple stimulations, spaced over time, is necessary to convert short-term memories into long-term ones.
According to Fields, when the specified gene is activated, it will remain activated and continue to produce the needed protein for about 30 minutes. Any spines on the neuron which happen to be in the sensitized (early LTP) state will react to that protein, be converted to the late LTP state, and become more permanent, as described earlier. This includes the synapses which caused the action potentials as well as any nearby synapses which had received only enough of a signal to become sensitized. Because the protein degrades over time, the window for this additional reinforcement of the memory is relatively short (2005). However, through further experimentation, Kandel's team was also able to determine that “further training, four brief trains a day for four days, gives rise to an even more enduring memory lasting weeks” (2001, pp. 1031-1032). As Kandel writes, “practice makes perfect, even in snails” (2001, p. 1031).
Now that we know how our memories actually work, how can we apply this to spaced repetition research? For one thing, we can stop picking the time intervals for massed and spaced repetition willy-nilly, hoping for some kind of pattern to emerge. It is clear that two things must occur before a memory can be formed and reinforced. First, the student must actually learn the material. This is not to say they have it in long-term memory, but that they know what the material is and understand it well. In order to avoid confusion with the entire process of learning, I will call this phase “initialization.” This is when those filopodia have started to enlarge and form temporary synapses. It is easy to “understand” word pairs and simple question-and-answer associations; they do not require a complicated collection of neurons to interact. Therefore, simply looking at the pair or reading it aloud is usually enough to initialize the memory. On the other hand, complex concepts may require more study for the student to even understand what they mean. If the student does not understand the material, then a set of neurons obviously cannot form connections representing that concept. When students attempt to “learn” material without first understanding it, they are merely memorizing the word patterns that represent the concept rather than the concept itself. In my own observations of college students, I have seen many who attempt to use this method to study, hoping against hope that when the test comes, their ability to repeat the sequence of words will help them decipher how to interpret its meaning for the problem at hand. This may work for what I call “word-based” classes, where all students need to do for a test is regurgitate or recognize sequences of words without much understanding. However, this trick does not work for courses where understanding the concepts and how to apply them in differing situations is most important. Although many studies have obtained positive results by having subjects simply read over material multiple times – either massed or spaced – without attempting to understand it, I believe much better results would be obtained if students understood the material first.
Lack of proper “initialization” of a memory could easily account for the less than consistent results when spaced repetition is attempted in the classroom. The one-size-fits-all nature and relatively short initialization periods available in the classroom – and in almost all of the experiments – practically guarantee that some students will understand the material while many others will not. Testing for concepts based on memorization of poorly understood words is sure to give poor results.
Once the students have “learned,” or initialized, the material in their minds – in other words, grown some filopodia into spines in response to repeated short-term stimulus, then sensitized those spines into the early LTP phase – then that material must be reviewed multiple times (approximately three to five), separated by approximately ten minutes, in order to set off the chemical chain reaction which causes the memory-forming protein to be created. This will convert all those early LTP spines into late LTP (long-term memory) spines. Remember, the material cannot be continually reviewed for that whole time; that would not set off the correct, time-sensitive chemical chain reaction.
After the protein is created, there will be approximately thirty minutes during which the material can be studied continuously to further build up the strength of the memory. Additional study during this period causes more stimulus in the region of the original set of synapses, which causes even more filopodia to extend out and form spines, which are then stimulated enough to form synapses, which then get sensitized into the early LTP phase, and are then automatically converted to the late LTP phase because the memory-forming protein is already present in the cytoplasm of the neuron. Please note that this additional study increases the strength of the memory by increasing the number of synapses involved in storing it. However, it would not necessarily make the memory last any longer than the first two steps of initialization and spaced repetition would, because it only increases the number of synapses, not the final size or “life span” of any individual synapse-bearing spine. Remember, it is the act of being converted from early LTP to late LTP that causes the individual spines to become more permanent, and this can only happen once per iteration of this process.
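As a rough illustration only, the within-cycle timing described above could be laid out as follows. The specific numbers – roughly three to five reviews about ten minutes apart, followed by up to thirty minutes of continuous strengthening – come from the discussion above; the function names and structure are simply my own sketch, not a validated study protocol.

```python
from datetime import datetime, timedelta

def reinforcement_cycle_schedule(start, repetitions=4,
                                 gap_minutes=10, strengthen_minutes=30):
    """Lay out one reinforcement cycle as described in the text:
    several brief reviews spaced ~10 minutes apart (to trigger the
    time-sensitive chain that converts early LTP to late LTP),
    followed by an optional continuous-study window of up to
    30 minutes while the memory-forming protein is still present."""
    reviews = [start + timedelta(minutes=gap_minutes * i)
               for i in range(repetitions)]
    strengthen_until = reviews[-1] + timedelta(minutes=strengthen_minutes)
    return reviews, strengthen_until

if __name__ == "__main__":
    reviews, until = reinforcement_cycle_schedule(datetime(2010, 11, 30, 9, 0))
    for i, review_time in enumerate(reviews, 1):
        print(f"review {i}: {review_time:%H:%M}")
    print(f"optional continuous strengthening until {until:%H:%M}")
```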
This process of initialization and then reinforcement needs to occur for each and every different concept or fact that needs to be learned because each different fact is stored as a different set of neural connections comprising a different set of synapses. It is entirely likely that it will take varying amounts of time for each student to initialize the memory and it is equally likely that each student will have a different optimum number and timing of repetitions necessary for the long-term memory to form or be reinforced. This is yet another reason why spaced repetition may fail in the classroom. Can you imagine being the teacher who has to remember and track the optimum times for all the different students for all those different topics studied throughout the day? Not to mention the difficulty of trying to schedule each topic to overlap just enough to fill in the empty time but without interfering with the required repetitions of the other material. I wouldn’t want to try it for even one student, even if that student were me.
Once the process described above is complete and a long-term memory has formed, it likely takes some time for everything to settle down and for the neuron to rejuvenate itself. I am not aware of any research which indicates how long this may take. However, after this period has passed, the neurons and their synapses are ready for another round. First, we must make sure that the memory is still accurate and that no part of the set of neural connections has degraded enough to cause an incorrect recollection – or no recollection at all. This can be accomplished by a simple quiz. If the responses are adequate, then there is no need to “reinitialize” the memory by rereading the original material or having it re-explained. Once we have established that the memory is intact, we have to induce early LTP and then convert it to late LTP by reviewing the material multiple times, about ten minutes apart, all followed by an optional, additional strengthening through continuous study for up to 30 additional minutes.
In a way, there are two levels of spacing required for proper spaced repetition as informed by neurobiological research. The first level is the spacing between initialization and the subsequent multiple repetitions necessary to set up early LTP and convert it to late LTP, plus strengthening. The second level is the spacing between different iterations of that whole procedure. I believe this two-level spacing is a key ingredient which all SR researchers have missed by ignoring the findings of neurobiology. All researchers to date have considered one repetition of material – whether understood or not – to be one reinforcement, and this has, surprisingly, produced adequate – if not fully understood – results. In addition, almost all the experiments involved only two presentations of the material. The few experiments wherein there were more than two spaced repetitions consistently yielded even better long-term results (Cepeda et al., 2006; Ebbinghaus, 1885; Lyon, 1914a, 1914b, 1914c). To reiterate, Kandel and his team found that repeating the cycle four times per day for four days increased the duration of the initial memories he had created in his snail neurons from days to weeks (2001, pp. 1031-1032).
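To make the two levels explicit, the following sketch nests the within-cycle schedule inside a series of between-cycle intervals. The between-cycle gaps shown are placeholder values; as argued throughout this paper, determining their optimum values is precisely the research that remains to be done.

```python
from datetime import datetime, timedelta

def two_level_schedule(first_session, cycle_gaps_days=(1, 3, 7, 14),
                       repetitions=4, gap_minutes=10):
    """Level 1: reviews ~10 minutes apart within each reinforcement cycle.
    Level 2: whole cycles separated by progressively longer intervals.
    The day gaps are illustrative placeholders, not research-backed values."""
    schedule = []
    session_start = first_session
    for cycle, gap_days in enumerate((0,) + tuple(cycle_gaps_days)):
        session_start = session_start + timedelta(days=gap_days)
        reviews = [session_start + timedelta(minutes=gap_minutes * i)
                   for i in range(repetitions)]
        schedule.append((cycle, reviews))
    return schedule

if __name__ == "__main__":
    for cycle, reviews in two_level_schedule(datetime(2010, 12, 1, 9, 0)):
        times = ", ".join(f"{r:%b %d %H:%M}" for r in reviews)
        print(f"cycle {cycle}: {times}")
```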
As almost all spaced repetition research has also involved mere memorization of nonsense syllables, words, and word pairs, it is important to consider separately the implications of our “new” neurobiological understanding of learning for material of a logical, conceptual, or abstract nature. I could find very few reports of SR research on this type of material. Edwards (1917, cited in: Dempster, 1988, p. 630) did a rather haphazard study including the learning of history and geography, the results of which favored spaced study. However, as history and geography fall under what I would call “word-based” material, I don’t know if we can really count this work as covering the learning of concepts. Sarah Austin (1921) published “A Study in Logical Memory,” in which she carefully counted and listed the number of separate ideas in various pieces of advanced reading material. Massed study consisted of reading the material five times in one day, whereas spaced repetitions ranged from once per day for five days to once every five days. For testing, she asked some subjects to simply write down all that they could recall, others to list the separate ideas they recalled, and still others to take tests over the ideas. She found that ideas were either recalled in their entirety or forgotten altogether, though recall of details was more variable. This tends to support the notion that each idea is associated with either a single neuron or a single patch of synapses on a set of dendrites; once that neuron is triggered, its axon stimulates all the other neurons or synapse patches associated with parts of the idea or images related to it. There are some possible flaws in Austin’s experiments. First, we don’t know how much time elapsed between separate readings of the material in the “massed” condition; the subjects were largely left to their own devices, so the “massed” study could actually have amounted to one iteration of the “initialize – extend durability – strengthen connections” procedure. Second, only three “retention intervals” – the time between the last study period and the final test – were used, which does not give a very clear picture of the forgetting curve in each situation. Finally, she only had subjects read over the material once per repetition. Many subjects could not understand what they had read, and the results were so inconclusive that Austin had to throw them out. This reinforces my claim that it is imperative for students to actually understand (initialize) the material before attempting to achieve any spacing effect.
A very recent study performed by Danielle Mazur explored learning how to do simple matrix multiplication using SR techniques. Her thesis is an excellent review of the history of SR research. Unfortunately, her experiments were confounded by several factors. She only used spacing intervals of zero or seven days (completely skipping over the “sweet spot” of ten minutes), some of the subjects already had experience doing matrix multiplication, and others had great difficulty learning it at all. In the end, she did conclude that SR does work for learning concepts while inadvertently showing the importance of students actually learning the material first (Mazur, 2003, p. iii).
Only a few other researchers explored this particular facet of SR research in the period between 1921 and 2003, when Mazur presented her master's thesis. Dempster (1988, pp. 629-630) and Mazur (2003, pp. 9-10) both provide excellent outlines of the work of Edwards (1917), Reynolds and Glaser (1964), Ausubel (1966), Gay (1973), Grote (1995), and Saxon Publishers (2001; 1997a, 1997b), all of which showed that the spacing effect works for learning concepts as well as it does for rote memory tasks, as long as the material is first well understood by the student. However, Dempster (1988, p. 633) and Mazur (2003, pp. iii, 5) both comment on the relative dearth of research on learning concepts via SR as opposed to rote memorization. Perhaps the “emphasis on convenient single-session studies,” as Cepeda et al. (2006, p. 370) put it, in the psychology research field has something to do with the low number of more involved studies. Needless to say, Dempster, Mazur, and Cepeda et al. all call for far more research in this area.
So, it seems we have a dilemma. More research must be done on SR for learning concepts, and yet that research will not be nearly as easy or convenient to perform as most of the research has been to date. As Lyon put it, “no single method can be set down as being the most economical for everyone. The problem is not, What is the most Economical Method?, but What is the most Economical Method for Mr. Brown and how can he find this method out?” (1914c, p. 159). In addition, Austin wrote back in 1921, “the rate of forgetting is variable, depending on (1) the degree to which the material had been learned in the first place, (2) the distribution of the repetitions, (3) the kind of material learned, (4) the method by which it is measured, and (5) individual difference in retentiveness” (1921, p. 373). Now we also have questions as to the exact timing of the reviews in the reinforcement cycle as well as the longer intervals between each independent cycle. On top of all that complexity, Mazur points out that “it is difficult to find abstract tasks to utilize in this research because many of these tasks are taught in school” (2003, p. 22). Yes, the research necessary to truly show the efficacy of spaced repetition for learning concepts appears as if it will be long, tedious, incredibly convoluted and exhaustive, and therefore incredibly expensive, but appearances can be deceiving.
It is possible for computers to track all of the details necessary, in all the possible different combinations, to ferret out the information researchers need in order to fine-tune all the variables related to using SR for real learning. However, writing a new program with new learning material for each separate study on a relatively small group of students will still be relatively expensive and time-consuming. Plus, once we determine all those variables, it will still be nearly impossible for teachers to track and appropriately adjust them all for each individual student. Therefore, I propose an entirely new system of computer-based learning which will also facilitate the necessary research. In this system we can break down all educational content into its most basic ideas, concepts, and facts. Those “items” can be organized and marked with various metadata as to the difficulty of the material, the learning method(s) and goals the material was designed to adhere to, any prerequisite topics which should be understood before attempting to learn the material, and any other pertinent metadata researchers may find useful. In addition, multiple different explanations or presentations can be created for each topic so students can easily choose the material and media that best help them learn.
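As a sketch of how such items might be represented, consider the following Python data structure. The field names and the example topic identifiers are hypothetical illustrations of the kind of metadata described above, not part of any existing specification.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class ContentItem:
    """One basic idea, concept, or fact, tagged with the kind of metadata
    proposed above. Field names are hypothetical illustrations only."""
    topic_id: str                  # position in a hierarchical topic tree
    difficulty: int                # e.g., 1 (introductory) to 5 (advanced)
    learning_goals: List[str] = field(default_factory=list)
    prerequisites: List[str] = field(default_factory=list)   # topic_ids to master first
    explanations: List[str] = field(default_factory=list)    # alternate presentations

item = ContentItem(
    topic_id="math.linear_algebra.matrix_multiplication",
    difficulty=2,
    learning_goals=["compute the product of two small matrices"],
    prerequisites=["math.arithmetic.multiplication", "math.arithmetic.addition"],
    explanations=["row-by-column walkthrough", "dot-product-of-vectors view"],
)
print(item.topic_id, "requires", item.prerequisites)
```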
Software can then be designed to present this educational content to students, choosing just the right content to present when, based on the students’ learning abilities for each different type of material. The software can then implement spaced repetition in the manner described above, tracking the specific forgetting curve of each topic and calculating the best times for the students to review material based on various algorithms. The software can be designed with a plug-in architecture so that researchers can write different SR algorithms which can then be easily installed and chosen by users.
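One way to picture that plug-in architecture is an abstract scheduling interface that any researcher's algorithm could implement. The interface and the toy expanding-interval algorithm below are assumptions of my own for illustration, not an existing API.

```python
from abc import ABC, abstractmethod
from datetime import datetime, timedelta
from typing import List, Tuple

class SpacingAlgorithm(ABC):
    """Plug-in interface: given a student's review history for one topic,
    propose the next review time. Researchers could ship alternative
    subclasses and the host program would load whichever the user selects."""

    @abstractmethod
    def next_review(self, history: List[Tuple[datetime, float]]) -> datetime:
        """history is a list of (timestamp, score) pairs for one topic."""

class ExpandingInterval(SpacingAlgorithm):
    """Toy example: roughly double the interval after each successful review."""
    def next_review(self, history):
        last_time, last_score = history[-1]
        interval = timedelta(days=2 ** (len(history) - 1))
        if last_score < 0.6:              # poor recall: review again soon
            interval = timedelta(minutes=10)
        return last_time + interval

algo: SpacingAlgorithm = ExpandingInterval()
print(algo.next_review([(datetime(2010, 12, 1, 9, 0), 0.9),
                        (datetime(2010, 12, 2, 9, 0), 0.8)]))
```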
Current “learning” programs – even programs based on spaced repetition – only work on a simple flashcard system. Each flashcard is a basic question-answer pair, encouraging simple memorization of the word patterns rather than the concepts involved. The various flashcards are not connected or related to each other in any way. Yes, they may be organized in a hierarchical tree but a card’s position in the tree has no influence on how or when it is presented. Each card and the question-answer pair on it are treated as entirely separate factoids as far as the timing for spaced repetition is concerned.  In this proposed system, lots of different questions and problems can be designed for any one topic in order to test and reinforce the concepts within the topic rather than merely associating one specific answer with one specific question.
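The difference can be sketched as follows: many question variants attached to one topic, all sharing a single review schedule, so that each review exercises the concept with a fresh problem. The structure and the sample questions are purely illustrative.

```python
import random
from dataclasses import dataclass, field
from typing import List

@dataclass
class Topic:
    """All questions for a topic share one review schedule, so varied
    problems reinforce the concept instead of one fixed answer string.
    This structure and its names are illustrative only."""
    topic_id: str
    questions: List[str] = field(default_factory=list)
    review_count: int = 0

    def next_question(self) -> str:
        self.review_count += 1                 # one schedule for the whole topic
        return random.choice(self.questions)   # but a different problem each time

matrices = Topic("math.linear_algebra.matrix_multiplication",
                 ["Multiply [[1,2],[3,4]] by [[0,1],[1,0]].",
                  "Multiply [[2,0],[0,2]] by [[1,1],[1,1]].",
                  "Why is matrix multiplication not commutative? Give an example."])
print(matrices.next_question())
```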
Finally, if students opt in, the software can collect a history of everything they do within the system and feed that back to a massive database of the learning histories of millions of students. This data can then be easily mined by education researchers to unlock the secrets of how we learn and to improve the content itself. There will then be very little need for the lengthy, expensive studies often performed by researchers today. A researcher will be able to simply sit down at their desk and search through a database of information. The statistics gathered will be much more valuable because they will be based on the histories of millions of students from all over the world instead of just thirty or so who happen to go to a local school and have time to participate in a study. If a researcher wants to investigate a new approach to presenting content, they can simply create that content, submit it to the system, and then sit back and wait for the data to come rolling in (Robertson, 2009b). Similarly, researchers can simply write a new algorithm, or a different set of parameters for an existing algorithm, promote the use of that new algorithm, and again sit back and wait for the data to come rolling in. Normally, it would not be prudent to allow subjects to self-select which experimental group they wish to be in. However, with millions of subjects rather than thirty or so, it will be easy for data mining software to select, within that self-selected group, a sample which represents a random sample from the desired demographic. The software could even be designed so that students can allow it to use different algorithms for different, randomly chosen subsets of the content they are studying, thus facilitating within-subject experiments.
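For example, a within-subject assignment might look like the following sketch, which randomly splits one student's topics among competing scheduling algorithms; the names are made up for illustration.

```python
import random
from typing import Dict, List

def assign_algorithms(topic_ids: List[str], algorithm_names: List[str],
                      seed: int = 0) -> Dict[str, str]:
    """Randomly split one student's topics among several scheduling
    algorithms, enabling the within-subject comparisons described above.
    The function and names are illustrative, not part of any real system."""
    rng = random.Random(seed)
    shuffled = topic_ids[:]
    rng.shuffle(shuffled)
    return {topic: algorithm_names[i % len(algorithm_names)]
            for i, topic in enumerate(shuffled)}

assignment = assign_algorithms(
    ["algebra.factoring", "history.civil_war",
     "chem.stoichiometry", "bio.cell_division"],
    ["expanding_interval", "fixed_daily"])
for topic, algo in assignment.items():
    print(f"{topic} -> {algo}")
```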
Such a system is already under preliminary development. It is named for the data formatting standard in which the educational content will be distributed. This is called the Distributable Educational Material Markup Language™ (DEMML™). As explained on the DEMML™ website:
The Distributable Educational Material Markup Language™ (DEMML™) is both an XML format for marking up educational material in a highly structured yet incredibly flexible manner and a system for authenticating and distributing that content for independent or shared use throughout the world, even where there is no internet connection. This material is organized and classified to a degree never before attempted, using what turns out to be a rather simple system of encoding the hierarchical tree of all possible educational material right down to the paragraph - or even sentence - level. This allows anyone to easily contribute any amount of material to what will quickly grow to be a vast library of vetted content for all to use. In addition the format facilitates a new level of flexibility in computer based learning by allowing educators to specify what material the student should study while still allowing the student instant access to additional material as their needs require. Multiple different explanations or presentations can exist for any one fact within any very specific topic. This allows any student at any level to quickly find just the right explanation that helps them most efficiently understand the topic at hand.
To be clear, DEMML™ is not yet another Computer Based Training (CBT) system. Instead, it is a way of creating a library of educational material in a standardized format which all compatible CBT systems can instantly draw from, with no manual editing whatsoever. Existing CBT software can be modified slightly to make use of this content or modified even further to employ the rich functionality that only DEMML™ provides - facts, multiple alternate explanations, questions and answers, problems and solutions, multiple alternate explanations for each of those, prerequisites, etc., with very rich metadata about everything. Just as hyperlinking existed long before Tim Berners-Lee invented the World Wide Web and HTML, CBT has been around a long time before DEMML™. Before HTML all hyperlinking systems were proprietary and only worked within limited confines. Similarly, current CBT systems are all either proprietary systems or are relatively unavailable to the public. DEMML™ will be to CBT what HTML and WWW have been to hyperlinking. It will open up a world of possibilities by making education easily available to everyone, everywhere. (Robertson, 2009a)
It is my hope and belief that the DEMML™ system will become a de facto standard for education world-wide. Rather than replace teachers, I believe it will free them to become mentors and guides, a much more rewarding role than repeating lecture after lecture to disinterested students.
We have seen the exhaustive research which shows that spaced repetition works both for rote memorization and for the learning of concepts. We now have a new understanding of the neurobiological basis of memory formation. Though there are some criticisms as to the efficacy of using spaced repetition in the classroom, I have shown that it isn’t necessary for teachers to apply SR directly. Teachers can be guides to students who use DEMML™ and its related software for self-directed learning. And while all this learning is taking place in millions (or even billions) of minds – young and old – vast quantities of valuable research data can be collected which will help scientists better understand and fine-tune the learning process.


References
Austin, S. D. M. (1921). A study in logical memory. American Journal of Psychology, 32, 370-403.
Ausubel, D. P. (1966). Early versus delayed review in meaningful learning. Psychology in the Schools, 3, 195-198.
Beardsley, T. (1999). Getting Wired. Scientific American, 280(6), 24-25. doi:10.1038/scientificamerican0699-24b
Bloom, K. F., & Shuell, T. J. (1981). Effects of massed and distributed practice on the learning and retention of second-language vocabulary. Journal of Educational Research, 74, 245-248.
Bower, G. H. (1972). Stimulus-sampling theory of encoding variability. In A. W. Melton & E. Martin (Eds.), Coding processes in human memory (pp. 85-123). Washington, D.C.: V. H. Winston & Sons.
Braun, K. A., & Rubin, D. C. (1998). The spacing effect depends on an encoding deficit, retrieval, and time in working memory: Evidence from once-presented words. Memory, 6, 37-65.
Cepeda, N. J., Pashler, H., Vul, E., Wixted, J. T., & Rohrer, D. (2006). Distributed Practice in Verbal Recall Tasks: A Review and Quantitative Synthesis. Psychological Bulletin, 132(3), 354-380. doi:10.1037/0033-2909.132.3.354
Challis, B. H. (1993). Spacing effects on cued-memory tests depend on level of processing. Journal of Experimental Psychology: Learning, Memory and Cognition, 19, 389-396.
Dempster, F. N. (1986). Spacing effects in text recall: An extrapolation from the laboratory to the classroom. Manuscript submitted for publication.
Dempster, F. N. (1988). The spacing effect: A case study in the failure to apply the results of psychological research. American Psychologist, 43(8), 627-634.
Dempster, F. N. (1989). Spacing effects and their implications for theory and practice. Educational Psychology Review, 1(4), 309-330. doi:10.1007/BF01320097
Ebbinghaus, H. (1885). Memory: A contribution to experimental psychology (H. A. Ruger & C. Bussenius, Trans.). New York: Teachers College, Columbia University.
Edwards, A. S. (1917). The distribution of time in learning small amounts of material. In Studies in psychology contributed by colleagues and former students of Edward Bradford Titchener (pp. 209-213). Worcester, MA: Wilson.
Fields, R. D. (2005). Making Memories Stick. Scientific American, 292(2), 74-81. doi:10.1038/scientificamerican0205-74
Gay, L. R. (1973). Temporal position of reviews and its effect on the retention of mathematical rules. Journal of Educational Psychology, 64, 171-182.
Glenberg, A. M. (1976). Monotonic and nonmonotonic lag effects in paired-associate and recognition memory paradigms. Journal of Verbal Learning and Verbal Behavior, 15, 1-16.
Glenberg, A. M. (1979). Component-levels theory of the effects of spacing of repetitions on recall and recognition. Memory & Cognition, 7, 95-112.
Grote, M. G. (1995). Distributed versus massed practice in high school physics. School Science and Mathematics, 95, 97-101.
Harvey, C. D., & Svoboda, K. (2007, December 20). Locally dynamic synaptic learning rules in pyramidal neuron dendrites. Nature, 450(7173), 1195. Retrieved from http://find.galegroup.com/gtx/infomark.do?&contentSet=IAC-Documents&type=retrieve&tabID=T002&prodId=EAIM&docId=A189749279&source=gale&srcprod=EAIM&userGroupName=wuacc_mabee&version=1.0
Hellyer, S. (1962). Frequency of stimulus presentation and short-term decrement in recall. Journal of Experimental Psychology, 64, 650.
Hintzman, D. L. (1976). Repetition and memory. In G. H. Bower (Ed.), The Psychology of Learning and Motivation (Vol. 10, pp. 47-91). New York: Academic Press.
Jacoby, L. L. (1978). On interpreting the effects of repetition: Solving a problem versus remembering a solution. Journal of Verbal Learning and Verbal Behavior, 17, 644-667.
Jost, A. (1897). Die Assoziationsfestigkeit in ihrer Abhängigkeit von der Verteilung der Wiederholungen [The strength of associations in their dependence on the distribution of repetitions]. Zeitschrift für Psychologie und Physiologie der Sinnesorgane, 14, 436-472.
Kandel, E. R. (2001). The Molecular Biology of Memory Storage: A Dialogue between Genes and Synapses. Science, New Series, 294(5544), 1030-1038. Retrieved from http://www.jstor.org/stable/3084944
Kitao, N., & Inoue, T. (1998). The effects of spaced repetition on explicit and implicit memory. Psychologia: An International Journal of Psychology in the Orient, 41(2), 114-119.
Lyon, D. O. (1914a). The relation of length of material to time taken for learning, and the optimum distribution of time. Part I. Journal of Educational Psychology, 5(1), 1-9.
Lyon, D. O. (1914b). The relation of length of material to time taken for learning, and the optimum distribution of time. Part II. Journal of Educational Psychology, 5(2), 85-91.
Lyon, D. O. (1914c). The relation of length of material to time taken for learning, and the optimum distribution of time. Part III. Journal of Educational Psychology, 5(3), 155-163.
Madigan, S. A. (1969). Interserial repetition and coding processes in free recall. Journal of Verbal Learning and Verbal Behavior, 8, 828-835.
Mazur, D. (2003). Optimizing long-term retention of abstract learning (Master's thesis). University of South Florida, Tampa, FL. Retrieved from http://etd.fcla.edu/SF/SFE0000201/MastersThesisMazur.pdf
Murray, J. T. (1983). Spacing phenomena in human memory: A study-phase retrieval interpretation (Doctoral dissertation). University of California, Los Angeles, Los Angeles, CA. Retrieved from Dissertation Abstracts International. (43, 3058)
Nairne, J. S., Thompson, S. R., & Pandeirada, J. N. S. (2007). Adaptive memory: Survival processing enhances retention. Journal of Experimental Psychology: Learning, 33(2), 263-273.
Perkins, N. L. (1914). The value of distributed repetitions in rote learning. British Journal of Psychology, 7, 253-261.
Rea, C. P., & Modigliani, V. (1985). The effect of expanded versus massed practice on the retention of multiplication facts and spelling lists. Human Learning: Journal of Practical Research & Applications, 4, 11-18.
Reynolds, J. H., & Glaser, R. (1964). Effects of repetition and spaced review upon retention of a complex learning task. Journal of Educational Psychology, 55, 297-308.
Roberts, W. A. (1972). Short-term memory in the pigeon: Effects of repetition and spacing. Journal of Experimental Psychology, 94(1), 74-83.
Roberts, W. A. (1974). Spaced repetition facilitates short-term retention in the rat. Journal of Comparative and Physiological Psychology, 86(1), 164-171.
Robertson, G. (2008, July). A brief history of the mathematical definition of forgetting curves [Blog post]. Ideationizing.com. Retrieved November 26, 2010, from http://www.ideationizing.com/2009/06/brief-history-of-mathematical.html
Robertson, G. (2009a, December 21). DEMML.org. DEMML.org. Retrieved November 30, 2010, from http://demml.org/
Robertson, G. (2009b, December 31). DEMML.org - Features. DEMML.org. Retrieved November 30, 2010, from http://demml.org/features/index.htm#research
Rose, R. J., & Rowe, E. J. (1976). Effects of orienting task and spacing of repetitions of frequency judgments. Journal of Experimental Psychology: Human Learning and Memory, 2, 142-152.
Saxon Publishers. (2001). Saxon Mathematical Results. Norman, OK: Saxon Publishers.
Saxon, J. (1997a). Algebra I (3rd ed.). Norman, OK: Saxon Publishers.
Saxon, J. (1997b). Algebra II (2nd ed.). Norman, OK: Saxon Publishers.
Seung, H. (2000, November 1). Half a century of Hebb. Nature Neuroscience, 3(11s), 1166. Retrieved from http://find.galegroup.com/gtx/infomark.do?&contentSet=IAC-Documents&type=retrieve&tabID=T002&prodId=HRCA&docId=A185568793&source=gale&srcprod=HRCA&userGroupName=wuacc_mabee&version=1.0
Shaughnessy, J. J., Zimmerman, J., & Underwood, B. J. (1972). Further evidence on the MP-DP effect in free recall learning. Journal of Verbal Learning and Verbal Behavior, 11, 1-12.
Smith, S. M., & Rothkopf, E. Z. (1984). Contextual enrichment and distribution of practice in the classroom. Cognition and Instruction, 1, 341-358.
Smith, W. G. (1897). Review of Die Assoziationsfestigkeit in ihrer Abhängigkeit von der Verteilung der Wiederholungen. Psychological Review, 4(6), 682-684.
Swaminathan, N. (2007, December 20). Signaling Neurons Make Neighbor Cells "Want In": Synapses are primed to strengthen (and thus enable learning) if a nearby one has just been stimulated. Scientific American. Retrieved January 29, 2008, from http://www.scientificamerican.com/article.cfm?id=keeping-up-with-the-neurons
Thios, S. J., & D’Agostino, P. R. (1976). Effects of repetition as a function of study-phase retrieval. Journal of Verbal Learning and Verbal Behavior, 15, 529-536.
Thorndike, E. L. (1912). The curve of work. Psychological Review, 19, 165-194.
Toppino, T. C., & DeMesquita, M. (1984). Effects of spacing repetitions on children's memory. Journal of Experimental Child Psychology, 37(3), 637-648.
Toppino, T. C., & Schneider, M. A. (1999). The mix-up regarding mixed and unmixed lists in spacing-effect research. Journal of Experimental Psychology: Learning, 25(4), 1071-1076.
Winz, W. (1931). Neue Versuche über Lernen in Häufung und Verteilung [A new investigation on learning by the massed vs. the distributed method]. Psychotechnisches Zeitschrift, 6, 129-140.
Zimmerman, J. (1975). Free Recall after Self-Paced Study: A Test of the Attention Explanation of the Spacing Effect. The American Journal of Psychology, 88(2), 277-291. Retrieved from http://www.jstor.org/stable/1421597


The contents of this post is Copyright © 2010 by Grant Sheridan Robertson.
