
Learning Theory: The Foundation of Cognitive Behavior Management

Cognitive Behavior Management derives from theories of learning. Learning is roughly defined as the process by which behavior is modified or added to an organism’s repertoire. In plain English: the person or animal becomes able to do something new or different.

To be regarded as learning, an addition to the repertoire must be relatively permanent, and it must occur as a result of experience.

Note that the definition of learning refers to changes in someone’s behavior. Before going on, we therefore need a definition of what is meant by ‘behavior’. It includes muscle movements (aka ‘actions’ or ‘motor behavior’) and other ‘overt’ behaviors (i.e., things that can be seen, heard, felt, tasted or smelled by others).

But it also includes ‘covert’ or ‘private’ behaviors to which only the person doing the behaving has, or can have, direct access. These covert behaviors include thoughts, mental images, conscious memories, fantasies, emotions, moods, and physiological activities along with their accompanying bodily sensations. In other words, behavior includes more than what we often mean by ‘behavior’.

Learning is inseparable from memory; in fact, the two may be regarded as two sides of a single coin. ‘Memory’ in this context means any lasting consequence of experience that is encoded in your brain, is stored there, and can be accessed by one means or another. The computer analogy is the information stored on your hard drive. There is also a short-term form of brain-resident memory that corresponds to the information temporarily held in your RAM chips [‘user memory’], but that is not what we are normally concerned with in Cognitive Behavior Management, except as a place in which to attend to, analyze and alter covert behavior.

Memory and learning refer to stored information of which the person is not normally conscious, although s/he could be; they also include things stored in such a manner that you cannot become aware of them at all. When you automatically and unthinkingly step on the brake pedal as you approach a red light, you are accessing your memory for something you have learned about safe driving. Similarly, if the sight of a perfectly harmless stranger automatically triggers an anxious feeling, you are accessing a piece of stored learning that associates strangers with danger.

The fact that the latter bit of learning may be unrealistic, because it represents an over-generalization, and usually works against your best interests is not the point. Learning is considered helpful or unhelpful based on a personal view of its utility [whether it is more pleasurable than painful, or more effective in helping you reach your goals].

The point is that most of our behavior is learned, even when it is ultimately rooted in and influenced by our biological inheritance. And that is why learning theory is absolutely central to an understanding of what can go wrong in people’s lives, and of what can be done to help them overcome these problems in living. Without good theories of learning — derived from psychological and neurological experimentation with both humans and animals — we would be sailing without a compass or a map. We might get to our destination, but it would usually be via an inefficient and roundabout route, and would rely unnecessarily on blind chance and luck. Much of the time, in fact, we simply wouldn’t get there at all.

It was considerations such as these that led clinical psychologists to begin developing methods of intervention based on learning theory. Until then, psychotherapy and the theories supporting it had been based almost exclusively on clinical data, most of it from one version or another of psychoanalysis. This had led to some valid and useful insights, but it had also led to some appalling errors — some of which persist even today among those who believe, in the face of overwhelming evidence, that experimental science is largely irrelevant to clinical practice.

The learning theories that were first reasonably well developed were two forms of ‘conditioning’ — respondent (aka classical or Pavlovian) conditioning, and operant conditioning (aka instrumental learning or behavior modification). The former derived originally from the work of Ivan Pavlov, and the latter from the work of John B. Watson and B.F. Skinner.

Respondent-conditioning theory provided the basis for such things as treating phobias through desensitization procedures. In these procedures, the person is exposed repeatedly or for a long period of time to the phobic stimulus, e.g., a snake or an elevator. Exposure may be through imagination and memory (so-called visualization exposure) or to the actual physical stimulus itself (in vivo exposure). Through this process the person becomes less afraid of the stimulus.
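
The gist of the desensitization process can be caricatured with a toy habituation model. The sketch below is purely illustrative; the 0–100 fear scale and the per-session habituation rate are arbitrary assumptions, not clinical figures. It simply shows the declining fear curve that repeated exposure is intended to produce.

```python
# Toy habituation curve: repeated exposure gradually reduces the fear
# response. (Purely illustrative; the 0-100 scale and the per-session
# habituation rate are arbitrary assumptions, not clinical figures.)

fear = 100.0                # initial subjective fear, hypothetical 0-100 scale
HABITUATION_RATE = 0.15     # fraction of remaining fear lost per exposure

for session in range(1, 11):
    fear *= (1 - HABITUATION_RATE)
    print(f"session {session:2d}: fear = {fear:5.1f}")
# The printed curve declines toward zero, mirroring the way repeated
# exposure is intended to extinguish the conditioned fear response.
```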

Operant-conditioning research led to methods that influence behavior through systems of reward (reinforcement) and punishment. (Punishment should not be confused with ‘negative reinforcement’, which refers to strengthening desired behavior by stopping something unpleasant that is going on.) In practice, operant conditioning relies almost entirely on positive reinforcement rather than punishment. This is partly for ethical reasons, but also because punishment tends to have unpredictable results.
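
The core contingency is easy to sketch computationally. The toy Python model below is a loose analogy, not a clinical procedure; the behaviors, the learning rate, and the reward scheme are all hypothetical choices made for illustration. It tracks the ‘strength’ of several candidate behaviors and strengthens whichever one is followed by reinforcement, so that the reinforced behavior gradually comes to dominate.

```python
import random

# Toy model of operant conditioning: behaviors followed by reinforcement
# become more likely to be emitted again. (A loose analogy for illustration;
# the behaviors, learning rate and reward scheme are hypothetical.)

strengths = {"ask politely": 1.0, "whine": 1.0, "grab": 1.0}
LEARNING_RATE = 0.2

def emit_behavior():
    """Pick a behavior with probability proportional to its strength."""
    total = sum(strengths.values())
    r = random.uniform(0, total)
    for behavior, s in strengths.items():
        r -= s
        if r <= 0:
            return behavior
    return behavior  # fallback for floating-point edge cases

def reinforce(behavior, reward):
    """Strengthen a behavior according to its consequence."""
    strengths[behavior] += LEARNING_RATE * reward

# Suppose the environment reinforces only polite requests.
for _ in range(200):
    b = emit_behavior()
    reinforce(b, reward=1.0 if b == "ask politely" else 0.0)

print(strengths)  # 'ask politely' ends up with by far the largest strength
```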

Both forms of learning theory — respondent and operant conditioning — as well as their clinical applications remain valid and highly useful today, and they have been extended and greatly refined.

The success of early attempts to apply respondent and operant conditioning principles to human services encouraged further developments. Social learning theory, represented most prominently by the work of Albert Bandura, introduced observational learning (‘modeling’ or ‘vicarious learning’), i.e., learning from watching and listening to other people. Social learning theory also introduced the concept of ‘self-efficacy’ — the sense that one is able to pursue one’s goals and cope with challenges with reasonable competence and efficiency.

Experimental psychologists were also becoming more interested in cognition than ever before in the history of the science. (‘Cognition’ refers basically to thinking, conscious remembering, imagining, and the control of attention.) During the peak of behaviorism in American psychology, it was widely believed that covert (private) behavior such as cognition was not a proper object of scientific study. The reason given was that nobody besides the cognizer (thinker, rememberer, etc.) could verify the accuracy of the data.

However, as time went on, more psychologists became willing to accept self-reports of thoughts, feelings, etc., as acceptable data, even if they had to be taken with a grain of salt. For ‘radical behaviorists’, cognitions were viewed solely as dependent variables — i.e., as behaviors that could be related functionally to events in the environment but that had no causal properties of their own. Ruled out was the possibility that private events such as thoughts (e.g., “I’m in danger”) could directly cause other private events (e.g., “I feel scared”) or public events (e.g., running away from a stranger).

Because this was such an arbitrary posture and so in conflict with universal experience, behaviorism gave way to the ‘cognitive revolution’ in psychological research. The result has been a great surge in studies of cognitive learning and its effects on the course of people’s lives.

Clinical psychologist Albert Ellis and psychiatrist Aaron T. Beck were working along parallel lines. Both noticed how people’s thoughts seemed to correlate closely with their feelings, moods and actions. Moreover, they were willing to assume that, at least a good part of the time, clients’ troubles were being caused by the form and content of their thoughts and by their beliefs about themselves and the world.

This was a radical shift from the psychoanalytic view, which seeks to explain ‘neurotic’ experience and behavior in terms of events remote in time, particularly early childhood. The trouble with the psychoanalytic approach is that, at best, it can explain how a person’s troubles first began. But we are left without a truly satisfactory answer to the question of how early experiences could be responsible for difficulties in the present. The cognitive focus of Ellis and Beck helped bridge this gap, and brought the new, expanded scope of learning theory into applied clinical services.

In addition, Ellis and Beck stressed conscious thought, which psychoanalysis had tended to dismiss as little more than a voice-over narration, or as a breeding-ground for comfortable excuses, while the ‘real’ motives behind people’s behavior were to be found in the unconscious. Cognitive psychology also recognizes and studies nonconscious processes, but defines them very differently and accords them a less central role in determining behavior. For cognitive psychologists there is no murderous, sex-crazed ‘id’ to be tamed — only ordinary biological processes.

Classical behavior therapy, based on the work of Pavlov, Watson, Skinner and their adherents, and cognitive therapy, based on the work of Ellis, Beck, and countless cognitive researchers in university research labs, began as separate and distinct applications of learning theory to clinical work. But in time their fundamental compatibility was recognized, and most science-oriented clinicians have come to practice cognitive behavior management.

However, Beck and Ellis used only a small sampling of the cognitive behavior management repertoire [ignoring imagery almost totally], tended to confuse it with the psychodynamic processes they had previously learned, and continued to use the biomedical jargon that went with those processes. Thus cognitive therapy came to be considered a form of psychodynamic intervention, despite the fact that it was of quite a different order. Fortunately, key theoretical developments were being made in several fields, including linguistics, anthropology, psychology and artificial intelligence. One of the main engines was artificial intelligence, which was engaged in getting computers to learn. The complexity of teaching computers how to behave cognitively created a swell of research into the processes involved in human thinking.

Another major influence on cognitive behavior management came from a linguistic theory of Noam Chomsky’s. Transformational grammar is a theory of how grammatical knowledge is represented and processed in the brain, and it consists of:

1. Two levels of representation of the structure of sentences: an underlying, more abstract form, termed ‘deep structure’, and the actual form of the sentence produced, called ‘surface structure’. Deep structure is represented in the form of a hierarchical tree diagram, or “phrase structure tree”, depicting the abstract grammatical relationships between the words and phrases within a sentence.
2. A system of formal rules specifying how deep structures are to be transformed into surface structures.

Consider the two sentences “Steven wrote a book on language” and “A book on language was written by Steven.” Chomsky held that there is a deeper grammatical structure from which both of these sentences are derived. Transformational grammar provides a characterization of this common form and of how it is manipulated to produce actual sentences.
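
To make the two-level idea concrete, here is a small Python sketch. It is a deliberate simplification, not Chomsky’s actual formalism: the dictionary stands in for a phrase-structure tree, and the two functions stand in for transformation rules. It derives both of the sentences above from one shared underlying representation.

```python
# Toy illustration of deep vs. surface structure.
# (A simplification, not Chomsky's formalism; the representation and
# rules here are hypothetical stand-ins.)

# One underlying ("deep") representation of the proposition:
deep_structure = {
    "agent": "Steven",
    "verb": ("write", "wrote", "written"),  # base, past, participle
    "patient": "a book on language",
}

def active_surface(ds):
    """Transformation yielding the active surface structure."""
    _, past, _ = ds["verb"]
    return f'{ds["agent"]} {past} {ds["patient"]}.'

def passive_surface(ds):
    """Transformation yielding the passive surface structure."""
    _, _, participle = ds["verb"]
    return f'{ds["patient"].capitalize()} was {participle} by {ds["agent"]}.'

print(active_surface(deep_structure))   # Steven wrote a book on language.
print(passive_surface(deep_structure))  # A book on language was written by Steven.
```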

John Grinder and Richard Bandler utilized Transformational Grammar as a basis for developing a method of human service intervention based upon ‘modeling’ exemplars to determine how they did what they did. Because the exemplars they chose were experts in the field of clinical intervention, this led to the understanding that people could be taught how to do clinical interventions through what is now called Neurolinguistic Programming.

We as human beings use our language in two ways. We use it first of all to represent our experience – we call this activity reasoning, thinking, fantasizing, and rehearsing. We are creating a model of our experience. This model of the world that we create by our representational use of language is based upon our perceptions of the world. Our perceptions are also partially determined by our model or representation.

Since we use language as a representational system, our linguistic representations are subject to the three universals of human modeling: Generalization, Deletion and Distortion.

Generalization is the process by which elements or pieces of a person’s model become detached from their original experience and come to represent the entire category of which the experience is an example. Our ability to generalize is essential to coping with the world.

Generalization may lead a human being to establish a rule such as ‘Don’t express feelings’. This rule, in the context of a prisoner-of-war camp, may have a high survival value. However, using the same rule in a marriage limits the potential for intimacy.

Deletion is a process by which we selectively pay attention to certain dimensions of our experiences and exclude others. An example would be the ability that people have to filter out or exclude all other sound in a room full of people talking in order to listen to one particular person’s voice.

In the structure of the person’s use of language we can identify differing types of deletions that occur regularly. The deletion may simply be a ‘shorthand’ method of responding in which the person is easily able to specify what is missing; or the deletion may confuse the client as well as the clinician, since the client is unable, when attention is drawn to it, to supply the additional information without help.

Distortion is a process that allows us to make shifts in our experience of sensory data. Fantasy, for example, allows us to prepare for experiences that we may have before they occur. All the great novels, all the revolutionary discoveries of the sciences involve the ability to distort and misrepresent reality.

Secondly, we use our language to communicate our model or representation of the world to each other. We call it talking, discussing, writing, lecturing, and singing. We are not conscious of the process of selecting words to represent our experience. We are almost never conscious of the way in which we order and structure the words we select. Language so fills our world that we move through it as a fish swims through water.

To qualify as a native speaker …one must learn… rules…. This is to say, of course, that one must learn to behave as though one knew the rules. [Slobin, 1967]

…people have consistent intuitions about the language they speak.

Cognitive neuroscience, as it is typically called, is creating a biological basis for understanding what is happening in the mind.

Foremost among the contributions of cognitive neuroscience is the expansion of learning theory to encompass brain structure and functioning. This new frontier in learning theory is called Neural Network Learning Theory (NNLT), and it is a fascinating and promising development indeed — not merely in pure science, but (when more is known) in the applied science of clinical psychology.

It is now well established that brains are made up of two basic structures — neurons and synapses — and that together they form extensive ‘neural networks’ that perform all psychological operations such as thinking, learning, memory and emotion, and that these network operations both drive, and are driven in part by, motor activity.

Neurons are the individual brain cells, of which there are estimated to be around 100 billion in humans. Synapses are the connections between and among neurons; the average neuron is thought to be connected to its neighbors by a thousand or more of them. That makes something on the order of 100 trillion synapses in all — enough to encode and store all the information that comes our way many times over.
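
The arithmetic is easy to check with round figures (both numbers below are rough, illustrative estimates):

```python
neurons = 100e9                # ~100 billion neurons (rough estimate)
synapses_per_neuron = 1_000    # a conservative round figure
total = neurons * synapses_per_neuron
print(f"{total:.0e} synapses") # 1e+14, i.e. about 100 trillion
```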

Neuroscientists have discovered that our brains are physically altered by what we experience and thus learn. It is as if the chips in your computer were actually rewired somewhat every time you ran a program. In brains, the software is the hardware is the software.

Here’s how it seems to work. Let’s say a new neighbor moves in next door. She introduces herself. You find her likable, and chat enough with her that you notice and learn a few other things about her. We’ll call that the ‘event’ that has just become part of your experience and thus been added to your personal learning history. This begins a process in which some of the synapses in your brain become more efficient in transmitting signals to the ‘downstream’ neurons they’re connected to, and others become less efficient. These local efficiencies are called ‘synaptic weights’. As life goes on, synaptic-weight patterns change.

The experience is encoded in a change in the pattern of synaptic weights in some network or — most often — an interconnected, overlapping set of networks in your brain. This change constitutes your memory for the event — including all of its implications and ramifications. These might include changes that assign your new neighbor to various categories encoded in your brain, such as ‘woman’, ‘likeable’, ‘short’, ‘wears plaid shirts’, ‘computer programmer’, ‘thirtyish’, ‘intelligent’, ‘green-eyed’, and so forth. This all goes on without much conscious effort or even awareness on your part.

Because the newly altered networks are interconnected and overlapping, they constitute an intricate ‘filing system’ for memory retrieval. When, in future, any encoded aspect of what you have learned about your neighbor becomes personally relevant, several or all of the affected networks can be activated. If you see her, for example, you will find it easier to think of other likeable, short (and so on) women, and of other people who program computers for a living.
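
A classic small-scale illustration of memory stored as synaptic weights is a Hopfield-style associative network, in which a whole pattern can be retrieved from a partial cue. The NumPy sketch below shows that general principle only; the network size and stored patterns are arbitrary choices, and nothing here models an actual brain circuit.

```python
import numpy as np

# Minimal Hopfield-style associative memory: the 'memory' lives entirely
# in the synaptic weight matrix, and a partial cue retrieves the whole
# stored pattern. (Illustrative sketch; sizes and patterns are arbitrary.)

rng = np.random.default_rng(0)
N = 64                                       # number of model 'neurons'
patterns = rng.choice([-1, 1], size=(3, N))  # three stored 'events'

# Hebbian encoding: each experience nudges the synaptic weights.
W = np.zeros((N, N))
for p in patterns:
    W += np.outer(p, p) / N
np.fill_diagonal(W, 0)                       # no self-connections

# Retrieval: start from a degraded cue (half the units scrambled)...
cue = patterns[0].copy()
cue[: N // 2] = rng.choice([-1, 1], size=N // 2)

state = cue
for _ in range(10):                          # ...and let the network settle.
    state = np.sign(W @ state)
    state[state == 0] = 1

print("overlap with stored memory:", (state == patterns[0]).mean())
# Typically prints 1.0: the full pattern is reconstructed from the fragment.
```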

Pretty amazing thing, this network-based system for encoding, learning, memory storage and retrieval, isn’t it? There is much more to be said about the topic, and you can read about it elsewhere. But even with the limited glimpse given here, you should have no trouble thinking on your own about NNLT, and how it might explain things in your experience and point the way to practical applications from which you could benefit.

Gerald Edelman has taken this neurological process one step further and asks the question: how does the brain ‘decide’ which thoughts are most important?

The answer Edelman proposes is that an evolutionary process takes place – not one that selects organisms and takes millions of years, but one that occurs within each particular organism, and within its lifetime, through competition among cells [or rather groups of cells] in the brain.

Edelman discusses two kinds of selection in the evolution of the nervous system: ‘developmental’ and ‘experiential’. The first takes place largely before birth. The genetic instructions in each organism provide general constraints for neural development, but they cannot specify the exact destination of each developing nerve cell, for these grow and die, migrate in great numbers and in entirely unpredictable ways; all of them are ‘gypsies’, as Edelman likes to say. Thus the vicissitudes of fetal development themselves produce in every brain unique patterns of neurons and neuronal groups [‘developmental selection’]. Even identical twins with identical genes will not have identical brains at birth; the fine details of cortical circuitry will be quite different. Such variability, Edelman points out, would be a catastrophe in virtually any mechanical or computational system, where exactness and reproducibility are of the essence. But in a system in which selection is central, the consequences are different: here variation and diversity are themselves of the essence.

…the creature is born, thrown into the world, there to be exposed to a new form of selection based upon experience [‘experiential selection’]. …a sudden, incomprehensible [perhaps terrifying] explosion of electromagnetic radiation, sound waves, and chemical stimuli…. …the world encountered is not one of complete meaninglessness and pandemonium, for the infant shows selective attention and preferences from the start. These (innate) biases Edelman calls ‘values’. Such values are essential for adaptation and survival. These ‘values’ – drives, instincts, intentionalities – serve to weight experiences differently, to orient the organism toward survival and adaptation, to allow what Edelman calls ‘categorization on value’. … ‘values’ are experienced, internally, as feelings: without feeling there can be no animal life.

At a more elementary physiological level, there are various sensory and motor ‘givens’, from the reflexes that automatically occur [for example the response to pain] to innate mechanisms in the brain, as, for example, the feature detectors in the visual cortex that, as soon as they are activated, detect verticals, horizontals, angles, etc., in the visual world. Thus we have a certain amount of basic equipment; but … very little else is programmed or built in.

It is up to the infant animal, … to create its own categories and to use them to make sense of, to construct a world – and it’s not just a world that the infant constructs, but its own world, a world constituted from the first by personal meaning and reference.

…a unique neuronal pattern of connections is created and then,…experience acts upon this pattern, modifying it by selectively strengthening or weakening connections between neuronal groups, or creating entirely new connections.

Thus experience itself is not passive, a matter of ‘impressions’ or ‘sense-data’, but active, and constructed by the organism from the start.

Every perception … is an act of creation.

This perceptual generalization is dynamic and not static, and depends on the active and incessant orchestration of countless details. Such a correlation is possible because of the very rich connections between the brain’s maps, connections which are reciprocal and may contain millions of fibers.

…a continuous ‘communication’ between the active maps themselves, which enables a coherent construct such as ‘chair’ to be made.

…the outputs of innumerable maps…not only complement one another at a perceptual level but are built up at higher and higher levels. …the brain… ‘categorizes its own categorizations’, and does so by a process that can ascend indefinitely to yield more generalized pictures of the world.

…this re-entrant signaling is different from the process of ‘feedback’, which merely corrects errors.

…at higher levels, where flexibility and individuality are all-important and where new powers and new functions are needed and created, one requires a mechanism that can construct, not just control and correct.

The construction of perceptual categorizations and maps, the capacity for generalization made possible by re-entrant signaling, is the beginning of psychic development, and far precedes the development of consciousness or mind, or of attention or concept formation – yet it is a prerequisite for all of these…. Perceptual categorization, …, is the first step, and it is crucial for learning, but it is not something fixed, something that occurs once and for all. On the contrary, there is then a continual recategorization, and this itself constitutes memory.

Unlike computer-based memory, brain-based memory is inexact, but it is also capable of great degrees of generalization.

Primary consciousness is the state of being mentally aware of things in the world, of having mental images in the present. But it is not accompanied by any sense of [being] a person with a past and a future… In contrast, higher-order consciousness involves the recognition by a thinking subject of his or her own acts and affections. It embodies a model of the personal, and of the past and future as well as the present… It is what we as humans have in addition to primary consciousness. [Edelman]

Higher-order consciousness allows us to reflect, to introspect, to draw upon culture and history, and to achieve by these means a new order of development and mind. To become conscious of being conscious, Edelman stresses, systems of memory must be related to a representation of a self. This is not possible unless the contents, the ‘scenes’, of primary consciousness are subjected to a further process and are themselves recategorized.

Language immensely facilitates and expands this by making possible previously unattainable conceptual and symbolic powers. Thus two steps, two re-entrant processes, are envisaged: first, the linking of primary (or ‘value-category’) memory with current perception – a perceptual ‘bootstrapping’ that creates primary consciousness; second, a linking between symbolic memory and conceptual centers – the ‘semantic bootstrapping’ necessary for higher consciousness. ‘Consciousness of consciousness’ becomes possible.

New theories arise from a crisis in scientific understanding, an acute incompatibility between observations and existing theories.

Body-image is not something fixed, but plastic and dynamic, dependent upon a continual inflow of experience and use. If there is continuing interference with one’s perception of a limb or its use, there is not only a rapid loss of its cerebral map, but a rapid remapping of the rest of the body, which then excludes the limb itself.

Thelen finds that the development of skills, as Edelman’s theory would suggest, follows no single programmed or prescribed pattern. Indeed there is great variability.

Remembering is not the re-extraction of innumerable fixed, lifeless and fragmentary traces. It is an imaginative reconstruction, or construction, built out of the relation of our attitude toward the whole mass of organized past reactions or experiences. [Frederic Bartlett]

The essential achievement of primary consciousness is to bring together the many categorizations involved in perception into a scene. The advantage of this is that “events that may have had significance to an animal’s past learning can be related to new events”. The relation established will not be a causal one, or one necessarily related to anything in the outside world; it will be an individual (or ‘subjective’) one, based on what has had ‘value’ or ‘meaning’ for the animal in the past.

The ‘scene’ is not an image, not a picture, but a correlation between different kinds of categorization.

Higher-order consciousness arises from primary consciousness; it supplements it, it does not replace it. It is dependent on the evolutionary development of language, together with the evolution of symbols and of cultural exchange; and with these it brings an unprecedented power of detachment, generation, and reflection, so that finally self-consciousness is achieved: the consciousness of being a self in the world, with human experience and imagination to call upon.

While we are still very limited in our ability to study directly the fine detail of neurons and synapses at work, great strides are being made through computer simulations. Computers can easily be programmed with virtual neural networks similar to those in living brains, which can then be experimented on.

Such direct procedures are indispensable at various points in experimental cognitive neuroscience. But even when they are not feasible, research with virtual neural networks is helping us decide what will be worth studying in live brains while we await the technology for doing so in an ethical and efficient manner. Meanwhile, some studies of virtual neural networks have succeeded in simulating processes already known to occur in real brains — notably, for our purposes, learning, memory and certain features of what is often referred to as ‘mental illness’.
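
To give a flavor of what experimenting on a virtual neural network looks like, the NumPy sketch below trains a tiny two-layer network on the XOR task, a standard textbook demonstration. The task, network size and parameters are arbitrary illustrative choices and are not drawn from any particular study.

```python
import numpy as np

# Toy 'virtual neural network' experiment: a two-layer network learns the
# XOR mapping by gradient descent. (All sizes and parameters here are
# arbitrary illustrative choices, not taken from any particular study.)

rng = np.random.default_rng(1)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)   # XOR targets

W1 = rng.normal(size=(2, 8))    # input-to-hidden 'synaptic weights'
b1 = np.zeros(8)
W2 = rng.normal(size=(8, 1))    # hidden-to-output 'synaptic weights'
b2 = np.zeros(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 1.0                        # learning rate (illustrative)
for _ in range(5000):
    # Forward pass: compute the network's current answers.
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)

    # Backward pass: gradients of the squared error.
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)

    # 'Synaptic' weight updates, i.e., the learning itself.
    W2 -= lr * (h.T @ d_out)
    b2 -= lr * d_out.sum(axis=0)
    W1 -= lr * (X.T @ d_h)
    b1 -= lr * d_h.sum(axis=0)

print(out.round(2).ravel())     # should approach [0, 1, 1, 0]
```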

For additional information, the reader may want to explore Relational Frame Theory, Constructivist Learning Theory and/or Contextualist Philosophy.