Literature Review: Cognitive Functioning Models and Cognitive Brain Imaging


Currently, there are two fundamental approaches to modeling in cognitive science: the connectionist approach and the probabilistic (or computational) approach. The probabilistic approach is viewed as the top-down approach to studying the mind, whereas the connectionist approach is viewed as the bottom-up approach. Connectionist modeling begins with “the characterization of the neural mechanism and exploring what macro-level functional phenomenon might emerge” (Griffiths et al., 2010). In contrast, the probabilistic approach starts with “identifying the ideal solutions, then, modeling the mental process using algorithms to approximate the solutions” (Griffiths et al., 2010).

For the purposes of this review, I will focus on the Box-and-Arrow concept, as it forms the foundation of all three cognitive function models discussed later in this review (Griffiths et al., 2010). Box-and-Arrow information-processing models are normally designed to follow an input-cognitive system-output logic. For a subject with intact cognitive functioning, input is sent to a specific area of the brain (the cognitive system) to be processed, which results in the desired, correct outcome. Box-and-Arrow models are normally depicted using fairly general verbal descriptions of what an individual with intact cognitive function would produce given the same input.

To detect cognitive impairments, a model designer can change the cognitive structures of the model to mimic those of a cognitively impaired subject while keeping the input the same. Investigators can then compare the outcomes of this cognitively impaired model to those of the model with intact cognitive functions. The difference in outputs between the two models helps investigators localize the impaired area of cognitive function in the brain. Although predictions based on Box-and-Arrow models are fairly good at capturing the characteristics of normal and impaired cognitive function, they are “generally unreliable to account for detailed phenomenon” (Ashby & Maddox, 1993).
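To illustrate the logic, here is a minimal sketch of the intact-versus-impaired comparison. The functions and the particular “impairment” are invented for illustration only; they do not come from any published model.

```python
# Minimal sketch of the Box-and-Arrow logic: the same input passes through
# an intact and a hypothetically "impaired" cognitive system, and the
# outputs are compared to localize the deficit. All names are illustrative.

def intact_system(word: str) -> str:
    """Intact cognitive system: processes the input correctly."""
    return word.upper()          # stand-in for correct processing

def impaired_system(word: str) -> str:
    """Impaired system: a degraded processing stage (drops letters)."""
    return word[::2].upper()     # stand-in for a lesioned processing stage

for stimulus in ["apple", "table"]:
    normal = intact_system(stimulus)
    impaired = impaired_system(stimulus)
    if normal != impaired:
        print(f"{stimulus!r}: intact -> {normal}, impaired -> {impaired} (deficit detected)")
```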

There are numerous types of cognitive functioning models in the literature. For the purpose of this synthesis, I have chosen to focus on three of these models: 1) the prototype model of categorization, 2) the exemplar model of categorization, and 3) artificial neural network models.

In the prototype model of categorization (the nearest-prototype classifier), the “learner estimates the central tendency from all the examples experienced from and within each category during the training” (Ashby & Maddox, 1993). The learner is then able to “assign any new observed instances to the class of the prototype that is nearest” (Gagliardi, 2008).
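As a concrete illustration, the following is a minimal nearest-prototype classifier in Python. The toy data, Euclidean distance measure, and function names are my own assumptions, not taken from Ashby and Maddox (1993) or Gagliardi (2008).

```python
import numpy as np

# Nearest-prototype sketch: each category is summarized by the mean (central
# tendency) of its training examples; a new item is assigned to the category
# whose prototype is closest.

def fit_prototypes(X, y):
    """Return {label: mean feature vector} for each category."""
    return {label: X[y == label].mean(axis=0) for label in np.unique(y)}

def predict_prototype(prototypes, x):
    """Assign x to the category with the nearest prototype."""
    return min(prototypes, key=lambda label: np.linalg.norm(x - prototypes[label]))

X = np.array([[1.0, 1.0], [1.2, 0.9], [5.0, 5.0], [4.8, 5.2]])
y = np.array(["A", "A", "B", "B"])
prototypes = fit_prototypes(X, y)
print(predict_prototype(prototypes, np.array([1.1, 1.0])))  # -> "A"
```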

The exemplar-based model (the nearest-neighbor classifier) is referred to as the memory-based model (Gagliardi, 2009). There is no separate learning phase in this model. Instead, the learner memorizes all the category examples during training, and when a new stimulus is presented, the “category with the greatest total similarity is chosen” from the stored, or memorized, examples (Ashby & Maddox, 1993).
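A matching sketch of the exemplar (memory-based) approach follows. The exponential similarity function and toy data are illustrative assumptions, not the specific similarity rule used in the cited work.

```python
import numpy as np

# Exemplar-model sketch: every training example is stored as-is, and a new
# stimulus goes to the category with the greatest total similarity to its
# stored exemplars. Similarity here is exp(-distance), a common but assumed choice.

def predict_exemplar(X_stored, y_stored, x):
    similarity = np.exp(-np.linalg.norm(X_stored - x, axis=1))
    totals = {label: similarity[y_stored == label].sum()
              for label in np.unique(y_stored)}
    return max(totals, key=totals.get)

X = np.array([[1.0, 1.0], [1.2, 0.9], [5.0, 5.0], [4.8, 5.2]])
y = np.array(["A", "A", "B", "B"])
print(predict_exemplar(X, y, np.array([4.9, 5.1])))  # -> "B"
```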

Artificial neural network (ANN) models have “small numbers of nodes particularly feed forward networks (with input nodes, hidden nodes, and output nodes) and simple recurrent networks (SRNs)” (Krebs, 2005). The feed-forward and simple recurrent network architectures have been used to “model high level cognitive functions such as detecting syntactic and semantic features for words” (Elman, 1990, 1993; as cited in Krebs, 2005), “learning the English past tense of verbs” (Rumelhart & McClelland, 1986; as cited in Krebs, 2005), and “cognitive development” (Shultz, 2003; as cited in Krebs, 2005).

The difference between the prototype models and the exemplar models lies in the assumptions they make regarding what is learned and how the category decision is made. The prototype model assumes that when identifying a category of objects, we refer to a precise object that is typical of the category (Krebs, 2005). Decision making in a prototype model is based on the similarity between the input target and the category prototypes formed during training: the category whose prototype is most similar to the input target is selected. In exemplar models, by contrast, decision making is based on the memorized examples for each of the categories stored in the model. When a new stimulus is presented, its similarity to each stored example is computed, and the category whose examples have the greatest total similarity is chosen. This is based on exemplar theory, which states that “people increment the number of stored exemplars by observing different objects to the same category, and so they categorize new objects according to the stored ones” (Krebs, 2005).

Artificial neural networks (ANNs) are very different from the two models discussed above. There are two main types of ANN models: the feed-forward network and the simple recurrent network. The feed-forward network transfers information unidirectionally from input units to output units via a hidden layer. Simple recurrent networks are believed to be more appropriate for many tasks because they add recurrent connections that feed the hidden layer's previous activations back into the network along with the new input. ANNs are a “loose adaptation of the processes by which the brain is thought to operate” (McMillen & Henley, 2001). The operating processes of ANNs are analogous to learning by experience, as the network “learns associations by modifying the strength of connections between nodes” (McMillen & Henley, 2001). Unlike the other two types of models described above, ANNs are robust and work well with problematic data, such as missing data and data with high random variance.
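To make the contrast concrete, the sketch below (my own illustration, not code from the cited papers) shows a feed-forward pass and the single change that turns it into a simple recurrent step. The layer sizes, random weights, and tanh activations are illustrative assumptions; in practice, the weights would be learned by adjusting connection strengths.

```python
import numpy as np

# Feed-forward pass: information flows one way, input -> hidden -> output.
rng = np.random.default_rng(0)
n_in, n_hidden, n_out = 4, 3, 2

W_in = rng.normal(size=(n_in, n_hidden))      # input-to-hidden connections
W_out = rng.normal(size=(n_hidden, n_out))    # hidden-to-output connections

def feed_forward(x):
    hidden = np.tanh(x @ W_in)                # hidden-layer activations
    return np.tanh(hidden @ W_out)            # output-layer activations

# A simple recurrent network (SRN) differs in that the previous hidden
# state is fed back into the hidden layer at each time step:
W_rec = rng.normal(size=(n_hidden, n_hidden)) # hidden-to-hidden (context) connections

def srn_step(x, h_prev):
    return np.tanh(x @ W_in + h_prev @ W_rec)

x = rng.normal(size=n_in)
print(feed_forward(x))
h = srn_step(x, np.zeros(n_hidden))           # one recurrent step from an empty context
```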

All three of these cognitive models are similar in that they “must account for a common set of empirical laws or basic facts that have accumulated from experiments on categorization” (Krebs, 2005). In addition, they are all based on the basic architectural structure derived from the Box-and-Arrow model (i.e., input, cognitive system, and output). Thus, all of these models are employed to try to understand and detect cognitive functions of the brain. Furthermore, all three models follow the see-think-and-do architectural sequence: a new stimulus is received; a mental picture of the received stimulus is created; and a stored mental construct is used to predict and/or detect its representation.

The models have many aspects that are related to brain cognitive function and metacognition. Elman (1993) posits that “successful learning may depend on starting small”. This is true not only for the models but also for the human child. It is believed that the “greatest learning in humans occurs during childhood” (Elman, 1993), because the most dramatic maturational changes, along with the ability to learn complex language patterns, occur during childhood (Elman, 1993). Like the human child, “a model succeeds only when networks begin with limited working memory and gradually mature to the adult like state” (Elman, 1993). Consequently, the metacognitive ability of the model, like that of a child, will be enhanced if the information (input) is restricted to mimic the developmental restrictions necessary for mastering complex domains such as language acquisition (Dominey, Hoen, Blanc, & Lelekov-Boissard, 2003).

According to Elman (1993), training “fails when models (networks) are fully formed and adult like in their capacity”. The failure may be attributed to two things that happen when we learn complex domains such as language. First, we learn through incremental input of simple, childlike language, with the difficulty progressively increasing until we achieve adult language skills. Second, a child's memory increases in complexity as he or she undergoes developmental changes and matures. For models to be successful, they must take the same approach; starting with full adult-level language leads a model to fail because it is not given the opportunity to start small and increase in complexity.
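As a rough illustration of the “starting small” idea, the sketch below stages learning so that the learner's effective memory window grows over time. The window schedule, sample sentence, and function names are my own assumptions, not Elman's (1993) simulation code.

```python
# "Starting small" sketch: the learner's usable context window starts short
# and grows across training phases, mimicking maturation of working memory.

def truncate_context(sequence, window):
    """Limit how much prior context the learner can use (limited working memory)."""
    return sequence[-window:]

training_phases = [(3, "early phase"), (6, "middle phase"), (12, "adult phase")]
sentence = "the boy who chases the dog runs home".split()

for window, stage in training_phases:
    visible = truncate_context(sentence, window)
    print(f"{stage}: learner sees {' '.join(visible)!r}")
```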

There are several relationships between these cognitive functioning models and metacognition. First, each of the models employs a “see-think-and-do” sequence (Hudlicka, 2005) similar to a metacognitive process. The models “map incoming stimuli (cue) onto an outgoing behavior (action) through a series of representational structures referred to as mental construct” (Hudlicka, 2005). The mental construct created in the training cycle is then used to predict which action to take when the model encounters a new stimulus that resembles a particular mental construct. The subsequent encounter with a stimulus resembles the feedback mechanism in a metacognitive process. In addition, sequential procedural activities, like those used in these models, support metacognition. Finally, the cognitive system architecture of the models resembles metacognitive functions such as “attention allocation, checking, planning, memory retrieval and encoding strategies, and detection of performance errors” (Hudlicka, 2005).

I will now turn to a neurological process that explains some aspects of cognition. According to Straube (2012), “memory formation comprises at least three sub-processes including encoding, consolidation, and retrieval of the learned material”. In other words, for a memory to form, the brain has to encode the incoming stimulus, consolidate it, and then retrieve it. However, the processes of encoding, consolidation, and retrieval are prone to many types of errors, which may result in false rather than true memories (Straube, 2012).

Declarative memory, or long-term memory, in humans is associated with the recall of facts, knowledge, and events (Straube, 2012). Declarative memory is “further divided into semantic memory and episodic memory” (Straube, 2012). Semantic memory deals with “facts about the world”, while episodic memory “deals with the capacity to re-examine an event in the context in which it originally occurred” (Straube, 2012). Human memory is governed by many factors, including “prior knowledge, present mental state, and emotions” (Straube, 2012). What is retrieved from memory sometimes differs measurably from what was initially encoded. Thus, memory does not “reflect a perfect representation of the external world” (Straube, 2012).

Research indicates that processes like imagery, self-referential processing, and spreading activation at encoding may result in the formation of false memories (Straube, 2012). According to Straube (2012), the memory of an imagined event or “fantasy” can later be falsely remembered as a “true” event and lead to the retrieval of a false memory. In brain imaging research, increased activity in the precuneus region is believed to “indicate the engagement of visual imagery during encoding which can lead to falsely remembering something that was only imagined” (Straube, 2012). Brain imaging results have also indicated that “greater activity in the hippocampus was related to correct context” memory, while the “ventral anterior cingulate cortex was activated for subsequent inaccurate context memory” (Straube, 2012). Similarly, a study using functional magnetic resonance imaging (fMRI) found that “activity in the left ventrolateral prefrontal cortex (PFC) and visual areas at encoding contribute to both true and false memory and the activity in the left posterior medial temporal lobe (MTL) contribute mainly to formation of true memories” (Kim & Cabeza, 2007). These results suggest that activity in different regions of the brain is associated with the creation of false and/or true memories.

Cognitive brain imaging (CBI) research, however, has many critics. Most criticisms relate to three main points: 1) resolution, 2) differences between individuals, and 3) reproducibility. Critics argue that most brain imaging technologies (i.e., MRI, fMRI, and PET) lack the ability to capture brain processes at the level of individual neurons. Instead, their resolution captures processes at the millimeter scale, which critics deem too coarse to detect neural activity occurring at the neuron level. Thus, brain imaging technology provides “an inaccurate reflection of the underlying activity” (Logothetis et al., 2001).

Cognitive brain imaging has also been criticized for not accounting for differences between individuals. This issue was addressed in a brain imaging study by Miller and colleagues (2002), who found substantial variability between individuals but stable activation patterns within individuals over time. Miller et al. (2002) suggest that brain functions related to memory are not localized and may differ significantly between individuals. If true, this suggests the need for caution when interpreting the results of studies that use brain imaging technology to study memory formation.

The issue of reproducibility has also been contentious in cognitive brain imaging research. Reproducibility is the idea that if you repeat an experiment under the same conditions, you should obtain the same results as the original investigator. It is a hallmark of scientific experimentation that allows researchers in the field to validate or invalidate the results of other researchers and to build on each other's work. Critics have argued that results from cognitive brain imaging studies are difficult to reproduce. As stated by Marshall et al. (2004), the “generally poor quantitative task repeatability highlights the need for further methodological developments before much reliance can be placed on functional MR imaging results of single-session experiments”.

In conclusion, cognitive brain imaging techniques can plausibly be used to study some aspects of brain function (e.g., patterns of activity associated with basic learning mechanisms, which are believed to be localized) but are less effective at studying more complex brain functions (e.g., memory formation, which is not believed to be localized). Caution is needed when interpreting the results of cognitive brain imaging studies until the issues of resolution and reproducibility have been addressed.

References

Ashby, F. G., & Maddox, W. T. (1993). Relations between prototype, exemplar, and decision bound models of categorization. Journal of Mathematical Psychology, 37(3), 372-400.

Dominey, P. F., Hoen, M., Blanc, J. M., & Lelekov-Boissard, T. (2003). Neurological basis of language and sequential cognition: Evidence from simulation, aphasia, and ERP studies. Brain and Language, 86, 207-225.

Elman, J. L. (1993). Learning and development in neural networks: The importance of starting small. Cognition, 48, 71-99.

Gagliardi, F. (2009). The necessity of machine learning and epistemology in the development of categorization theories: A case study in prototype-exemplar debate. In AI* IA 2009: Emergent Perspectives in Artificial Intelligence (pp. 182-191). Springer Berlin Heidelberg.

Gagliardi, F. (2008). A prototype-exemplars hybrid cognitive model of “phenomenon of typicality” in categorization: A case study in biological classification. In Proc. 30th Annual Conf. of the Cognitive Science Society, Austin, TX (pp. 1176-1181).

Griffiths, T., Chater, N., Kemp, C., Perfors, A., & Tenenbaum, J. (2010). Probabilistic models of cognition: Exploring representations and inductive biases. Trends in Cognitive Sciences, 14, 357-364.

Hudlicka, E. (2005). Modeling interaction between metacognition and emotion in a cognitive architecture. In Proceedings of the AAAI Spring Symposium on Metacognition in Computation (AAAI Technical Report SS-05-04, pp. 55-61). Menlo Park, CA: AAAI Press.

Kim, H., & Cabeza, R. (2007). Differential contributions of prefrontal, medial temporal, and sensory-perceptual regions to true and false memory formation. Cerebral Cortex, 17(9), 2143-2150.

Krebs, P. R. (2005). Models of cognition: Neurological possibility does not indicate neurological plausibility. [Conference Paper]

Logothetis, N. K., Pauls, J., Augath, M., Trinath, T., & Oeltermann, A. (2001). Neurophysiological investigation of the basis of the fMRI signal. Nature, 412(6843), 150-157.

Marshall, I., Simonotto, E., Deary, I. J., Maclullich, A., Ebmeier, K. P., Rose, E. J., … & Chappell, F. M. (2004). Repeatability of motor and working-memory tasks in healthy older volunteers: Assessment at functional MR imaging. Radiology, 233(3), 868-877.

McMillen, R., & Henley, T. (2001). Connectionism isn't just for cognitive science: Neural networks as methodological tools. The Psychological Record, 51(1), 3-18.

Miller, M. B., Van Horn, J., Wolford, G. L., Handy, T. C., Valsangkar-Smyth, M., Inati, S., Grafton, S., & Gazzaniga, M. S. (2002). Extensive individual differences in brain activations during episodic retrieval are reliable over time. Journal of Cognitive Neuroscience, 14(8), 1200-1214.

Straube, B. (2012). An overview of the neuro-cognitive processes involved in the encoding, consolidation, and retrieval of true and false memories. Behavioral and Brain Functions, 8(35), 1-10.
