By way of introduction, here is the first section from "Introduction to Computational Cognitive Modeling," by Ron Sun, to begin a definition of what is meant by computational models of cognition, or of a computational theory of mind.
And here also is the well-known philosopher (if only for his hair) David Chalmers offering a brief introduction to the topic in his paper "A Computational Foundation for the Study of Cognition," written in 1993 and later published in the Journal of Cognitive Science (2012):
1. What is Computational Cognitive Modeling?
Research in computational cognitive modeling, or simply computational psychology, explores the essence of cognition (broadly defined, including motivation, emotion, perception, and so on) and various cognitive functionalities through developing detailed, process-based understanding by specifying corresponding computational models (in a broad sense) of representations, mechanisms, and processes. It embodies descriptions of cognition in computer algorithms and programs, based on computer science (Turing 1950). That is, it imputes computational processes (in a broad sense) onto cognitive functions, and thereby it produces runnable computational models. Detailed simulations are then conducted based on the computational models (see, e.g., Newell 1990, Rumelhart et al 1986, Sun 2002). Right from the beginning of the formal establishment of cognitive science around the late 1970's, computational modeling has been a mainstay of cognitive science.
In general, models in cognitive science may be roughly categorized into computational, mathematical, or verbal-conceptual models (see, e.g., Bechtel and Graham 1998). Computational models (broadly defined) present process details using algorithmic descriptions. Mathematical models present relationships between variables using mathematical equations. Verbal-conceptual models describe entities, relations, and processes in rather informal natural language. Each model, regardless of its genre, might as well be viewed as a theory of whatever phenomena it purports to capture (as argued extensively before by, for example, Newell 1990, Sun 2005).
Although each of these types of models has its role to play, in this volume, we will be mainly concerned with computational modeling (in a broad sense), including those based on computational cognitive architectures. The reason for this emphasis is that, at least at present, computational modeling (in a broad sense) appears to be the most promising approach in many respects, and it offers the flexibility and the expressive power that no other approach can match, as it provides a variety of modeling techniques and methodologies and supports practical applications of cognitive theories (Pew and Mavor 1998). In this regard, note that mathematical models may be viewed as a subset of computational models, as normally they can readily lead to computational implementations (although some of them may appear sketchy and lack process details).
Computational models are mostly process-based theories. That is, they are mostly directed at answering the question of how human performance comes about, by what psychological mechanisms, processes, and knowledge structures, and in what ways exactly. In this regard, note that it is also possible to formulate theories of the same phenomena through so-called "product theories", which provide an accurate functional account of the phenomena but do not commit to a particular psychological mechanism or process (Vicente and Wang 1998). We may also term product theories black-box theories or input-output theories. Product theories do not make predictions about processes (even though they may constrain processes). Thus, product theories can be evaluated mainly by product measures. Process theories, in contrast, can be evaluated by using process measures when they are available and relevant (which are, relatively speaking, rare), such as eye movement and duration of pause in serial recall; or by using product measures, such as recall accuracy, recall speed, and so on. Evaluation of process theories using the latter type of measures can only be indirect, because process theories have to generate an output given an input based on the processes postulated by the theories (Vicente and Wang 1998). Depending on the amount of process details specified, a computational model may lie somewhere along the continuum from pure product theories to pure process theories.
There can be several different senses of “modeling” in this regard, as discussed in Sun and Ling (1998). The match of a model with human cognition may be, for example, qualitative (i.e., nonnumerical and relative), or quantitative (i.e., numerical and exact). There may even be looser “matches” based on abstracting general ideas from observations of human behaviors and then developing them into computational models. Although different senses of modeling or matching human behaviors have been used, the overall goal remains the same, which is to understand cognition (human cognition in particular) in a detailed (process-oriented) way.
This approach of utilizing computational cognitive models for understanding human cognition is relatively new. Although earlier precursors might be identified, the major developments of computational cognitive modeling have occurred since the 1960’s. It has since been nurtured by the Annual Conferences of the Cognitive Science Society (which began in the late 1970’s), by the International Conferences on Cognitive Modeling (which began in the 1990’s), as well as by the journals of Cognitive Science (which began in the late 1970’s), Cognitive Systems Research (which began in the 1990’s), and so on.
From Schank and Abelson (1977) to Minsky (1981), a variety of influential symbolic "cognitive" models were proposed in Artificial Intelligence. They were usually broad and capable of a significant amount of information processing. However, they were usually not rigorously matched against human data. Therefore, it was hard to establish the cognitive validity of many of these models. Psychologists have also been proposing computational cognitive models, which are usually narrower and more specific. They were usually more rigorously evaluated in relation to human data. An early example is Anderson's HAM (Anderson 1983). Many such models were inspired by symbolic AI work at that time (Newell and Simon 1976).
The resurgence of neural network models in the 1980’s brought another type of model into prominence in this field (see, e.g., Rumelhart et al 1986, Grossberg 1982). Instead of symbolic models that rely on a variety of complex data structures that store highly structured pieces of knowledge (such as Schank’s scripts or Minsky’s frames), simple, uniform, and often massively parallel numerical computation was used in these neural network models (Rumelhart et al 1986). Many of these models were meant to be rigorous models of human cognitive processes, and they were often evaluated in relation to human data in a quantitative way (but see Massaro 1988).
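The contrast drawn here can be made concrete with a minimal sketch (not any specific published model) of the kind of unit these networks compose. All "knowledge" lives in numerical weights rather than in structured symbols like scripts or frames, and a layer applies many identical units in parallel. The weights and inputs below are made up for illustration:

```python
import math

def unit_output(inputs, weights, bias):
    """One connectionist unit: a weighted sum of its inputs
    passed through a sigmoid activation."""
    net = sum(w * x for w, x in zip(weights, inputs)) + bias
    return 1.0 / (1.0 + math.exp(-net))

def layer_output(inputs, weight_rows, biases):
    """A layer applies many such units, uniformly and in
    parallel, to the same input vector."""
    return [unit_output(inputs, row, b)
            for row, b in zip(weight_rows, biases)]

# Two hypothetical units reading a three-valued input vector.
out = layer_output([1.0, 0.0, -1.0],
                   [[0.5, -0.3, 0.8], [-0.2, 0.9, 0.1]],
                   [0.0, 0.1])
```

Learning, in such models, consists of gradually adjusting these weights from data, which is why their fit to human performance can be assessed quantitatively in a way that hand-built symbolic structures often could not.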
Hybrid models that combine the strengths of neural networks and symbolic models emerged in the early 1990’s (see, e.g., Sun and Bookman 1994). Such models could be used to model a wider variety of cognitive phenomena due to their more diverse and thus more expressive representations (but see Regier 2003 regarding constraints on models). They have been used to tackle a broad range of cognitive data, often (though not always) in a rigorous and quantitative way (see, for example, Sun and Bookman 1994, Sun 1994, Anderson and Lebiere 1998, Sun 2002).
For overviews of some currently existing software, tools, models, and systems for computational cognitive modeling, the reader may refer to the following Websites (among others):
as well as the following Websites for specific software, cognitive models, or cognitive architectures (e.g., Soar, ACT-R, and CLARION):
1. The roots of cognitive science can, of course, be traced back to much earlier times. For example, Newell and Simon’s early work in the 60’s and 70’s has been seminal (see, e.g., Newell and Simon 1976). The work of Miller, Galanter, and Pribram (1960) has also been highly influential. See the chapter by Boden in this volume for a more complete historical perspective (see also Boden 2006).
Where Gerard O'Brien seems to differ from these more traditional models is in his adherence to a connectionist model of neural networks, which he describes as fully computational. Here is a brief definition of connectionism from Wikipedia:
Perhaps no concept is more central to the foundations of modern cognitive science than that of computation. The ambitions of artificial intelligence rest on a computational framework, and in other areas of cognitive science, models of cognitive processes are most frequently cast in computational terms. The foundational role of computation can be expressed in two basic theses. First, underlying the belief in the possibility of artificial intelligence there is a thesis of computational sufficiency, stating that the right kind of computational structure suffices for the possession of a mind, and for the possession of a wide variety of mental properties. Second, facilitating the progress of cognitive science more generally there is a thesis of computational explanation, stating that computation provides a general framework for the explanation of cognitive processes and of behavior.
These theses are widely held within cognitive science, but they are quite controversial. Some have questioned the thesis of computational sufficiency, arguing that certain human abilities could never be duplicated computationally (Dreyfus 1974; Penrose 1989), or that even if a computation could duplicate human abilities, instantiating the relevant computation would not suffice for the possession of a mind (Searle 1980). Others have questioned the thesis of computational explanation, arguing that computation provides an inappropriate framework for the explanation of cognitive processes (Edelman 1989; Gibson 1979), or even that computational descriptions of a system are vacuous (Searle 1990, 1991).
Advocates of computational cognitive science have done their best to repel these negative critiques, but the positive justification for the foundational theses remains murky at best. Why should computation, rather than some other technical notion, play this foundational role? And why should there be the intimate link between computation and cognition that the theses suppose? In this paper, I will develop a framework that can answer these questions and justify the two foundational theses.
In order for the foundation to be stable, the notion of computation itself has to be clarified. The mathematical theory of computation in the abstract is well-understood, but cognitive science and artificial intelligence ultimately deal with physical systems. A bridge between these systems and the abstract theory of computation is required. Specifically, we need a theory of implementation: the relation that holds between an abstract computational object (a "computation" for short) and a physical system, such that we can say that in some sense the system "realizes" the computation, and that the computation "describes" the system. We cannot justify the foundational role of computation without first answering the question: What are the conditions under which a physical system implements a given computation? Searle (1990) has argued that there is no objective answer to this question, and that any given system can be seen to implement any computation if interpreted appropriately. He argues, for instance, that his wall can be seen to implement the Wordstar program. I will argue that there is no reason for such pessimism, and that objective conditions can be straightforwardly spelled out.
Once a theory of implementation has been provided, we can use it to answer the second key question: What is the relationship between computation and cognition? The answer to this question lies in the fact that the properties of a physical cognitive system that are relevant to its implementing certain computations, as given in the answer to the first question, are precisely those properties in virtue of which (a) the system possesses mental properties and (b) the system's cognitive processes can be explained.
The computational framework developed to answer the first question can therefore be used to justify the theses of computational sufficiency and computational explanation. In addition, I will use this framework to answer various challenges to the centrality of computation, and to clarify some difficult questions about computation and its role in cognitive science. In this way, we can see that the foundations of artificial intelligence and computational cognitive science are solid.
Connectionism is a set of approaches in the fields of artificial intelligence, cognitive psychology, cognitive science, neuroscience, and philosophy of mind that models mental or behavioral phenomena as the emergent processes of interconnected networks of simple units. There are many forms of connectionism, but the most common forms use neural network models.

This is closer to what I tend to believe than more traditional computational models, although I am not sure how he or other connectionists account for the input of the body and its role in shaping the mind (see Lakoff and Johnson, Philosophy in the Flesh).
This is a nice discussion from the IEET podcast, Rationally Speaking.
Rationally Speaking | Posted: Oct 28, 2013
This episode of Rationally Speaking features philosopher Gerard O'Brien from the University of Adelaide, who specializes in the philosophy of mind. Gerard, Julia, and Massimo discuss the computational theory of mind and what it implies about consciousness, intelligence, and the possibility of uploading people onto computers. (Physical Computation and Cognitive Science, in the series Studies in Applied Philosophy, Epistemology and Rational Ethics, was published Oct 27, 2013.)
Gerard's pick: "Alan Turing: The Enigma (The Centenary Edition)"