I still have this type of nightmare: I’m back at the university, I’m at the exam, I get the question, and I realize that I know nothing about the topic; I didn’t understand it during lectures, and I couldn’t understand it through self-study. I’m terrified. The topic is ‘representational theory of mind’. I wake up in fear, and I decide that I need to read something to understand the representational theory of mind better. The book that I read this time is ‘Representations in Cognitive Science’ by N. Shea.
The representational theory of mind (RTM) is an attempt to explain how the mind (brain) operates in its environment. Let’s say I think of a thing. A certain population of neurons in my brain is activated. One may then say that the thing I’m thinking of is represented by this particular pattern of activity. In the terminology of RTM, the activated neuronal population is the vehicle of a representation, and the content of the representation is the thought itself. My thinking is thus a manipulation of this particular representation. The general underlying assumption is that cognition is computation over representations. The weak spot of the whole theory, in my view, is the question of why we need representations at all to explain cognition. Why should there be meaning carried by the activation of a particular neuronal population? The author of the book offers his own answer.
The author states right away that he is committed to RTM: representations exist, and the brain (mind) uses them. He admits that some computations may be carried out without representations, but those are not the focus of the book; instead, he attempts to establish which computations need representations. When we try to explain an organism’s behavior, and to map the internal workings of the organism onto the outside world, we may do so with or without representations. However, the author suggests, there is a class of functions that are better explained with representations than without. He terms these task functions. A task function is an output of an organism that is robust and stabilized. Robustness means that the desired outcome is likely to be achieved under many external conditions. Stabilization means that the outcome has been shaped by certain factors and become solidified; the factors could be 1) evolution, 2) keeping the organism alive, and 3) learning to produce the outcome. Therefore, when an organism performs computations to achieve a task function, representations arise.
The author thus proposes that an explanation of behavior carried out to implement task functions (robust and stabilized ones) is better with the notion of representational content than without it. It is better because, ostensibly, the organism exploits the correlation between its inner algorithm and the outside world. Additionally, one can say when a representation was correct: a correct representation accounts for success in implementing a behavior (reaching a goal), while a misrepresentation accounts for failure.
The book provides a convenient guide for finding the content of a representation.
1. Find a behavior that is intentional or adaptive.
2. Look for ways the organism might be able to perform such a behavior; namely, look for links between the environment and actions.
3. Identify the organism’s internal processes (for example, neuronal activations) that can be in operation.
4. If an algorithm that explains the observed behavior can reasonably be linked to the organism’s internal processes, then those internal elements likely carry meaning, and the algorithm reveals what they represent.
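The four-step recipe can be illustrated with a toy simulation (my own sketch, not the author’s example; all names and numbers are invented): a hypothetical bacterium-like organism that climbs a sugar gradient. The behavior (step 1) is adaptive; the link to the environment (step 2) is that the organism keeps moving while sugar increases; the internal process (step 3) is an ‘activation’ that tracks sugar concentration; and since the algorithm (step 4) exploits the correlation between that activation and the world, the activation plausibly carries the content ‘sugar is increasing’.

```python
# Toy sketch of the four-step recipe; purely illustrative.

def sense(sugar):
    """Step 3: an internal 'activation' that correlates with sugar level."""
    return min(1.0, sugar / 10.0)

def behave(activation, previous):
    """Steps 2 and 4: the algorithm -- keep running while the activation
    rises, tumble (change direction) when it drops."""
    return "run" if activation > previous else "tumble"

# Step 1: the adaptive behavior -- climbing a sugar gradient.
path = [1.0, 2.0, 4.0, 3.0, 5.0]   # sugar concentrations along the path
previous, actions = 0.0, []
for sugar in path:
    activation = sense(sugar)
    actions.append(behave(activation, previous))
    previous = activation

print(actions)  # ['run', 'run', 'run', 'tumble', 'run']
```

On this picture, a misrepresentation would be an activation that fails to track sugar (say, a noisy sensor), making the organism tumble at the wrong moment; that failure in producing the behavior is precisely what licenses saying the activation was about sugar in the first place.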
The book was a bit difficult for my level of understanding. In parts, when brain activity was described, I was getting the point. Most of the time, however, I found myself in a thick fog of abstract concepts and general contemplation. It’s not that the author didn’t try. The book is actually very well structured: there are summaries at the end of each chapter, as well as a detailed recap of the whole book at the end. Thanks to that, I think I understood much more than I could have otherwise.
Despite the author’s diligent argumentation, I still don’t understand the ontological status of a representation. I can see that the vehicle, for example the activation of neurons, is a physical thing. But how physical is the content? If there were no one to build a link between the algorithm carried out by the organism and its behavior, would there still be content? How are two neural activations different to an observer when one of them represents something? If misrepresentation accounts for a failure to produce the behavior, what exactly went wrong, and where? If there is an apparent correlation between behavior and neuronal activity, why would we assume it is meaningful and not, as in the Chinese room argument, mere symbol manipulation without understanding? Now I think I need to read something from the deniers of representationalism. Perhaps after that I will pass the exam in my sleep.
Favorite quote:
“Content is partly a matter of explaining how a system achieves its functions—of how its internal workings implement an algorithm for performing its functions.”
April, 2025