The concept of frames can be traced back, at least, to Kant, who believed that the mind necessarily utilizes Schemas or Schemata. His basic insight was that we understand the world through an internal framework; incoming sensory data – “raw data”, as it’s sometimes put – is processed through a system of categories. For Kant, these divide into two types: the a priori “forms” – space and time, which for Kant were respectively Euclidean and Newtonian – and the categories proper, in his terminology, such things as causality and the having of properties. All of the foregoing are what we generally think of as falling under the study of ontology, and are essential to Kant’s understanding of “synthetic” reasoning. The Schemata are the link between the forms and categories on the one hand and sensory experience on the other; the Schemata render experience intelligible.
For Kant, such schemata were trans-historical – part of the nature of human reasoning itself, and unchangeable – we cannot get outside them to see the world “as it really is”. I am not, here, particularly interested in expounding the ideas of Kant, but rather in the usefulness of this concept of frames. It seems to me that there may be such basic, unchangeable categories (though perhaps they can be altered within scientific disciplines, as has happened to Euclidean and Newtonian frameworks), but also, more changeable frameworks, of a cultural or individual psychological nature, which can alter, develop, or sometimes go awry. These more alterable frameworks might be based on the more fundamental frameworks: a sort of malleable superstructure on an adamantine foundation, the more specific grounded in the more general.
This idea of frames was picked up again, or perhaps reinvented, with the development of Artificial Intelligence in the post-war period. One of the problems the attempt to build intelligent machines encountered was that, though computers were good at applying abstract logical rules, they had no way of classifying or understanding information about the real world, no way of “recognizing” or being familiar with certain situations. A possible solution was proposed by Marvin Minsky, one of the leading lights in the field, with his “Frame System Theory”:
“A frame is a sort of skeleton, somewhat like an application form with many blanks or slots to be filled. We’ll call these blanks its terminals; we use them as connection points to which we can attach other kinds of information. For example, a frame that represents a “chair” might have some terminals to represent a seat, a back, and legs, while a frame to represent a “person” would have some terminals for a body and head and arms and legs. To represent a particular chair or person, we simply fill in the terminals of the corresponding frame with structures that represent, in more detail, particular features of the back, seat, and legs of that particular person or chair.” (Minsky, The Society of Mind, p. 245)
Particularly important is the idea of “default assignments” – a frame’s slots come pre-filled with typical, presumed values, which stand until actual data displaces them. Thus we deal with things, in a sense, as stereotypes. As Minsky notes, “Much of the phenomenological power of the theory hinges on the inclusion of expectations and other kinds of presumptions.” (Minsky, A Framework for Representing Knowledge)
Minsky also talks of super-frames and sub-frames, more general frames which would perhaps embrace more specific frames.
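Minsky’s proposal can be put in a minimal sketch. Everything below – the class, the slot names, the default values – is my own hypothetical illustration, not Minsky’s notation; it shows terminals, default assignments, and a super-frame from which more specific frames inherit their defaults:

```python
# A minimal, hypothetical sketch of a Minsky-style frame: named terminals
# (slots), stereotyped default assignments, and a super-frame whose
# defaults are inherited by more specific frames.

class Frame:
    def __init__(self, name, parent=None, defaults=None):
        self.name = name
        self.parent = parent              # super-frame (more general frame)
        self.defaults = defaults or {}    # default assignments (stereotypes)
        self.terminals = {}               # slots filled in by actual data

    def fill(self, terminal, value):
        self.terminals[terminal] = value

    def get(self, terminal):
        # Actual data first, then this frame's defaults, then super-frames.
        if terminal in self.terminals:
            return self.terminals[terminal]
        if terminal in self.defaults:
            return self.defaults[terminal]
        if self.parent is not None:
            return self.parent.get(terminal)
        return None

furniture = Frame("furniture", defaults={"legs": 4})
chair = Frame("chair", parent=furniture, defaults={"back": "upright"})

my_chair = Frame("my chair", parent=chair)
my_chair.fill("seat", "leather")

print(my_chair.get("seat"))  # "leather" - filled in from actual data
print(my_chair.get("back"))  # "upright" - default from the chair frame
print(my_chair.get("legs"))  # 4 - default inherited from the super-frame
```

The point of the sketch is the lookup order: data overrides defaults, and defaults of the specific frame override those of the more general one – the “stereotype” stands only until something displaces it.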
A similar idea, perhaps more temporally orientated, is the idea of a “script” – a kind of template of typical things we might expect, and typical actions or responses, within a delineated field (for example, a restaurant). Schank and Abelson developed this approach. These ideas, though useful, hardly solved all the problems in the field of A.I., but that need not concern us here. Similar ideas to those of frame, framework, schema and script are Koestler’s idea of matrix and Kuhn’s much used (and abused) idea of paradigms.
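The “script” idea can be sketched in the same spirit. The restaurant scenes below are an arbitrary illustration of my own, not taken from Schank and Abelson’s work; the sketch shows the temporal aspect – a script is an ordered template against which observed events are checked:

```python
# A toy, hypothetical Schank-and-Abelson-style "script": an ordered
# template of the events we expect within a delineated field. The
# restaurant scenes here are illustrative, not from Schank and Abelson.

RESTAURANT_SCRIPT = ["enter", "be seated", "order", "eat", "pay", "leave"]

def matches_script(events, script=RESTAURANT_SCRIPT):
    """True if the observed events fit the script: each event has a place
    in the template, and none occurs out of the expected order (events
    may be skipped)."""
    position = 0
    for event in events:
        if event not in script:
            return False      # an event the script has no place for
        index = script.index(event)
        if index < position:
            return False      # out of the expected temporal order
        position = index
    return True

print(matches_script(["enter", "order", "eat", "pay"]))  # True
print(matches_script(["eat", "enter"]))                  # False
```

Deviation from the template – eating before entering, or an event with no slot at all – is exactly what the script lets us notice.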
We have, so far, a sort of duality – of Frame and of what I will call Data.
I’ve indicated that the sort of frames in which I am interested are the more flexible ones, those subject to alteration. (It seems that it was the flexibility of human frames which posed one of the difficulties A.I. then encountered.) Such alteration could be refinement, modification, collapse, synthesis or tension with another frame, extension, over-extension or increasing rigidity (as in some forms of obsessive behaviour), and perhaps could take many other forms.
Now, the question is, what causes the alteration, or perhaps, what determines the alteration? Possibly, other frames. Or perhaps the incoming data alters the frames? I’m going to leave that question hanging for a while, but return to it later.
Neurons and Brains
I’ve been exploring the idea of frames because it seems that it gives us insight into how the human mind can do the sort of things that it does – thinking, understanding, being conscious, and so on. I’d like now to take a step back, and consider some aspects of what we generally accept to be the material basis for the mind, which is the brain.
The mind has long puzzled philosophers. One modern school of thought, the “mysterians”, believes that an understanding of consciousness within a materialist framework will always elude us, and, as part of this belief, holds that advances in the understanding of the brain will be of little use for understanding consciousness. Colin McGinn, a philosopher of this tendency, says –
“How could the aggregation of millions of individually insentient neurons generate subjective awareness? We know that brains are the de facto causal basis of consciousness, but we have, it seems, no understanding whatever of how this can be so. It strikes us as miraculous, eerie, even faintly comic.”
A philosopher of the opposed school, [the source, alas, eludes me for now] says that such aggregations of neurons are exactly the sort of thing which could underpin the mind. This opposed school of thought insists that understanding of the brain will go a long way, perhaps all the way, to helping our understanding of the mind. My own sympathies are with this latter school, though I don’t think, as yet, we have anything like an adequate understanding.
What is it about all these squiggly wiggly neurons that makes them good candidates? First, a fairly basic observation – the brain is the destination for the incoming neuronal bundles, from the senses, and the source of the outgoing bundles, motor neurons, which cause our actions and behaviour. Damage to incoming or outgoing bundles can affect our capabilities, and damage to the brain can too. Crudely, the brain is “in the middle”, so may well be the core component. Even dualists, those who believe the mind is separate from the brain, tend to put the mind/body interface in the region of the brain.
But there are deeper reasons. In the quote above, the word “aggregate” seems to me to be significantly wrong; the neurons are interconnected in vastly complex ways – they are parts within a whole, the whole having a structure. Now, I’ve used here the two words “interconnected” and “complex”, but the development of Complexity Theory indicated that these two concepts are not merely related externally – there’s something about interconnection which is part of the nature of complexity.
I’ve talked about interconnection, but interconnection is a fairly static concept – if we want a picture of it, we imagine the neurons as vastly entangled. The dynamism comes from what the neurons do: a neuron fires when the incoming signals from other neurons reach a threshold, sending an impulse down the cell body and possibly firing other neurons in turn. This signal, to my knowledge, always travels one way. Another important feature is that the more one neuron fires up another, the more readily it will do so in future; the insight from Donald Hebb’s research in this area is that “Neurons that fire together, wire together.” (I’m making a massive generalization and simplification here, which doesn’t always hold, but the simplification nevertheless gives us a way forward.) We have, with this finding, a potential source of flexibility. Neurons are themselves quite complex, and their behaviour includes all sorts of subtleties, but the important point for me is the idea of direction, because at the level of complexes of neurons we find that signals don’t merely pass from the senses to the motor neurons in a simple “handing on the baton” kind of way, but can loop back: neuron A might connect to neuron B, neuron B to neuron C, and neuron C back to neuron A. Add to this image that other neurons are also feeding into and out of neurons A, B, and C; that we have billions of neurons; and that the extent of firing can alter the strength of the connection between one neuron and another, so that process can modify structure – and we have a few ideas in play which give us a glimpse of a very complex and malleable system.
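The loop just described – threshold firing, one-way signals, Hebbian strengthening – can be put in a toy sketch. All the numbers below (three neurons, the threshold, the learning rate) are arbitrary illustrations of my own, not a model of real neurons:

```python
# A toy sketch of the circuit described above: three threshold neurons
# wired one-way in a loop (A -> B -> C -> A), with a crude Hebbian rule
# that strengthens a connection whenever the sender helps the receiver
# fire. All values are arbitrary illustrations, not a neural model.

THRESHOLD = 1.0
LEARNING_RATE = 0.1

# One-way connections and their strengths (weights).
weights = {("A", "B"): 1.0, ("B", "C"): 1.0, ("C", "A"): 1.0}

def step(active, external_input):
    """Advance the network one time step, then apply the Hebbian update."""
    incoming = dict(external_input)        # signals arriving from outside
    for (src, dst), w in weights.items():
        if src in active:
            incoming[dst] = incoming.get(dst, 0.0) + w
    fired = {n for n, total in incoming.items() if total >= THRESHOLD}
    # "Neurons that fire together, wire together": strengthen the link
    # from each active sender to each receiver that fired.
    for (src, dst) in weights:
        if src in active and dst in fired:
            weights[(src, dst)] += LEARNING_RATE
    return fired

# Kick neuron A with an external signal, then let activity circulate.
active = step(set(), {"A": 1.0})
for _ in range(6):
    active = step(active, {})

print(weights)  # every connection used by the loop has been strengthened
```

The activity chases itself around the loop, and each pass leaves the connections it used a little stronger – process modifying structure, in miniature.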
Reentrance and Feedback
The foregoing sketch will probably remind many people of the idea of “feedback”. Feedback is a very important concept within Cybernetics – the study of control and communication in animals and machines – and within the systems disciplines: General System Theory, Operational Research, and Management Theory. It is key to understanding how complex systems maintain a steady or optimal state within a changing environment. It deals with the sort of circular causation outlined above, but might best be regarded as a special case of a wider phenomenon. A thinker who has influenced my ideas here, Gerald Edelman, talks of “reentrance” with reference to neural assemblies, which he is at pains to distinguish from “feedback”. Unfortunately, Edelman is not the clearest of writers, and it remains moot whether what he describes is a form of feedback.
Most important for me is that feedback alone doesn’t seem to fully account for a system that can alter its actual structure – not its fundamental structure, certainly not its biochemical nature, nor many levels up – but its mid-level structures, in a way that isn’t captured by the idea of a mere change of state – whether of a thermostat or of a much more complex and multi-leveled feedback device.
Squaring the Circle: Framing the Cycles
In the earlier sections of this chapter, I wrote of the duality of frame and data, and left a question hanging – What causes the alteration of frames? In the later sections, I wrote about neurons and such stuff. I would now like to try to pull these different ideas together and attempt an answer to the question.
In a sense, the only thing that can alter a frame is data – we can imagine some lack of fit between the data and the frame provoking an adjustment. Yet this seems to ask too much of the data: to require it to speak for itself, to interpret itself, as if it could protest at the imposition of an ill-fitting frame – and that seems wrong.
It certainly seems that it is the incoming data which can be the only real source of change in the neural networks; left to themselves, we would expect the patterns of activation of the network to settle into a stable state or an endlessly repeating cycle, the physical equivalent of a solipsistic, self-contained and unchanging world-view (this is to treat the matter abstractly – I don’t know what the physiological results of such a nightmare situation are).
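The point about a network left to itself can be illustrated abstractly. A deterministic network with finitely many states, receiving no input, must eventually revisit a state, after which it repeats itself forever. The update rule below is an arbitrary invention of my own (each unit fires if exactly one of its two neighbours fired on the previous step); only the inevitability of the repetition matters:

```python
# A sketch of the point above: a deterministic network with no external
# input must eventually revisit a state, after which its activity repeats
# forever - a stable state or an endlessly repeating cycle. The update
# rule is an arbitrary illustration: a unit fires if exactly one of its
# two neighbours (in a ring) fired on the previous step.

def update(state):
    n = len(state)
    return tuple(int(state[(i - 1) % n] + state[(i + 1) % n] == 1)
                 for i in range(n))

state = (1, 0, 0, 1, 0, 0)   # an arbitrary starting pattern of activation
seen = {}                     # state -> the time step at which it occurred
t = 0
while state not in seen:      # finitely many states, so this must end
    seen[state] = t
    state = update(state)
    t += 1

print(f"the state at step {t} repeats the state at step {seen[state]}")
```

Because there are only finitely many activation patterns, the loop is guaranteed to terminate: the closed network falls into its cycle, the physical analogue of the self-contained, unchanging world-view.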
I think the best way of understanding how change comes about is to think of the input, the downstream-back-to-upstream circularity, and synapse strengthening all at once; these things in combination are responsible for the mutability of frames. They are core to an understanding of human creativity.
But even at this abstract level, the idea of a frame is sitting uneasily with the idea of a circular network; there is a fundamental difference between the hypothesized frames and the hypothesized networks: frames seem like a phenomenological conjecture, whereas networks are purely physical in nature; frames seem already semiotically interpreted – slots for properties, and so on, whereas networks seem pre-semiotic. In a way, that is okay, as we can see the frames as emergent from the more physicalistic networks, but we still feel the need for some middle steps to make things clearer.
I don’t have an answer to this, a way of bridging the gap between the two models, even though I think the two models both stand a reasonable chance of being valuable to our understanding of the mind/brain. The most I can hope for is that my way of presenting the problem might be useful to its solution. I will conclude with a hopefully suggestive observation on another difference between the models:
The “frame” model seems mainly to be in a dimension “head on” to incoming data – we imagine it as a record, a form, with each of its fields getting populated as the mind shifts attention.
The model of circular neural networks is orthogonal to that – we represent it with inputs to the left, spiraling forms of transformation in the middle, and outputs to the right.
There are possible link-ups though – wider and more all-embracing spirals might be the set frame, and narrower inputs the data. Physiologically, there is no absolute division between structure and state changes at the levels which interest us.
NOTE – I have recently (25/04/2015), since writing this article, come across a passage in John H. Holland’s “Complexity: A Very Short Introduction”, which, if I’m interpreting it correctly, gives some back-up to my points above –
“The combination of high fanout and hierarchical organization results in complex networks that include large numbers of sequences that form loops. Loops recirculate signals and resources … Loops also offer control through positive and negative feedback (as in a thermostat). Equally important, loops make possible program-like ‘subroutines’ that are partially autonomous in the sense that their activity is only modulated by surrounding activity rather than being completely controlled by it.”