“A poetic text is ‘semantically saturated’, condensing more ‘information’ than any other discourse; but whereas for modern communication theory in general an increase in ‘information’ leads to a decrease in ‘communication’ (since I cannot ‘take in’ all that you so intensively tell me), this is not so in poetry because of its unique kind of internal organisation. Poetry has a minimum of ‘redundancy’ – of those signs which are present in a discourse to facilitate communication rather than convey information – but still manages to produce a richer set of messages than any other form of language.”
___________________ Terry Eagleton on Yuri Lotman, in “Literary Theory”.
I have indicated elsewhere that one of the principal tasks of the mind is the identification of patterns, or regularities. I felt that it was necessary to explore this area from a scientific viewpoint, and turned to a book by Murray Gell-Mann, “The Quark and the Jaguar”. Gell-Mann was one of the foremost theoretical physicists of the last century, the key figure in the development of quantum chromodynamics, and the man who named the quark. His book is a popular exposition of his ideas on how the fundamental laws of physics give rise to the complexity we see around us, and is extremely wide-ranging, from sections on quantum mechanics to considerations of the degradation of the environment. However, my focus here is on one aspect of the book: in dealing with complexity, he gives a good overview of ideas of regularity as developed within information theory. He acknowledges a debt to the work of Charles H. Bennett, physicist and information theorist, for the concepts underpinning these sections.
I have two particular concerns here:
Firstly, with the relevance or adequacy of these ideas to our understanding of the mind as some sort of pattern identifier.
Secondly, with their possible relevance to, and clarification of, notions within poetics and literary theory of the poetic or literary work, at least the “great” or worthwhile ones, as being particularly complex, “rich”, or saturated with meaning. Indeed, such ideas are not restricted to academic theory, but are part of many people’s intuitions about the more highly-valued works. The quote at the head of this section indicates these views, but also represents an attempt to make precise such intuitions about this complexity and richness, and perhaps validate them, by putting them under the magnifying glass of complexity theory and information theory, to find out whether literary works have some sort of particular or peculiar informational richness. It is very much in the same spirit that this enquiry will be conducted.
But before I deal with these two concerns, I will give a brief synopsis of Gell-Mann’s presentation.
[Crude] Complexity – Algorithmic Information Content
The first measure of complexity which Gell-Mann considers is Algorithmic Information Content. I will represent strings of information here in binary, as that is the basic level to which information theory assumes all strings or streams of information to be reducible. If we have a bit string such as –

10101010101010101010 … 10101010 (“10” repeated 50 times, 100 bits in all)

this has low algorithmic information content, since its description can be shortened to something like –
PRINT “10” x 50.
(A purist might at this point cry “foul!”, since my shortened description is not itself in binary, but I must crack on.) The bit string has low algorithmic information content, since it follows one simple pattern.
By contrast, imagine a bit string of a hundred 1’s or 0’s which has very few, perhaps no regularities – such a string as might be generated by a hundred coin tosses recorded in sequence. Such a bit string would, in all likelihood, have high algorithmic information content, since it would be difficult to compress into a shorter description; some aspects might be compressible, but to nothing like the level of our very regular string.
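To make the contrast concrete, here is a minimal sketch in Python. True algorithmic information content is uncomputable, so I use zlib’s compressed size as a very rough, computable stand-in; the particular seed and string lengths are my own choices for illustration:

```python
import random
import zlib

random.seed(42)  # fixed seed so the "coin tosses" are reproducible

# Two 100-bit strings, represented as text:
regular = "10" * 50                                        # one simple pattern
tosses = "".join(random.choice("01") for _ in range(100))  # simulated coin tosses

# Compressed size as a crude proxy for algorithmic information content:
# the patterned string admits a much shorter description than the random one.
print("regular:", len(zlib.compress(regular.encode())), "bytes")
print("tosses: ", len(zlib.compress(tosses.encode())), "bytes")
```

The patterned string compresses to far fewer bytes than the coin-toss string, mirroring the difference in algorithmic information content described above.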
There are subtleties to the concept of randomness, which Gell-Mann discusses, but which need not detain us here. I refer readers to the actual book should they be interested.
How does Algorithmic Information Content fare as a candidate measure of the sorts of complexity in which we might be interested?
Not very well – “randomness” isn’t quite what we mean by “complexity”; as Gell-Mann points out, a longish string generated by outputs from the proverbial monkey on a typewriter would have higher algorithmic information content than a string of the same length from the works of Shakespeare, but we would surely think of the Shakespearean string as more complex. For such reasons, algorithmic information content has been dubbed a measure of “crude” complexity.
Is there a better measure of complexity than crude complexity / algorithmic information content, one which might more fruitfully capture and clarify our intuitions? It seems that there is: “effective complexity”. Effective complexity is the length of a concise description of a string’s regularities. The diagram at the start of this article, taken from Gell-Mann, indicates how effective complexity varies with crude complexity.
The concept of effective complexity is important, as it means we can be a little clearer about whether we are talking about maximization of information, or maximization of patterning. The latter is more central to our concerns with psychology and aesthetics, and is captured at least provisionally in this concept.
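A toy sketch may help fix the distinction. The “schemata” below are my own hand-written descriptions of each string’s regularities, and their lengths stand in, very loosely, for effective complexity:

```python
# Hand-written shortest-descriptions of each string's REGULARITIES only.
# Their lengths are a (very loose) proxy for effective complexity.
schemata = {
    "fully regular ('10' x 50)": "repeat '10' fifty times",
    "patterned, with some noise": "repeat '10' fifty times, but flip roughly 10% of bits",
    "pure coin tosses":           "",  # no regularities, so nothing to describe
}
for kind, schema in schemata.items():
    print(f"{kind:28} effective-complexity proxy: {len(schema)} characters")
# Effective complexity is low at BOTH extremes (total order, total randomness)
# and peaks in between -- whereas crude complexity simply rises with randomness.
```

This is the shape of the hump in Gell-Mann’s diagram: maximal patterning-to-describe lies between perfect order and pure noise.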
For the sake of completeness, I’ll mention here two other concepts – [Logical] Depth, and Crypticity, but make little of them; again, curious readers are referred to Gell-Mann’s book.
Logical Depth is the time it takes to compute from a program or schema to a full description of the system, or at least of the system’s regularities.
Crypticity is something like the reciprocal of this – the time it takes to compute from a full description to a program or schema.
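For the curious, the two directions can be caricatured with an off-the-shelf compressor. This is a loose analogy only: the real definitions involve shortest programs, not zlib, and the timings here merely mark which direction is being computed:

```python
import time
import zlib

# A long but highly regular "full description", and a short "program" for it.
full = b"10" * 50000
program = zlib.compress(full)

t0 = time.perf_counter()
assert zlib.decompress(program) == full    # program -> full description
depth_proxy = time.perf_counter() - t0     # ~ logical depth: time to run the program

t0 = time.perf_counter()
zlib.compress(full)                        # full description -> program
crypticity_proxy = time.perf_counter() - t0  # ~ crypticity: time to find the program

print(f"depth proxy: {depth_proxy:.6f}s, crypticity proxy: {crypticity_proxy:.6f}s")
```

The roundtrip shows the two computations running in opposite directions, which is all the analogy is meant to convey.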
Complex Adaptive Systems
Gell-Mann also considers what we could regard as the subjective pole to these ideas of complexity, the sorts of beings which have evolved to identify and exploit regularities within information. He terms such beings Complex Adaptive Systems, of which the most familiar are biological creatures, including ourselves, but the category also includes certain computerized systems which evolved systems such as ourselves have designed.
The identification of regularities involves their condensation into a “schema” or model, and such schemata can then be used as the basis for action. Gell-Mann also talks of compression of regularities.
Schemata are for purposes of description, prediction, and prescription. Gell-Mann is clearly an evolutionary thinker, and regards complex adaptive systems as things which are results of a honing by natural selection; in this regard, I find his triple of purposes pleasing; logically, description comes first, the use of such regularities in prediction second, and the use of such prediction for the prescription of actions to be executed in the world third.
But in evolutionary terms, the order can be reversed – it is the usefulness for survival in the “smart” actions prescribed by the identification of regularities which drives the increasing sophistication of the complex adaptive systems as pattern identifiers.
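The triple of purposes can be illustrated with a deliberately trivial toy: a “schema” of bigram counts condensed from a made-up weather stream, used first to describe, then to predict, then to prescribe. The stream, the counts-as-schema, and the action table are all my own illustrative inventions:

```python
from collections import Counter, defaultdict

# A made-up stream of observations for a toy complex adaptive system.
stream = "sun rain sun sun rain sun sun sun rain".split()

# 1. Description: condense the stream's regularities into a schema
#    (here, counts of which symbol follows which).
schema = defaultdict(Counter)
for prev, nxt in zip(stream, stream[1:]):
    schema[prev][nxt] += 1

# 2. Prediction: the most likely successor of the last observed symbol.
def predict(last):
    return schema[last].most_common(1)[0][0]

# 3. Prescription: choose an action on the basis of the prediction.
actions = {"sun": "leave the umbrella", "rain": "take the umbrella"}
forecast = predict(stream[-1])
print(actions[forecast])
```

Description logically precedes prediction, which precedes prescription, just as in Gell-Mann’s ordering; but it is the usefulness of the prescribed action that, evolutionarily, justifies the whole chain.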
However, unless I’m missing something, there seems to be a gap between the idea of compression of effective complexity and what we would more humanly think of as schemata; a merely mathematical notion of compression may be in danger of being collapsed into an already-interpreted idea of condensation of sensory flux into concepts. There is not really any sort of bridge here between a pure and rather abstract notion of a pattern spotter, and what we might regard as an Actually Existing Pattern Spotter – a mammal, intelligent bird, or whatnot – within the general concept of “pattern spotter”, outside of computerized systems, born mathematicians, and other specialists.
The concept of “schema” runs the danger of getting blurred into something like “shortest mathematical description” in a way which obscures the role of conceptual thought, whilst seeming to have covered it. This is partly because “schema” is in use within other areas of philosophy, with a broader and more psychologistic meaning.
Related to this, there is little indication in this material of any decent general heuristics for deriving effective complexity. Gell-Mann considers pattern extrapolation in a fairly abstract and mathematical way, which I think is fine, and should indeed be part of our understanding of what is meant by “pattern” and “regularity”. But Gell-Mann only gives us an abstract description of what a pattern identifier does.
None of this is to find fault with Gell-Mann, but only to indicate a possible way forward, in that this use of “schema” might not fully capture “concept”, though concepts surely are a way of condensing regularities.
As an aside, an interesting insight afforded by such an abstract and mathematical treatment is that it involves us in what I call “Gödelisation”; it is quite likely, perhaps provable, that we can never arrive at a general “best pattern identifier” – one that would spot and condense all regularities in what we would know to be the neatest way; effective complexity seems to fall prey to problems here in the same way that algorithmic complexity has been proven to. Readers may be aware of such issues from acquaintance with the work of Kurt Gödel and Alan Turing.
Effective Complexity and Literary Theory
Within literary theory, there is a school of thought which privileges foregrounding as the distinctive feature of literary texts. Foregrounding is regarded as achievable by two means – deviation, and extra patterning. I am sympathetic with the identification of these two aspects of literary and poetic works as fundamental. (I am, however, at present uneasy with their subsumption under the function of foregrounding, but my unease must await proper consideration, exploration, and justification elsewhere on this site.)
Deviation and extra patterning are in a sense opposites – deviation being a loss of regularity, and extra patterning an apparently superfluous regularity.
The considerations here give some precising of, and constraint on, the notion of extra patterning, extra regularities, or, as Geoffrey M. Leech puts it, “parallelism in the widest sense of that word.” In conclusion, I’d like to refer back to the Eagleton quote at the head of this article – the distinction between maximization of information (crude complexity) and maximization of patterning (effective complexity) explored in our enquiry could help clarify and further develop Eagleton’s (and Lotman’s) intuitions, rescuing them from surface mystification and paradox, and helping to shed further light on at least one aspect of the “unique kind of internal organisation” of poetry.
Finally, I must point out that this article has only cut a certain path through Gell-Mann’s “The Quark and the Jaguar” for my own purposes – my comments should not be taken as a review of the book as a whole. My focus has been narrow, but the book itself is panoramic, and at times the view is breathtaking.
_____________BELOW HERE UNDER CONSTRUCTION______________
One of the main ways in which maximization of patterning can occur within literature is through the exploitation of the various linguistic levels – at the phonic level, rhythm, rhyme, alliteration, assonance, etc., add regularities; at the syntactic level, parallels can be established; and so on.
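As a crude illustration of extra patterning at the phonic level, one can count repeated word-initial sounds (approximated here, very roughly, by word-initial letters) in a line of verse; the scoring scheme is my own toy, not a measure from the literature:

```python
from collections import Counter

# A famously alliterative line from Coleridge's "The Rime of the Ancient Mariner".
line = "the furrow followed free"

# Count word-initial letters; each repetition beyond the first is counted
# as one unit of "extra patterning" over what unpatterned prose would give.
initials = Counter(word[0] for word in line.lower().split())
extra = sum(n - 1 for n in initials.values() if n > 1)
print(extra)  # -> 2  (three f-initial words yield two "extra" repetitions)
```

Analogous toy counts could be made for rhyme at line-ends or for repeated syntactic frames, each level contributing its own layer of regularity.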
[Complex patterning across linguistic levels – e.g. the use of more purely linguistic patterns to establish a semantic pattern]
[Problems with the foregrounding model]
[Bennett and Gell-Mann’s other two articles.]