The Figures and the Ground


[Under Construction]

Two Cheers for the Trivium

Rhetoric was, in the Middle Ages, partly as a result of the scholastic enterprise of putting in order the best ideas of antiquity, regarded as one of the seven liberal arts. The seven liberal arts divided into two groups – the trivium, made up of three disciplines: logic, grammar, and rhetoric; and the quadrivium, made up of four disciplines which we would now, very roughly, assimilate to the sciences. Much of rhetoric was concerned with the classification of figures of speech.

What I would like to do here is indicate how the three core terms of my Aesthetics, namely Maximization, Defamiliarization and Bisociation, lock on to the various figures of speech. The three core terms should already have been justified upwards, as features of mind and indeed of complexity, elsewhere; what remains to be explored here is how those three relate downwards to the classification of a good many of the figures.

I must make clear that I do not regard the figures as the be-all and end-all of literary art, as an early Russian Formalist text maintains [sum of its devices?]. Rather, the figures are a good initial testing ground, since many have already been very well defined, in contrast to other rather vague literary terms.

A glance at An Outline of the Rhetoric Database in Rhetoric will establish that, at least for the core and most systematizable chunk of it, we can discern different operations – deletion, addition, distortion, and repetition / parallelism – being brought to bear upon different levels – syllables, words, etc.

[distortion and substitution?]

defamiliarization and figures of

ambiguity and bisociation

deviation and extra patterning, sometimes considered as the two forms of foregrounding

schemes and tropes

figures of speech and figures of thought


 

Posted in poetry

Bridges

5

“Perhaps it would not satisfy completely, and that is what the esteemed author would have for all the diligence employed, whereas with a promise he could easily benefit himself and others even more than if he had written a prodigy of a system.”

“When the word “mediation” is merely mentioned, everything becomes so magnificent and grandiose that I do not feel well but am oppressed and chafed. Have compassion on me in only this one respect; exempt me from mediation …”

Kierkegaard – Prefaces

Here, I wish to make certain connections between the various articles on this site. For these connections, I take the metaphor of bridges. Some of these bridges should be straight freeways, undergirded with iron, but many will seem for now like rickety, precarious swinging walkways, with shaky handrails of bamboo and old rope. Above, I quote Kierkegaard satirizing systematic philosophers who, in the wake of Hegel, give promissory notes to the effect that they will soon deliver “The System” – for him, the note is, for its merciful brevity, preferable to the full-blown system. Kierkegaard, like Nietzsche, distrusted such systematizers, feeling that their systematization is inevitably also a falsification, forcing reality onto the mythical Procrustean bed.
My “bridges” could be the equivalent of Kierkegaard’s promissory notes for, or perhaps in lieu of, “The System” – I have the system-builder’s ambition, but alas, maybe not the stamina. So rather than here unveiling a pristine, crystalline architectonic, I will attempt to indicate some sort of unity by way of noting linkages between concepts across all the different articles. Without at least a glance at the other articles, what I say here will seem at best gnomic; this is not intended as an introduction, so do not start here.

NOTE: I have considered the use of hyperlinks, but for now don’t want to insert them for stylistic reasons.


Highlands and Lowlands

Most of my thought is underpinned by a generally “materialist” outlook, with which I was once preoccupied for a long while, but which is hardly directly argued for in these writings – the nearest would be some remarks in “The Mind Ouroboros”. However, I hope that a materialist / naturalist / physicalist / realist sensibility is easily detected in general.

Materialism is usually taken as a position within metaphysics, or ontology, and realism as a position within epistemology. Materialism, to put it in the simplest way possible, is the belief that matter is more fundamental than mind – that mind is a form of, an arrangement of, matter, which sometimes arises. Realism, again to put it in the simplest way possible, is the belief that there is a real world, which exists independently of our thoughts, and that we can and do know something of it.

The materialist orientation is hardly unusual or remarkable in philosophy, but at some point my materialism took a novel inflection into an awareness and recognition of the importance of relations, patterns, and regularities, and this metaphysical notion is for the moment best expressed in the post “Relationalism”.

Parallel to this inflection of materialism into relationalism is an inflection of my realism by an acknowledgement of the importance of schemata – a Kantian element. This is elucidated at the start of “The Mind Ouroboros” in the first section, “Frames”.


Relationalism is a very speculative and rudimentary attempt to expound a metaphysics based primarily on relations using graph-theoretical notions, and this misty highland connects with the lower lands of pattern and regularity, both very much relationalist concepts, and maximization. One need not commit to the extreme version of relationalism of the article, but merely be prepared to give relationships their ontological due. Certainly, Patterns does not oblige one to my idea of Relationalism, since patterns can be across objects, properties, and other ontological categories.

With pattern and regularity, we must group maximization. Maximization is a core concept within my aesthetics, but transcends this – in its subjective aspect, it is a vital task of mind to identify patterns. In its objective aspect, it is a feature of complexity, indeed in algorithmic information theory a definition of complexity.

Relationalism, pattern, regularity and maximization, and schemata and frames, all fit very well with the generally formalist tendency at the core of the article Aesthetics.

Complexity deals with cyclicity, a general feature of life and metabolism, and The Mind Ouroboros with Edelman’s concept of re-entrance, the psychological form of the same. A unity of life and mind is discernible.

Between Ontology and Schemata, it is difficult to decide which constitutes the highland and which the lowland. In a sense, the two are roughly equivalent. For Kant, the Categories (the top-level ontology) are more basic than the schemata, which are something like a temporalisation of the Categories in their grasping of sense-data.

A top-level ontology is an attempt to specify the way frames or schemata usually fall in terms of their most abstract categories, which would make top-level ontology the highlands of ontology and schemata. However, top-level ontology is a species of ontology, and thus at a lower level: a species of ontology in general!

Another consideration is that Ontology is often regarded as a fundamental division of philosophy, and so a highland. Really, here, I regard the two concepts (Ontology and Schemata) as intimately related, and it is a matter of the specific enquiry which is the higher.

A related question might be why the relation of Ontology to Metaphysics is not more clearly elucidated. My main excuse is that my Kantian tack leads me to consider Ontology primarily under the umbrella of Epistemology. In another sense of the term Ontology, it is, like my metaphysics, materialist – in objective terms, if we are considering “matter” and “mind” as ontological categories, I, like all materialists worthy of the name, regard the former as primary.

Similarly, Complexity should be taken as falling under Materialism, perhaps with “emergence” and “levels” as stops on the bridge, and then up to the highlands, perhaps by way of Relationalism.


Bridges of Duality – The Importance of Trade

“Duality” is in many ways the acceptable face of dualism: it is a mastered opposition, often, once grasped, expressed with the metaphor “two sides of the same coin”. To continue in an economic mode of metaphor, a duality is two islands linked by the bridge of commerce – each side of a duality needs the other.

The most significant duality is that between patterns and schemata. Without some purchase on the identification of patterns or regularities, schemata and ontology would be merely an unmotivated classification exercise or procedure. The whole raison d’etre for a schema or an ontology is as an aid to identifying patterns, primarily at and for our level and manner of existence.

Posted in philosophy

Complexity


Could “Complexity Theory” be an oxymoron? Melanie Mitchell in her book “Complexity: A Guided Tour” talks of “the sciences of complexity”, and this might indicate a lack of integration in the field. Indeed, John Bragin, in a review of the book for the Journal of Artificial Societies and Social Simulation, notes the lack of broad agreement on necessary and sufficient fundamentals within the field, shown by great variability in the course materials for its study at different educational institutions, and the absence of widely accepted and recognised textbooks. Perhaps complexity is just complicatedness, and general theories will forever elude us – complexity might inhabit the interstices of various theories, shot through so completely with contingency and local uniqueness as to evade generalization into any sort of global paradigm. This reminds us of the saying that the Devil (or God, depending on one’s theology) is in the details.

An interesting turning-around of complexity is made by Cohen and Stewart in their book “The Collapse of Chaos” – they indicate that one of the tasks of complexity theory is to explain high-level simplicities, which make the world to some extent navigable for creatures like ourselves; in many ways we do not experience an overwhelming explosion of complexity; they coin the term “simplexity” to indicate this aspect of reality.

Darwinian evolution, the theory of natural selection, seems to be a well-established and relatively simple, at least in its basic outlines, kind of complexity theory, and is often considered in the literature of complexity theory. (For now, here and elsewhere, I take Darwinism, the theory of natural selection, and its concomitants as given and assumed, rather than something I need argue for or about as such.)

[bridging laws]

Why isn’t there just fundamental physics? As Per Bak asks in his ground-breaking book “How Nature Works” –

“How can the universe start with a few types of elementary particles at the big bang, and end up with life, history, economics, and literature? The question is screaming out to be answered but it is seldom even asked. Why did the big bang not form a simple gas of particles, or condense into one big crystal?”

I’ve already used the term “level”. The idea of levels is often invoked to explain higher orders of complexity, and here the related concepts of emergence and hierarchy are relevant. Levels are a fascinating aspect of reality, but should not be taken to dispel all mystery. Rather, I think levels are part of what is to be explained, and not a thorough explanation. We must always bear in mind that “levels” is very much a metaphor. Often, levels seem bound up with grain and resolution, micro- and macro-, fundamental physics often dealing with the very small, chemistry with full atoms and molecules, biology with biochemistry and larger entities, and so on. However, this is not always the case; for example, the astrophysics of gravity deals with some vast objects.

We often think of there as being a kind of hierarchy of sciences, which would be something like – physics, chemistry, biology, psychology, sociology, to put it in a rudimentary form. I’ve appropriated this diagram from the web to illustrate the idea, but it’s probably familiar –

[Diagram: the hierarchy of the sciences]

Each higher science is more limited in its applicability to reality: physics applies across the universe, but biology only to restricted situations. This interpretation of the narrowing towards the pinnacle of the triangle – a restriction of scope – may be more proper than its suggestion of superiority and “the higher the fewer”. Personally, I would not count Mathematics as a science, nor put the Arts at the pinnacle (the understanding of the arts, aesthetics, maybe).

This article will consider what we might call “Actually Existing Complexity”, complexity as it arises in the physical world. My article Maximization pays more attention to complexity in its mathematical, informational, algorithmic form, as something measurable.

The field of complexity could clearly be quite vast. It is difficult to follow my own path whilst still accurately showing the field, especially as the field is not settled, so I will try to indicate, as I go along, that wider field. My own path here will be to explore some fundamental ideas rooted in the thermodynamics of non-equilibrium systems as pioneered by Ilya Prigogine, and then attempt to unify these ideas with a consideration of complexity as involving some sort of circularity, utilizing ideas from Wiener, Kauffman, Edelman, and Maturana and Varela. The two movements are thus –

1 Thermodynamics – Prigogine

2 Cybernetics – Wiener

I’m hoping to move from Prigogine’s ideas of the thermodynamics of non-equilibrium open systems, via the idea of imbalance, to the idea of something separating off and forming a boundary. I’m then going to try to drive forward the idea of boundary, and circular processes within the boundary, in tandem, and I hope they can be seen as two sides of the same coin.

_____________BELOW HERE UNDER CONSTRUCTION______________

The Prehistory of Complexity Theory

1. General System Theory – founded by Ludwig von Bertalanffy
2. Cybernetics – founded by Norbert Wiener

Complexity Theory could be regarded as the modern equivalent of the search for the philosopher’s stone, and has its precursors in systems theory and cybernetics (we might also add here dialectics, which I deal with in an independent article, and holism, gestalt, …, perhaps even going back to the hermetic and alchemical traditions …)

Both General System Theory and Cybernetics took as imperative the identification of similar patterns (Bertalanffy talks of “isomorphic laws”) which occur within different specialized sciences. It is here that we encounter an idea which cuts vertically through our idea of levels: similar laws may be identified at different levels within our hierarchy. This indicates a deep integrity to the levels, a similarity between them, with “systems” as the potentially unifying concept.

Posted in philosophy

On Russian Formalism


“We do not see the walls of our rooms” – Victor Shklovsky

Russian Formalism began in the immediately pre-revolutionary period in Russia, developed through the revolutionary and post-revolutionary periods, receiving some negative criticism from within the new communist regime, most notably from Leon Trotsky, and was suppressed as Russia descended into the Stalinist night. It is in many ways at the inception of modern literary theory, fathering early Structuralism by way of Prague, though in the West its influence was largely posthumous and belated, as if it had been time-warped from 1920s Russia to 1960s Western Europe.

Russian Formalism was not very tightly unified as a school, but its general orientation was to overcome the sort of criticism and reflection on literature which preceded it, and which, in other places and at all times, even now, remains a pole of attraction – a muddling of specifically literary concerns with biographical, even gossipy, details of an author’s life, psychological conjectures, over-emphasis on contemporary social events and currents, philosophical musings, and so on. All this, the Formalists felt, condemned literary theory to an unscientific, cosy dilettantism, and, though the sorts of concerns just indicated may have their place as subsidiary enquiries, they were obscuring our view of the specifically literary. In contrast to this, many of the Formalists saw their project as being to put literary studies on a scientific footing.

One can detect even from this rudimentary outline a tendency to emphasize the autonomy, whether relative or absolute, of literature, and to split it off from its embeddedness in wider society; as one might expect, the communist regime did not look too kindly on it, being guided by a philosophy which is in many ways quite the opposite.

“Formalist” was, at least initially, not a term of their own choosing, but more a term of disparagement from their opponents, such as we find in the phrase “merely formal” – their own view of themselves is, perhaps, better indicated by the term “specifiers”: they were trying to analyse what was specific to literature that made it literature. As they developed their views, they started to define their object not as literature but as literariness – literary texts may have a multiplicity of features, but it was the literary features which were of central concern to literary theory.

The Formalists had two main geographical centres – St. Petersburg was home to the Society for the Study of Poetic Language, (acronymed in Russian as Opojaz), and Moscow to the Moscow Linguistic Circle. The key figure in the St. Petersburg society was Victor Shklovsky, and the leader of the Moscow circle Roman Jakobson.

Shklovsky maintained that “art exists that one may recover the sensation of life; it exists to make one feel things, to make the stone stony”, and that this was accomplished by a certain technique – “The technique of art is to make objects ‘unfamiliar,’ to make forms difficult, to increase the difficulty and length of perception because the process of perception is an aesthetic end in itself and must be prolonged.”

The central notion here is usually named Defamiliarization, or Estrangement, from the Russian Ostranenie.  Closely related terms are Alienation (taken up by Bertolt Brecht), De-automatization, Deformation and Deviation.

Shklovsky believed that in ordinary life we tend to fall prey to a tendency to “recognize” rather than really “see” things – our perceptions become routine, habitual, and automatized – “We no more feel the world in which we live than we feel the clothes we wear.” and “After we see an object several times, we begin to recognize it. The object is in front of us and we know about it, but we do not see it – hence, we cannot say anything significant about it.” However, “Art removes objects from the automatism of perception …”

The main way this is done is through the peculiar form language takes in literary works – “The language of poetry is, then, a difficult, roughened, impeded language.” “The poet brings about a semantic dislocation, he snatches the concept out of the sequence in which it is usually found and transfers it with the aid of the word (the trope) to another meaning-sequence. And now we have a sense of novelty at finding the object in a fresh sequence.”

In some ways, Shklovsky seems to be flying in the face of a lot of our intuitions about art – for instance, that poetic language is the most direct and immediate form of language. Yet, if we pick up on his use of the word “trope” here, we can begin to make some sort of sense of what he is getting at. “Trope” is originally Greek, meaning a turn, an alteration, or a change, and is roughly equivalent to “figure of speech” or “rhetorical device”. The key idea is that in using language in altered ways, our perception of the world is changed and freshened.

I mentioned earlier that in attempting to found literary theory as an autonomous discipline, the Formalists frowned upon psychological conjectures – they had as their targets those who had too great a concern with the mindset or attitudes of a writer, and those who regarded literature as being in some special way about the mind, as telling us about the mind. However, Shklovsky’s thought clearly has a psychological dimension in a different sense – we are dealing with perception, caught within the polarity of its automatization and its defamiliarization.

In developing the concept of defamiliarization, the early Russian Formalist analysis bifurcated – some saw defamiliarization as related to general perception and not exclusively linguistic (Shklovsky tends in this direction) whilst others saw defamiliarization as essentially linguistic. That which is defamiliarized could thus either be out in the big wide world, or be constructions within language itself.

_____________BELOW HERE UNDER CONSTRUCTION______________

The Formalists who were more inclined to generalize features of literature outside literature itself noted the similarity between literature and other arts, and, whilst this seems to pull poetics away from the purely linguistic, the concept of semiotics, a science of signs which would include linguistics as a subsector, would afford some room for manoeuvre even for specifiers: literature would be a species within two genera – language, and art – both of which could be understood within semiotics. Whether the understanding of pure music or pure abstract art can be largely assimilated within a semiotic paradigm, orientated as it is to the concept of the sign, remains a puzzle.

Shklovsky pays great attention to Tolstoy’s “Kholstomer: The Story of a Horse”, where the observation of human behaviour and values from the perspective of a horse serves to defamiliarize and subvert our habitual outlook. Though the story depends on language in the most obvious way, its main impact is not achieved by unusual use of language, but rather at the semantic level. Although this example is one from prose fiction, it is not too difficult to find similar examples within poetry.

Posted in poetry

Structuralist Poetics and the New Criticism

Re-reading Jonathan Culler’s seminal Structuralist Poetics last summer, I was pleasantly surprised to note that in the chapter on Poetics of the Lyric (the chapter closest to the focus of my own concerns), Culler seemed to indicate that after the Structuralist groundwork, our theories could make some use of New Critical ideas of the content of literary works.

My surprise was a result of a conditioning which dates way back – when I first studied literary theory in the mid-1980s at Leeds, the New Critics were the recently-overthrown consensus – the status quo ante – and the still somewhat new-fangled approaches of Structuralism, Post-structuralism and Marxism, then in ascendancy, were often set in contrast to the old school. New Criticism was old hat, and often portrayed as intrinsically reactionary and conservative, particularly for its idea of the literary text as showing integration and reconciliation.

My surprise was pleasant, since I’ve felt for a while that this “revolutionary” rejection of the New Critics threw some precious babies out with the bathwater. This is ironic, in that the Young Turks of Structuralism and Marxism in many ways had a philosophical view of the world, or at least the human world, as oppositional, in contradiction, in tension, and dialectical. To me, irony, paradox, ambiguity and other terms of the New Critics are not a million miles away from the framework of their erstwhile opponents.

My own inclinations as a theorist are towards the formalist pole, but clearly sheer form, without what I provisionally call “human concern”, would, if scrupulously adhered to, give us fairly arid works of art, such as would only delight a thoroughgoing technician. Only within music, I think, do we find entirely successful and purely formal artworks.

Yet a complete separation of form and humanly-interesting content, as if they were two different dimensions, seems to fall short of what we would expect of an adequate aesthetics. It is here that I find the direction of Culler’s thought suggestive in indicating a bridge between the two.

Culler begins the chapter “Poetics of the Lyric” by arguing, with apt examples, that to read a poem as a poem is at least partly a matter of conventions and expectations which are in many ways external to any intrinsic features of the “poem”. Neither linguistic deviation nor formal patterns, both often considered as the two generic forms of such intrinsic features, will suffice to clarify this matter. I make much, elsewhere, of just these two forms, but here, we are stepping back to a point which analytically precedes that formalist stage.

(Note – Because I capitalize “New Criticism” and “New Critics” throughout (to identify it as a school), I usually capitalize “Structuralism” and “Structuralist”, for consistency.)

Distance and Deixis

[indexical, demonstrative, anaphora]

We read a poem with a kind of distance, taking it out of any usual circuit of communication, and taking it impersonally. Again, this is an expectation brought to the poem. This expectation alters the effects of deictics or shifters:

“for our purposes the most interesting are first and second person pronouns (whose meaning in ordinary discourse is ‘the speaker’ and ‘the person addressed’), anaphoric articles and demonstratives which refer to an external context rather than to other elements in the discourse, adverbials of place and time whose reference depends on the situation of utterance (here, there, now, yesterday) and verb tenses, especially the non-timeless present.”

“we recognize from the outset that such deictics are not determined by an actual situation of utterance but operate at a certain distance from it.” p. 193

Culler regards these conventions of reading as operating to fulfil the demands of coherence and of thematic function.

Totality / Unity / Coherence

With his consideration of the second fundamental convention of the lyric, the expectation of totality or coherence, Culler moves closer to concerns which were also those of the New Critics. Near synonyms are unity, (organic) wholeness, harmony, and symmetry. Again, Culler emphasizes that this is a convention of reading, as much as a property of the poem.

“even if we deny the need for a poem to be a harmonious totality we make use of the notion in reading. Understanding is necessarily a teleological process and a sense of totality is the end which governs its progress.” p. 200

Culler concludes his consideration of totality by noting that its literary manifestation is a version of ideas explored in gestalt psychology, and lists six models of unity –

binary opposition

dialectical resolution of a binary opposition

displacement of an opposition by a third term

four-term homology

series united by a common denominator

series with a transcendent or summarizing final term

Provisionally, I note that the first three form a group based on opposition, and the last two are based on difference (and similarity) rather than opposition.

“Four-term homology” is explored by Culler more thoroughly elsewhere in Structuralist Poetics. It is the pattern that a is to b as c is to d. This is generally regarded as a parallelism indicated or sought between two pairs of oppositions. It is not clear to me that a and b need be opposites, but again, we encounter the pervasiveness of opposition within human thought. Four-term homology seems to be very closely related to analogy and metaphor, though perhaps of the type whereby the network of some of the concepts in the poem indicates an anatomization of each of the source and target terms. I would regard it as somewhere between opposition and similarity / difference, perhaps a fusion of the two.

Significance

Regarding significance, once again Culler treats this as a matter of the conventions we bring to a poem as much as a feature of the poem.

Posted in poetry

Maximization

[Diagram: Effective Complexity]

__________________________________________________________

“A poetic text is ‘semantically saturated’, condensing more ‘information’ than any other discourse; but whereas for modern communication theory in general an increase in ‘information’ leads to a decrease in ‘communication’ (since I cannot ‘take in’ all that you so intensively tell me), this is not so in poetry because of its unique kind of internal organisation. Poetry has a minimum of ‘redundancy’ – of those signs which are present in a discourse to facilitate communication rather than convey information – but still manages to produce a richer set of messages than any other form of language.”

– Terry Eagleton on Yuri Lotman, in “Literary Theory”.

I have indicated elsewhere that one of the principal tasks of the mind is the identification of patterns, or regularities. I felt that it was necessary to explore this area from a scientific viewpoint, and had recourse to a book by Murray Gell-Mann, “The Quark and the Jaguar”. Gell-Mann is one of the foremost theoretical physicists of the last century, the key figure in the development of quantum chromodynamics, and the man who named the quark. His book is a popular exposition of his ideas on how the fundamental laws of physics give rise to the complexity we see around us, and is extremely wide-ranging, from sections on quantum mechanics to considerations on the degradation of the environment. However, my focus here is on one aspect of the book: in dealing with complexity, he gives a good overview of ideas of regularity as developed within information theory. He acknowledges a debt to the work of Charles H. Bennett, physicist and information theorist, for the concepts underpinning these sections.

I have two particular concerns here:

Firstly, with the relevance or adequacy of these ideas to our understanding of the mind as some sort of pattern identifier.

Secondly, with their possible relevance to, and clarification of, notions within poetics and literary theory of the poetic or literary work, at least the “great” or worthwhile ones, as being particularly complex, “rich”, or saturated with meaning. Indeed, such ideas are not restricted to academic theory, but are part of many people’s intuitions about the more highly-valued works. The quote at the head of this section indicates these views, but also represents an attempt to make precise such intuitions about this complexity and richness, and perhaps validate them, by putting them under the magnifying glass of complexity theory and information theory, to find out whether literary works have some sort of particular or peculiar informational richness. It is very much in the same spirit that this enquiry will be conducted.

But before I deal with these two concerns, I will give a brief synopsis of Gell-Mann’s presentation.

[Crude] Complexity – Algorithmic Information Content

The first measure of complexity which Gell-Mann considers is Algorithmic Information Content. I will represent strings of information here in binary, as that is the basic level to which all strings or streams of information are assumed by information theory to be reducible. An example of a binary bit string would be –

10010111010100010101111.

If we have a bit string such as –

1010101010101010101010101010101010101010101010101010101010101010101010101010101010101010101010101010

this has low algorithmic information content, since its description can be shortened to something like –

PRINT “10” x 50.

(A purist might at this point cry “foul!”, since my shortened description is not itself in binary, but I must crack on.) The bit string has low algorithmic content, since it follows one simple pattern.

By contrast, imagine a bit string of a hundred 1’s or 0’s which has very few, perhaps no regularities – such a string as might be generated by a hundred coin tosses recorded in sequence. Such a bit string would, in all likelihood, have high algorithmic information content, since it would be difficult to compress into a shorter description; some aspects might be compressible, but to nothing like the level of our very regular string.
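To make the contrast concrete, here is a toy sketch of my own (not Gell-Mann's), using a general-purpose compressor as a rough stand-in for algorithmic information content. Compressed length is only a computable upper-bound proxy for the true, uncomputable quantity, but the patterned string visibly collapses while the coin-toss string barely shrinks –

```python
import random
import zlib

def compressed_length(bits: str) -> int:
    """Length of the zlib-compressed string: a crude, computable proxy
    (an upper bound) for algorithmic information content."""
    return len(zlib.compress(bits.encode("ascii")))

regular = "10" * 50                                              # the highly patterned string
random.seed(0)
coin_tosses = "".join(random.choice("01") for _ in range(100))   # few, if any, regularities

print(len(regular), compressed_length(regular))          # the patterned string compresses well
print(len(coin_tosses), compressed_length(coin_tosses))  # the "coin toss" string much less so
```

The difference becomes more dramatic as the strings get longer; at a hundred characters the compressor's fixed overhead blurs it somewhat, but the ordering already holds.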

There are subtleties to the concept of randomness, which Gell-Mann discusses, but which need not detain us here. I refer readers to the actual book should they be interested.

How does Algorithmic Information Content fare as a candidate measure of the sorts of complexity in which we might be interested?

Not very well – “randomness” isn’t quite what we mean by “complexity”; as Gell-Mann points out, a longish string generated by outputs from the proverbial monkey on a typewriter would have higher algorithmic information content than a string of the same length from the works of Shakespeare, but we would surely think of the Shakespearean string as more complex. For such reasons, algorithmic information content has been dubbed a measure of “crude” complexity.

Effective Complexity

Is there a better measure of complexity than crude complexity / algorithmic information content, one which might more fruitfully capture and clarify our intuitions? It seems that there is: “effective complexity”. Effective complexity is the length of a concise description of a string’s regularities. The diagram at the start of this article, taken from Gell-Mann, indicates how effective complexity varies with crude complexity.

The concept of effective complexity is important, as it means we can be a little clearer about whether we are talking about maximization of information, or maximization of patterning. The latter is more central to our concerns with psychology and aesthetics, and is captured at least provisionally in this concept.
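As a toy illustration of the difference (my own construction, and only a loose proxy for Gell-Mann's concept): take a string which is the regular pattern above plus a sprinkling of random “errors”. Crude complexity measures the whole thing, noise included; effective complexity tracks only the description of the regularities –

```python
import random
import zlib

def compressed_length(bits: str) -> int:
    return len(zlib.compress(bits.encode("ascii")))

random.seed(0)
schema = "10" * 50                                   # the regularities: a simple repeating pattern
noisy = "".join(b if random.random() > 0.05 else random.choice("01") for b in schema)

crude = compressed_length(noisy)        # crude complexity: the whole string, noise and all
effective = compressed_length(schema)   # toy "effective complexity": the regularities alone

print(crude, effective)   # the description of the schema stays short, whatever the noise does
```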

For the sake of completeness, I’ll mention here two other concepts – [Logical] Depth, and Crypticity, but make little of them; again, curious readers are referred to Gell-Mann’s book.

Logical Depth is the time it takes to compute from a program or schema to a full description of the system, or at least of the system’s regularities.

Crypticity is something like the reciprocal of this – the time it takes to compute from a full description to a program or schema.

Complex Adaptive Systems

Gell-Mann also considers what we could regard as the subjective pole to these ideas of complexity, the sorts of beings which have evolved to identify and exploit regularities within information. He terms such beings Complex Adaptive Systems, of which the most familiar are biological creatures, including ourselves, but the category also includes certain forms of computerized system which evolved systems such as ourselves have designed.

The identification of regularities involves their condensation into a “schema” or model, and such schemata can then be used as the basis for action. Gell-Mann also talks of compression of regularities.

Schemata are for purposes of description, prediction, and prescription. Gell-Mann is clearly an evolutionary thinker, and regards complex adaptive systems as things which are results of a honing by natural selection; in this regard, I find his triple of purposes pleasing; logically, description comes first, the use of such regularities in prediction second, and the use of such prediction for the prescription of actions to be executed in the world third.

But in evolutionary terms, the order can be reversed – it is the usefulness for survival in the “smart” actions prescribed by the identification of regularities which drives the increasing sophistication of the complex adaptive systems as pattern identifiers.
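A minimal sketch of that triple, on my own toy assumptions (a context-counting “schema” over a bit stream; nothing here is Gell-Mann's formalism): the system condenses regularities into counts, predicts from them, and prescribes an action on the basis of the prediction –

```python
from collections import Counter

class ToySchema:
    """A toy 'complex adaptive system' in the description / prediction /
    prescription sense. Purely illustrative; not Gell-Mann's formalism."""

    def __init__(self, order: int = 2):
        self.order = order       # length of context treated as a regularity
        self.counts = Counter()  # (context, next_symbol) frequencies

    def describe(self, stream: str) -> None:
        """Condense the stream's regularities into context -> next-symbol counts."""
        for i in range(len(stream) - self.order):
            context = stream[i:i + self.order]
            self.counts[(context, stream[i + self.order])] += 1

    def predict(self, context: str) -> str:
        """Predict the most likely next symbol after a given context."""
        candidates = {s: n for (c, s), n in self.counts.items() if c == context}
        return max(candidates, key=candidates.get) if candidates else "?"

    def prescribe(self, context: str) -> str:
        """Prescribe an action on the basis of the prediction."""
        return "act" if self.predict(context) == "1" else "wait"

schema = ToySchema()
schema.describe("10101010101010")
print(schema.predict("10"), schema.prescribe("10"))   # -> 1 act
```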

Limitations

However, unless I’m missing something, there seems to be a gap between the idea of compression of effective complexity and what we would more humanly think of as schemata; a merely mathematical notion of compression may be in danger of elision into an already-interpreted idea of condensation of sensory flux into concepts. There is not really any sort of bridge here between a pure and rather abstract notion of a pattern spotter, and what we might regard as an Actually Existing Pattern Spotter – a mammal, intelligent bird, or whatnot – within the general concept of “pattern spotter”, outside of computerized systems, born mathematicians, and other specialists.

The concept of “schema” runs the danger of getting blurred into something like “shortest mathematical description” in a way which obscures the role of conceptual thought, whilst seeming to have covered it. This is partly because “schema” is in use within other areas of philosophy, with a broader and more psychologistic meaning.

Related to this, there is little indication in this material of any decent general heuristics for deriving effective complexity. Gell-Mann considers pattern extrapolation in a fairly abstract and mathematical way, which I think is fine, and should indeed be part of our understanding of what is meant by “pattern” and “regularity”. But Gell-Mann only gives us an abstract description of what a pattern identifier does.

None of this is to find fault with Gell-Mann, but only to indicate a possible way forward, in that this use of “schema” might not fully capture “concept”, though concepts surely are a way of condensing regularities.

As an aside, an interesting insight afforded by such an abstract and mathematical treatment is that it involves us in what I call “Gödelisation”; it is quite likely, perhaps provable, that we can never arrive at a general “best pattern identifier” – one that would spot and condense all regularities in what we would know to be the neatest way; effective complexity seems to fall prey to problems here in the same way that algorithmic complexity has been proven to. Readers may be aware of such issues from acquaintance with the work of Kurt Gödel and Alan Turing.

Effective Complexity and Literary Theory

Within literary theory, there is a school of thought which privileges foregrounding as the distinctive feature of literary texts. Foregrounding is regarded as achievable by two means – deviation, and extra patterning. I am sympathetic with the identification of these two aspects of literary and poetic works as fundamental. (I am, however, at present uneasy with their subsumption under the function of foregrounding, but my unease must await proper consideration, exploration, and justification elsewhere on this site.)

Deviation and extra patterning are in a sense opposites – deviation being a loss of regularity, and extra patterning an apparently superfluous regularity.

The considerations here help to make precise, and to constrain, the notion of extra patterning, extra regularities, or, as Geoffrey M. Leech puts it, “parallelism in the widest sense of that word.” In conclusion, I’d like to refer back to the Eagleton quote at the head of this article – the distinction between maximization of information (crude complexity) and maximization of patterning (effective complexity) explored in our enquiry could help clarify and further develop Eagleton’s (and Lotman’s) intuitions, rescuing them from surface mystification and paradox, and helping to shed further light on at least one aspect of the “unique kind of internal organisation” of poetry.

Finally, I must point out that this article has only cut a certain path through Gell-Mann’s “The Quark and the Jaguar” for my own purposes – my comments should not be taken as a review of the book as a whole. My focus has been narrow, but the book itself is panoramic, and at times the view is breathtaking.

_____________BELOW HERE UNDER CONSTRUCTION______________

One of the main ways in which maximization of patterning can occur within literature is through the exploitation of the various linguistic levels – at the phonic level, rhythm, rhyme, alliteration, assonance, etc, add regularities, at the syntactic level, parallels can be established, and so on.
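As a crude illustration of regularity at the phonic level (with orthographic initials standing in, very roughly, for actual phonetics; the line is Coleridge's), one can simply count repeated word-initial letters – the surplus of repetition is the “extra patterning” –

```python
import re
from collections import Counter

def initial_letter_counts(line: str) -> Counter:
    """Count word-initial letters in a line: a crude orthographic proxy
    for alliterative (phonic-level) patterning."""
    initials = [word[0].lower() for word in re.findall(r"[A-Za-z']+", line)]
    return Counter(initials)

line = "The fair breeze blew, the white foam flew"
repeats = {k: v for k, v in initial_letter_counts(line).items() if v > 1}
print(repeats)   # {'t': 2, 'f': 3, 'b': 2} – repetition over and above what prose would need
```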

[Complex patterning across linguistic levels – e.g. the use of more purely linguistic patterns to establish a semantic pattern]

[Problems with the foregrounding model]

[Bennett and Gell-Mann’s other two articles.]

Posted in philosophy, poetry

The Mind Ouroboros


Frames

The concept of frames can be traced back, at least, to Kant, who believed that the mind necessarily utilizes Schemas or Schemata. His basic insight was that we understand the world through an internal framework; incoming sensory data, “raw data” as it’s sometimes put, is processed through a system of categories. For Kant, these divide into two types – the a priori “forms” – space and time, which for Kant were respectively Euclidean and Newtonian, and the categories proper, in his terminology – such things as causality and modality (the having of properties). All of the foregoing are what we generally think of as falling under the study of ontology, and are essential to Kant’s understanding of “synthetic” reasoning. Schemata are the link between the forms and categories, and sensory experience; the Schemata render experience intelligible.

For Kant, such schemata were trans-historical – part of the nature of human reasoning itself, and unchangeable – we cannot get outside them to see the world “as it really is”. I am not, here, particularly interested in expounding the ideas of Kant, but rather in the usefulness of this concept of frames. It seems to me that there may be such basic, unchangeable categories (though perhaps they can be altered within scientific disciplines, as has happened to Euclidean and Newtonian frameworks), but also, more changeable frameworks, of a cultural or individual psychological nature, which can alter, develop, or sometimes go awry. These more alterable frameworks might be based on the more fundamental frameworks: a sort of malleable superstructure on an adamantine foundation, the more specific grounded in the more general.

This idea of frames was picked up again, or perhaps reinvented, with the development of Artificial Intelligence in the post-war period. One of the problems which the attempt to build intelligent machines started to encounter was that though computers were good at using abstract logical rules, they had no way of classifying or understanding information about the real world. A possible solution to this was proposed by Marvin Minsky, one of the leading lights in the field, with his “Frame System Theory”:

“A frame is a sort of skeleton, somewhat like an application form with many blanks or slots to be filled. We’ll call these blanks its terminals; we use them as connection points to which we can attach other kinds of information. For example, a frame that represents a “chair” might have some terminals to represent a seat, a back, and legs, while a frame to represent a “person” would have some terminals for a body and head and arms and legs. To represent a particular chair or person, we simply fill in the terminals of the corresponding frame with structures that represent, in more detail, particular features of the back, seat, and legs of that particular person or chair.” Minsky, The Society of Mind. p.245

Particularly important is the idea of “default assignments” – we assume some typical assignments. Thus we deal with things as, in a sense, stereotypes.

Minsky also talks of super-frames and sub-frames, more general frames which would perhaps embrace more specific frames.
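A minimal sketch of these ideas in code – terminals (slots), default assignments, and a super-frame supplying inherited defaults. The names and structure are mine and purely illustrative, not Minsky's notation –

```python
class Frame:
    """A Minsky-style frame, very loosely: named terminals (slots), default
    assignments for the typical case, and a super-frame to fall back on."""

    def __init__(self, name, superframe=None, defaults=None):
        self.name = name
        self.superframe = superframe     # a more general frame
        self.defaults = defaults or {}   # typical, stereotyped assignments
        self.terminals = {}              # slots filled in for this particular instance

    def fill(self, terminal, value):
        self.terminals[terminal] = value

    def get(self, terminal):
        # a specific assignment, else this frame's default, else the super-frame's
        if terminal in self.terminals:
            return self.terminals[terminal]
        if terminal in self.defaults:
            return self.defaults[terminal]
        return self.superframe.get(terminal) if self.superframe else None

furniture = Frame("furniture", defaults={"material": "wood"})
chair = Frame("chair", superframe=furniture, defaults={"legs": 4, "back": True})
chair.fill("seat", "cushioned")

print(chair.get("seat"), chair.get("legs"), chair.get("material"))
# -> cushioned 4 wood : unfilled slots fall back on defaults and the super-frame
```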

A similar idea, perhaps more temporally orientated, is the idea of a “script” – a kind of template of typical things we might expect, and typical actions or responses, within a delineated field (for example, a restaurant). These ideas, though useful, hardly solved all the problems in the field of A.I., but that need not concern us here. Similar ideas to those of frame, framework, schema and script are Koestler’s idea of matrix and Kuhn’s much used (and abused) idea of paradigms.

We have, so far, a sort of duality – of Frame and of what I will call Data.

I’ve indicated that the sort of frames in which I am interested would be the more flexible ones, the ones which are subject to alteration. (It seems that it was the flexibility of human frames which posed one of the difficulties which A.I. then encountered). Such alteration could be refinement, modification, collapse, synthesis or tension with another frame, extension, over-extension or increasing rigidity (as in some forms of obsessive behaviour), and perhaps could take many other forms.

Now, the question is, what causes the alteration, or perhaps, what determines the alteration? Possibly, other frames. Or perhaps the incoming data alters the frames? I’m going to leave that question hanging for a while, but return to it later.

Neurons and Brains

I’ve been exploring the idea of frames because it seems that it gives us insight into how the human mind can do the sort of things that it does – thinking, understanding, being conscious, and so on. I’d like now to take a step back, and consider some aspects of what we generally accept to be the material basis for the mind, which is the brain.

The mind has long puzzled philosophers. One modern school of thought, the “mysterians”, believes that an understanding of consciousness within a materialist framework will always elude us, and, as part of this belief, thinks that advances in the understanding of the brain will be of little use for understanding consciousness. Colin McGinn, a philosopher of this tendency, says –

“How could the aggregation of millions of individually insentient neurons generate subjective awareness? We know that brains are the de facto causal basis of consciousness, but we have, it seems, no understanding whatever of how this can be so. It strikes us as miraculous, eerie, even faintly comic.”

A philosopher of the opposed school, [the source, alas, eludes me for now] says that such aggregations of neurons are exactly the sort of thing which could underpin the mind. This opposed school of thought insists that understanding of the brain will go a long way, perhaps all the way, to helping our understanding of the mind. My own sympathies are with this latter school, though I don’t think, as yet, we have anything like an adequate understanding.

What is it about all these squiggly wiggly neurons that makes them good candidates? First, a fairly basic observation – the brain is the destination for the incoming neuronal bundles, from the senses, and the source of the outgoing bundles, motor neurons, which cause our actions and behaviour. Damage to incoming or outgoing bundles can affect our capabilities, and damage to the brain can too. Crudely, the brain is “in the middle”, so may well be the core component. Even dualists, those who believe the mind is separate from the brain, tend to put the mind/body interface in the region of the brain.

But there are deeper reasons. In the quote above, the word “aggregation” seems to me to be significantly wrong; the neurons are interconnected in vastly complex ways – they are parts within a whole, the whole having a structure. Now, I’ve used here the two words “interconnected” and “complex”, but the development of Complexity Theory indicated that these two concepts are not merely related externally – there’s something about interconnection which is part of the nature of complexity.

I’ve talked about interconnection, but interconnection is a fairly static concept – if we want a picture of this, we imagine the neurons as vastly entangled. The dynamism comes from what the neurons do: they fire when incoming signals from other neurons reach a threshold, transferring an impulse down the cell body and possibly firing other neurons. This signal, to my knowledge, always goes one way. Another important feature is that the more one neuron fires up another, the more likely it is to do so again; the insight from Donald Hebb’s research in this area is that “neurons that fire together, wire together”. (I’m making a massive generalization and simplification here, which doesn’t always hold, but nevertheless the simplification gives us a way forward.) We have, with this finding, a potential source of flexibility. Neurons are themselves quite complex, and their behaviour includes all sorts of subtleties, but the important point for me is the idea of direction, because at the level of complexes of neurons we find that neurons don’t merely feed from the senses to the motor neurons in a simple “handing on the baton” kind of way, but can loop back, so that neuron A might connect to neuron B, neuron B to neuron C, and neuron C back to neuron A. Add to this image that other neurons are also feeding into and out of neurons A, B, and C, that we have billions of neurons, and that the extent of the firing can alter the strength of the connection between one neuron and another – so that process can modify structure – and we have a few ideas in play which give us a glimpse of a very complex and malleable system.
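Here is a toy sketch of those ideas taken together – thresholded firing, a loop A → B → C → A, and a crude Hebbian rule strengthening a connection whenever its source and target fire in succession. It is illustrative only, not a biological model, and every number in it is invented for the example –

```python
import numpy as np

mask = np.array([[0, 0, 1],     # which connections exist (row = target, column = source):
                 [1, 0, 0],     # A <- C, B <- A, C <- B, i.e. the loop A -> B -> C -> A
                 [0, 1, 0]], dtype=float)
W = 0.6 * mask                  # initial connection strengths
threshold, rate = 0.5, 0.1      # firing threshold and Hebbian learning rate (arbitrary)

activity = np.array([1.0, 0.0, 0.0])    # an external input briefly drives neuron A
for step in range(6):
    drive = W @ activity                                  # input each neuron receives
    new_activity = (drive > threshold).astype(float)      # fire if the threshold is crossed
    W += rate * mask * np.outer(new_activity, activity)   # "fire together, wire together"
    activity = new_activity
    print(step, activity, W.round(2).tolist())
# the single impulse circulates around the loop indefinitely, and each pass
# strengthens the connection it has just crossed: process modifying structure
```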

Reentrance and Feedback

The foregoing sketch will probably remind many people of the idea of “feedback”. Feedback is a very important concept within Cybernetics, the study of control and communication within the animal and machinic worlds, and studies of systems: General System Theory, Operational Research, and Management Theory. It is key to understanding how complex systems maintain a steady or optimal state within a changing environment. It deals with the sort of circular causation outlined above, but might perhaps best be regarded as a special case of a wider phenomenon. A thinker who has influenced my ideas here, Gerald Edelman, talks of “reentrance” with reference to neural assemblies, which he is at pains to distinguish from “feedback”. Unfortunately, Edelman is not the clearest of writers, and the point is moot regarding whether he is dealing with a form of feedback.

Most important for me is that feedback alone doesn’t seem to fully account for a system that can alter its actual structure – not its fundamental structure, certainly not its biochemical nature, nor many levels up – but its mid-level structures, in a way that isn’t captured by the idea of a mere change of state – whether of a thermostat or of a much more complex and multi-leveled feedback device.

Squaring the Circle: Framing the Cycles

In the earlier sections of this chapter, I wrote of the duality of frame and data, and left a question hanging – What causes the alteration of frames? In the later sections, I wrote about neurons and such stuff. I would now like to try to pull these different ideas together and attempt an answer to the question.

In a sense, the only thing that can alter a frame is data – we can imagine this as some kind of lack of fit between the data and the frame provoking an adjustment. Yet this seems too much to require data to speak for itself, to interpret itself – as if it can protest at the imposition of an ill-fitting frame, and this seems wrong.

It certainly seems that it is the incoming data which can be the only real source of change in the neural networks; left to themselves, we would expect the patterns of activation of the network to settle into a stable state or an endlessly repeating cycle, the physical equivalent of a solipsistic, self-contained and unchanging world-view (this is to treat the matter abstractly – I don’t know what the physiological results of such a nightmare situation are).

I think the best way of understanding how change comes about is to think of the input, the downstream-back-to-upstream circularity, and synapse strengthening all at once; these things in combination are responsible for the mutability of frames. They are core to an understanding of human creativity.

Aporia

But even at this abstract level, the idea of a frame is sitting uneasily with the idea of a circular network; there is a fundamental difference between the hypothesized frames and the hypothesized networks: frames seem like a phenomenological conjecture, whereas networks are purely physical in nature; frames seem already semiotically interpreted – slots for properties, and so on, whereas networks seem pre-semiotic. In a way, that is okay, as we can see the frames as emergent from the more physicalistic networks, but we still feel the need for some middle steps to make things clearer.

I don’t have an answer to this, a way of bridging the gap between the two models, even though I think the two models both stand a reasonable chance of being valuable to our understanding of the mind/brain. The most I can hope for is that my way of presenting the problem might be useful to its solution. I will conclude with a hopefully suggestive observation on another difference between the models:

The “frame” model seems mainly to be in a dimension “head on” to incoming data – we imagine it as a record, a form, with each of its fields getting populated as the mind shifts attention.

The model of circular neural networks is orthogonal to that – we represent it with inputs to the left, spiraling forms of transformation in the middle, and outputs to the right.

There are possible link-ups though – wider and more all-embracing spirals might be the set frame, and narrower inputs the data. Physiologically, there is no absolute division between structure and state changes at the levels which interest us.

NOTE – I have recently (25/04/2015), since writing this article, come across a passage in John H. Holland’s “Complexity: A Very Short Introduction”, which, if I’m interpreting it correctly, gives some back-up to my points above –

“Loops
The combination of high fanout and hierarchical organization results in complex networks that include large numbers of sequences that form loops. Loops recirculate signals and resources … Loops also offer control through positive and negative feedback (as in a thermostat). Equally important, loops make possible program-like ‘subroutines’ that are partially autonomous in the sense that their activity is only modulated by surrounding activity rather than being completely controlled by it.”



Posted in philosophy