The Traumatic Word (3)

This is the third post in a series on “The Traumatic Word.”

*     *     *     *     *     *

All that glitters is not gold.

— Old commonplace

Even for us, gold still glitters. However, we don’t any longer attend especially either to gold or to glittering . . . We have no sense for that “sense” any longer. Insofar as gold “is” gold for us, it is only as a metal that carries value.

— Martin Heidegger

 

The word gives voice to the silence it breaks.

Sometimes during the second half of my long university teaching career, I would bring a small Tibetan meditation gong to class, to give the students an opportunity to experience two different modalities of listening, as I myself had first experienced them once by fortuitous accident. I would ask the students to find a comfortable position in their chairs, close their eyes gently, and hold themselves relaxed but attentive. Then, before ringing the bell, I would tell them to focus their attention on the sound of the ringing itself, and to hold onto the sound for as long as they could continue to hear it, however dimly, then just to stay quiet and attentive, eyes closed. After giving the ringing sound ample time to die away, I would ring the bell again. This time, however, I would first direct the students not to focus on the ringing of the bell as such, trying to hear it as long as they could, but rather to listen for the silence to return to the bell.

Afterwards, the class and I would talk about the difference between the two experiences of listening. Some of the students reported that they really hadn’t been able to tell any difference. However, others—usually a smaller number, which is to be expected, for reasons I need not discuss here—would report surprise at just how different in quality the two experiences were.

I would then end by encouraging all of the students, whichever of those two reporting groups they belonged to, to practice the two different ways of listening on their own. I know from subsequent feedback that some did, but I also have good grounds for suspecting that most did not—for reasons similar to those I think account for the disparity in size between the two reporting groups, but that, once again, I do not need to discuss here.

As I already remarked above, when I first experienced the difference at issue myself it was not under any special guidance or direction, but just by serendipity. It happened twenty or so years ago. I was quietly meditating one fall morning, with my eyes gently closed, outside the chapel of the secluded Benedictine monastery where I’ve retreated for a few days from time to time for the last quarter-century. As I was calmly and quietly sitting there, thinking nothing, the bell in the chapel tower began to ring, calling the monks to come together for one of their daily sessions of common prayer. Calm and comfortable yet attentive as I found myself at that moment to be, I just continued to sit there, eyes closed, thinking nothing, and just let the ringing of the bell continue to sound. I was so calm and comfortable that I didn’t even find myself listening to the ringing itself. Rather, as I said, I just let it go on, giving it no special attention, but still fully aware of it in my open, attentive frame of mind. To my surprise, as the sound of the rung bell died away, I heard the silence return to the bell, and with it to the world of the monastery as a whole.

Through the slow dying away of the bell’s ringing, I heard the silence itself begin to ring.

*     *     *     *     *     *

Decorations, ornaments and adornments are there to call attention to what they decorate, ornament, or adorn. So they glitter, like gold.

Der Spruch des Anaximander is a manuscript that Heidegger apparently wrote in the 1940s for a never-delivered lecture course, but that was not published until 2010, when it came out as volume 78 of his Gesamtausgabe (GA: the “Complete Edition” of Heidegger’s works published by Vittorio Klostermann in Frankfurt). The title means “the saying (or ‘dictum,’ to use a common Latin-derived term) of Anaximander.” Anaximander was the second of the three “Milesians” (the first being Thales, and the third Anaximenes), so called because all three lived in Miletus, a Greek colony in Asia Minor. The three have gone down in tradition as the first three philosophers. Only one saying or dictum has survived from Anaximander, and that saying is what is at issue for Heidegger in his manuscript.

At one point in the text, Heidegger has a lengthy discussion about gold, and what gold was for the ancient Greeks. I have taken my second epigraph for this post, above, from that discussion (from a passage to be found on page 70 of GA 78). In addition, a bit earlier in the same discussion (on page 67) Heidegger himself cites the German version of the old commonplace I used for my first epigraph for this post, “All that glitters is not gold,” which in the German Heidegger uses is, “Es ist nicht alles Gold, was glänzt.”

That commonplace, Heidegger goes on to add, contains implicitly the recognition that “gold is what authentically glitters, such that on occasion what also glitters can appear to be gold, even though that appearance is a sheer semblance.” The German glänzen means to glitter, that is, to sparkle, glisten, or shine. That last word, shine, can be used as a verb, as I just used it in the preceding sentence, but also as a noun, as when we speak about the shine of a pair of polished shoes, or of gold itself. The noun shine is indistinguishable in sound from the German equivalent, Schein. To form the infinitive of the corresponding verb, “to shine,” however, German adds the suffix –en to form scheinen, which in turn can become a noun again when given a capital first letter, Scheinen. The German phrase “das Scheinen” would need to be translated in some contexts as “the shining” (as in the title of the famous Stephen King novel or Stanley Kubrick’s movie version thereof). In other contexts, however, it would need to be translated differently, as I have done in quoting Heidegger in saying that what isn’t gold can sometimes appear to be gold although that appearance is “a sheer semblance,” which I could also have rendered as “a mere seeming”: “ein blosses Scheinen.”

To be sure, not everything that glitters is gold. However, whatever is gold does glitter. Glittering, sparkling, glistening, shining, belongs essentially to gold, constituting its very being-gold, its very golden-ness. So says Heidegger at any rate. Glittering or shining as such (page 68) “belongs to being-gold itself, so truly that it is in the glittering [or shining: das Glänzen] of gold that its very being(-gold) resides.” Glittering resides essentially in gold regardless, Heidegger says, of whether the gold has been polished up already, or is still dull from being newly mined, or has had its shine go flat through neglect.

Gold glitters. It shines. That is the very purpose of gold, what it is for: to shine. In other words, gold as such, the golden, has no “purpose,” is not “for” anything. It just shines. Gold is simply lustrous, that is, “filled with luster,” from Latin lustrare, “spread light over, brighten, illumine,” related to lucere, “shine.” As essentially shining in itself, gold adds shine to that on which it shines, as it were: as lustrous, filled with luster, it is suited in turn to add luster to what is suited to wear or bear it.

Hence the role that gold has always had as decoration, ornament, and adornment. Decorate derives from Latin decoris, as does decorous. Latin decoris is the genitive form of decus, from the presumed Indo-European root *dek-, “be suitable.” What is decent, from the same root, is what is becoming, comely, befitting, proper; what is decent is what is suitable.

Ornament comes from Latin ornare, which means to equip, to fix up or deck out, to adorn—which last ends up saying the same thing twice, since adorn also comes from ornare, plus the prefix ad-, “to.”

Worn decorously, gold adorns those it ornaments: When it fits, it adds luster to what it decks out.

*     *     *     *     *     *

W. G. Sebald devotes one of his essays in A Place in the Country (New York: Random House, 2013) to Gottfried Keller, the great nineteenth-century Swiss poet, novelist, and story-teller. “One might say,” writes Sebald in the essay, “that even as high capitalism was spreading like wildfire in the second half of the nineteenth century, Keller in his work presents a counter-image of an earlier age in which the relationships between human beings were not yet regulated by money.”

A bit later in the same essay Sebald writes: “It is, too, a particularly attractive trait in Keller’s work that he should afford the Jews—whom Christianity has for centuries reproached with the invention of moneylending—pride of place in a story intending to evoke the memory of a precapitalist era.” Sebald then recounts how, in that story, Jews are welcomed into a shop built not on capital but on barter—a shop, thus, that serves as an example of just such a pre-capitalist era. The non-Jewish proprietress welcomes the itinerant Jewish traders who regularly frequent her shop to come inside to sit and talk.

When the talk in the shop turns to tales of how the Jews abduct children, poison wells, and the like, those Jewish traders, writes Sebald:

merely listen to these scaremongering tales, smile good-humoredly and politely, and refuse to be provoked. This good-natured smile on the part of the Jewish traders at the credulity and foolishness of the unenlightened Christian folk, which Keller captures here, is the epitome of true tolerance: the tolerance of the oppressed, barely endured minority toward those who control the vagaries of their fate. The idea of tolerance, much vaunted in the wake of the Enlightenment but in practice always diluted, pales into insignificance beside the forbearance of the Jewish people. Nor do the Jews in Keller’s works have any dealings with the evils of capitalism. What money they earn in their arduous passage from village to village is not immediately returned to circulation but is for the time being set to one side, thus becoming like the treasure hoarded by Frau Margaret [the non-Jewish proprietress of the shop herself], as insubstantial as gold in a fairy tale.

Sebald then concludes the passage: “True gold, for Keller, is always that which is spun with great effort from next to nothing, or which glistens as a reflection above the shimmering landscape. False gold, meanwhile, is the rampant proliferation of capital constantly reinvested, the perverter of all good instincts.”

In their remarks on gold, Sebald and Heidegger are two fingers pointing to the same thing.

*     *     *     *     *     *

The English word order derives from the same roots as do the English words ornament and adorn. All three come from the Latin ornare, which, as I’ve already noted, means to equip, to fix up or deck out. That is fitting, which is to say decorous, since proper order—well-ordered order, we well might say, as opposed to disordered order (or “dysfunctional” order, to use some currently commonplace jargon, even though it has already lost much of its shine, having been in circulation for quite a while by now)—is there for the sake of what it sets to order, rather than the other way around.

Proper order is an ornament to be worn by what it orders, in order to let the latter come fully into its own radiance, its own shine. Such proper order is rare, so rare as to be genuinely golden.

What is genuinely golden—what shines of itself, and needs no trafficking in the market to give it monetary value—does not really call attention to itself, properly speaking. Rather, like the sun in Plato’s Divided Line at the end of Book VI and Myth of the Cave at the start of Book VII in the Republic, which calls attention to that on which it shines, but, as shining itself, vanishes in its own blinding brilliance, the genuinely golden calls attention to that which it adorns.

Soon after the lines I have used as this post’s second epigraph, in which Heidegger says that we of today have lost all sense for the genuine sense of gold and the golden, he observes that ornaments, decorations, and adornments do not as such call attention to themselves for their own sake, but rather to that which they ornament, decorate, or adorn, for its sake. As he writes (on page 73), “decoration and ornament [der Schmuck und die Zier] are in their proper essence nothing that shines for itself and draws the glance away from others to itself. Decoration and ornament are far rather such wherein [that is, in the “shine” of which, we might say] the decorated is first made ‘decorous’ [“schmuck”: “bejeweled,” that is, “decked out, as with jewels”—so “neat,” “natty,” “smart,” in effect], that is, stately [stattlich, “imposing,” from a root meaning “place”—so: having “status”], something that, upright in itself, has a look [hat ein Aussehen, a word that also suggests “splendor”: “good looks,” in effect, to go with its imposing status] and stands out [hervorragt], that is, itself comes to appearance [zum Scheinen].”

Thus, for instance, jewelry does not distract attention from the one who decorously wears it, the one to whom it is fitting or suited. Rather, decorously worn, jewelry calls attention to the splendor already there in the wearer, adding luster to that luster. It lets the wearer shine forth in all her own glory, shining brilliantly with all her own splendor, radiant.

So adorned, the radiant one is there to be adored.

*     *     *     *     *     *

The two words, adorn and adore, have distinct etymologies. The former, as I’ve already noted, comes from ad-, “to,” plus ornare, “to deck out, add luster to.” On the other hand, adore comes from ad- plus orare—with no ‘n,’ just as English adore is bare of the sound of ‘n’ that gets added to adorn. Orare means “to speak,” most especially in the decorous, stately sense of “praying” or “pleading,” as in delivering an “oration,” a formal speech before a court or other august assembly, a speaking that honors and thereby “praises” the high standing of the assembly being addressed.

Despite the disparate etymologies of the two terms, my own hearing discerns a deeper, semantic resonance between adorning and adoring. To add luster to what is already lustrous, as adornments add shine to those who already shine of themselves, polishing that shine to its own full radiance, and to speak to and of what already speaks for itself, addressing it in such a way as to honor its stature, attesting to its renown, fit together. Each, adorning and adoring, adds luster to the other in my eyes. Each praises the other—as creation, in Christian tradition, is said to praise its Creator.

Adornments speak well of those they decorously adorn. When decorous, adornments fit the adorned, fitting them in such a way as to defer to them, letting the adorned come forth in their own glory, bespeaking the radiance of the adorned, rather than boasting of their own adorning sparkle.

So do I like to think, at any rate. It fits for me. Most especially it fits my experience, years ago, of sitting outside the monastery as the bell rang, calling the community together to pray, and calling my own attention not to itself but to the silence it decorously broke, giving it voice—calling: “Oh come, let us adore!”

*     *     *     *     *     *

I plan to complete this series on “The Traumatic Word” with my next post.

The Traumatic Word (2)

This is the second post in a series on “The Traumatic Word.”

*     *     *     *     *     *

The word in its purest form, in its most human and divine form, in its holiest form, the word which passes orally between man and man to establish and deepen human relations, the word in a world of sound, has its limitations. It can overcome some of these—impermanence, inaccuracy—only by taking on others—objectivity, concern with things as things, quantification, impersonality.

The question is: Once the word has acquired these new limitations, can it retain its old purity? It can, but for it to do so we must reflectively recover that purity. This means that we must now seek further to understand the nature of the word as word, which involves understanding the word as sound.

— Walter J. Ong, S. J., The Presence of the Word (page 92)

The spoken word is a gesture, and its meaning, a world.

— Maurice Merleau-Ponty, Phenomenology of Perception (page 184)

We listen not so much to words as through them.

Many years ago, when I first had to start wearing glasses, which was not until well into adulthood, it took me a while to adjust, as is common. Until that adjustment had taken place, I often found myself seeing my glasses themselves, rather than (or at least in addition to) what I saw through them. My eyes were unsure, as it were, about just where to focus: on my glasses, or on what lay beyond them. During that adjustment period, the glasses were more of a distraction to my vision than an enhancement of it. I found myself wanting to look at my glasses, rather than through them.

Similarly, when some years later I had to start wearing hearing aids, at first they were also more distractions to my hearing than aids to it. I found myself wanting to listen to the hearing aids, rather than through them.

As is true for any good, useful tool, the job of glasses and hearing aids is to vanish into their usage—in the case of glasses and hearing aids, into the vision and audition they are respectively designed to make possible. That’s just what both my glasses and my hearing aids did, at least as soon as I’d adjusted to wearing them.

Insofar as words are no more for us than means of conveying information or “messages” back and forth between “senders” and “receivers,” they too—at least when they are good little words—vanish into their usage. Otherwise, they become “noise” in the sense at issue in information theory: “interference” that distorts the message, just as static does on a radio. Words that call attention to themselves are just so much noise, when it comes to the transfer of information.

It is worth noting that, taken as the Word of God, Jesus is very noisy. He constantly calls attention to himself in one way or another.

*     *     *     *     *     *

At one point in The Presence of the Word Walter J. Ong discusses how the word, as spoken sound, is “noninterfering” (page 147), whereas in contrast the gesture is “interfering” (page 148). By that he does not mean that the word is low on the noise-making scale, and the gesture high on it. Obviously, the contrary is the case. As sound, the word is nothing but noise, whereas a gesture makes no noise at all. The word is to be heard, and therefore must sound off; it must make noise. The gesture, however, is given to be seen.

Of course, in saying such things I am clearly just playing with the word noise, since the noisiness of the word is not a matter of its interference with the delivery of a message, but is instead actually essential to the usefulness of the word for carrying messages. A word that made no noise, in the sense that it did not sound at all, would be a word that remained unspoken and therefore incapable of sending any message, conveying any information, whatever. In turn, however, the same thing applies to the gesture: a gesture that called no attention to itself—which made no noise in that sense—would be no less incapacitated as an information-transfer system than would a never-sounded word. It would be tantamount to a gesture that did not “give itself to be seen” in the first place, and therefore utterly failed to deliver any message at all.

By making such noise about the word noise, by playing noisily with that word, what I want to call to readers’ attention is, at least in part, that when Ong says the sounded word is “noninterfering,” whereas the gesture is “interfering,” he is not using that latter term the same way it is used in information theory. Rather, what he means when he says the sounded word, the voice, is “noninterfering” is, he explains, that “one can use the voice while doing other things with the muscles of the hands, legs, and other parts of the body.” In contrast, the gesture is “interfering”: “It demands the cessation of a great many physical activities which can be carried on easily while one is talking.”

Despite differentiating between gesture and word in that way, Ong nevertheless writes (on page 148) that “[i]t may be that human communication began with gesture and proceeded from there to sound (voice). Gesture would be a beautiful beginning, for gesture is a beautiful and supple thing.” If we take that suggestion seriously, then it may even turn out that the word itself remains a gesture—only a vocal, audible gesture, rather than a non-vocal, visible one. That would still fit with Ong’s point about the voiced word being “noninterfering,” since it would simply require confining “interfering” to non-vocal gestures. And that, in turn, would still leave room for what Ong says next, right after remarking on the beauty of a possible gestural beginning for the word: “But, if this was a development which really took place, the shift from gesture [that is, now: non-vocal gesture] to sound [vocal gesture] was, on the whole, unmistakably an advance in communications and in human relations.”

Yet even if that be granted, it still remains the case that, in the sense of “interference” at issue in information theory, as opposed to Ong’s own usage of that term, it is not just what he calls gesture, that is, what I just suggested might better be called “non-vocal gesture,” that “interferes.” Rather, both his “gesture” (my “non-vocal gesture”) and his “word” (my “vocal gesture”) are essentially “interfering.” That is, both by their very nature throw up obstacles to optimum transparency of any “message” they might be used to carry, any transmission of information they might be used to accomplish. That is because both call attention to themselves, not just to what comes packaged in them.

The beauty of gesture to which Ong himself calls attention is inseparable from gesture’s thus calling attention to itself. Beauty does that. It stops us in our tracks, brings us up short, dazzles us, stuns us, shocks us into silence and admiration—a word from Latin mirare, “to look,” and ad, “to or at,” though we also extend our usage of “admire” with ease to cover our attitude toward perceived auditory beauty, beauty that is heard rather than seen. Both gestures and words (or non-verbal gestures and verbal ones, if that is what the distinction at issue finally turns out really to be) have that arresting quality. Both a raised middle finger and the verbal equivalent, for example, have it.

*     *     *     *     *     *

Whether I silently “give the finger” to people or yell “Fuck you!” at them, in either case I am telling them the same thing. What is more, however, in the very process of telling those to whom they are directed whatever it is they do tell them, both gestures—the nonverbal one and the verbal one—tell it in a way designed to call attention to the telling itself. For both, just delivering information is far from all they are doing, or even the most important thing.

What they are doing, when taken in their fullness as gestures, is actually sharing a world. To be sure, the specific nonverbal and verbal gestures I have chosen as my examples (flipping someone off, or telling someone the same thing verbally) share the world with the person to whom they are directed in a very polemical, which is to say war-like, way (from Greek polemos, “war” or “strife”—which, according to Heraclitus, is “the father of all things”). Such gestures, verbal or not, convey enmity, even hatred. Indeed, it is for that very reason that I have chosen them as my examples.

As Sartre was good at pointing out, hate no less than love is a way of taking the other person seriously. It is a way of remaining genuinely in communication with that other person, rather than breaking the communication off. What breaks off communication—or never lets it get started in the first place—is not hate, but rather the indifference of passing one another by, unheeded.

In communicating with one another, we certainly process information back and forth. By yelling, “Fuck you!” at someone, I convey considerable information to that person, should said person wish to treat my behavior as no more than a message to be processed—ignoring me and focusing instead on decoding whatever information my behavior encodes. Such a decoder could decode lots and lots of bits of information from that single bit of my behavior: information about me (such as information about the current condition of my vocal apparatus, or where I was born, from details of my pronunciation); information about the culture from which I come; information about the decoder himself or herself (including that he or she apparently just did something that somehow triggered my outburst, and may even be under immediate threat of danger from me as a result, should I stop yelling and start acting). My behavior is chock full of all sorts of information, enough to satisfy any would-be decoder. However, in ignoring me to focus instead on decoding the information contained in my outburst, the person to whom I directed that outburst would run the very real risk of just enraging me further through such a display of personal indifference.

Sartre’s point that hating someone is a way of remaining in relationship with that person can be put in a more Heideggerian way by saying that hating is continuing to care about the other person. Ong also makes essentially the same point in The Presence of the Word, when he says that no matter how polemical or even verbally abusive talk between people may become, at its core (page 192) “[t]he word moves toward peace because it mediates between person and person.” As he proceeds to point out (page 193):

When hostility becomes total, the most vicious name-calling is inadequate: speech is simply broken off entirely. One assaults another physically or at least ‘cuts’ him by passing him in total silence. Or one goes to court, where, significantly, the parties do not speak directly to each other but only to the judge, whose decision, if accepted as just by both parties, at least in theory and intent brings them to resume normal conversation with each other once more.**

To pass from speech, no matter how vicious or even abusive, to a fist striking a jaw or a bullet tearing flesh is to cease gesturing at all any longer, whether verbally or nonverbally. To send a fist into the face of another or a bullet into that other’s chest is not to gesture at anyone. It is to break off all gesturing, and therewith to break off all genuine further communication.

To continue with Ong’s ways of formulating things, what is truly distinctive about communication, properly so called, is that it is the sharing with one another of what is “interior” with regard to each of the communicants—sharing it precisely as “interior,” so that it continues, in its very being shared, still to be closed off, unseen, not laid out in the open, in short, continues to be invisible. That is why Ong repeatedly insists that the word as such is sound. Sound alone can plumb the interior depths that vision—or taste or smell or touch, for that matter, in the final analysis—can never attain, depths that vision can never “sound,” as we by no accident say. Sound sounds from, and “resounds” or “resonates” from, the interior of that which is sounding, whether sounding of itself (as does the animal in its cry or the human being in speaking) or sounding through the action of another (as does a melon when thumped or a wall when knocked).

In that telling sense, communication is the sharing of what can never be processed as information, in short, the sharing of the un-sharable. Ultimately, to communicate is to give voice to the incommunicable.

*     *     *     *     *     *

“The spoken word is a genuine gesture, and it contains its meaning in the same way as the gesture contains it. This is what makes communication possible.” So writes Maurice Merleau-Ponty in his 1945 Phenomenology of Perception (translated by Colin Smith, London: Routledge & Kegan Paul, 1962, page 183). Those two sentences occur a bit earlier in the same passage that ends with the line I used for my second epigraph at the beginning of this post. Right after those two sentences, the passage at issue continues as follows (pages 183-184):

In order that I may understand the words of another person it is clear that his vocabulary and syntax must be ‘already known’ to me. But that does not mean that words do their work by arousing in me ‘representations’ associated with them, and which in aggregate eventually reproduce in me the original ‘representation’ of the speaker. What I communicate with primarily is not ‘representations’ or a thought, but a speaking subject, with a certain style of being and with the ‘world’ at which he directs his aim. Just as the sense-giving intention which has set in motion the other person’s speech is not an explicit thought, but a certain lack which is asking to be made good, so my taking up of this intention is not a process of thinking on my part, but a synchronizing change of my own existence, a transformation of my being.

Nevertheless, because to live in the world together is also to live in, with, and by building, “institutions” together, there is a tendency of the spoken word to lose its sonority, as it were—to lose what, favoring the visual over the auditory as our culture has done since the Greeks (that, too, has become institutionalized), we might well call the word’s “shine” or even its “glitter.” The word comes no longer to call attention to itself, but instead sinks down to the level of the commonplace utterance, and language becomes no more than a system of signs. The word no longer calls out to be heard, and to be given thought. Accordingly, the passage from Merleau-Ponty continues:

We live in a world where speech is an institution. For all these many commonplace utterances, we possess within ourselves ready-made meanings. They arouse in us only second order thoughts; these in turn are translated into other words which demand from us no real effort of expression and will demand from our hearers no effort of comprehension. Thus language and the understanding of language apparently raise no problems. The linguistic and intersubjective world no longer surprises us, we no longer distinguish it from the world itself, and it is within a world already spoken and speaking that we think. We become unaware of the contingent element in expression and communication, whether it be in the child learning to speak, or in the writer saying and thinking something for the first time, in short, in all who transform a certain kind of silence into speech. It is, however, quite clear that constituted speech, as it operates in daily life, assumes that the decisive step of expression has been taken. Our view of man will remain superficial so long as we fail to go back to that origin, so long as we fail to find, beneath the chatter of words, the primordial silence, and as long as we do not describe the action which breaks this silence.

Silence is broken by the action of speaking, of sounding the word. Hence, Merleau-Ponty ends his long passage with the line I already used as my second epigraph for this post:

The spoken word is a gesture, and its meaning, a world.

*     *     *     *     *     *

My next post will continue this series on “The Traumatic Word.”

** In future, I may devote one or more posts to how it stands between the word, sound, and peace—especially today, our endless day of global market capitalism. If so, I may call the post/s something such as “Shattering Silence of Peace.”

The Traumatic Word (1)

In the strict sense, the word is not a sign at all. For to say it is a sign is to liken it to something in the field of vision. Signum was used for the standard which Roman soldiers carried to identify their military units. It means primarily something seen. The word is not visible. The word is not in the strict sense even a symbol either, for symbolon was a visible sign, a ticket, sometimes a broken coin or other object the matching parts of which were held separately by each of two contracting parties. The word cannot be seen, cannot be handed about, cannot be “broken” and reassembled.

Neither can it be completely defined.

— Walter J. Ong, S. J.

We would like language to be no more than a system of signs, a means for conveying information. At least since Aristotle, and down past C. S. Peirce to the present day, that view of language has been all but universally taken for granted, just assumed as true. It isn’t, as Walter J. Ong realized.

Ong was a United States professor of English who focused upon linguistic and cultural history—especially the cleft between oral and literate cultures, which was the topic of his most influential work, Orality and Literacy: The Technologizing of the Word, originally published in 1982. The lines above are taken from an earlier work, however. They are from the next-to-last page of The Presence of the Word: Some Prolegomena for Cultural and Religious History, first published in 1967 but consisting of lectures Ong gave by invitation at Yale in 1964, as the Dwight Harrington Terry Foundation Lectures On Religion in the Light of Science and Philosophy for that year.

Besides being a professor of English, with a Ph.D. in that field from Harvard, Ong had done graduate work in both philosophy and theology, and was also a priest of the Society of Jesus, that is, the Jesuit order, as the “S. J.” after his name indicates. That religious provenance is manifest in his work. In The Presence of the Word, it is especially evident in Ong’s focus not just on any old word, so to speak, but on “the” word in a particular sense. His concern in his Terry Lectures is not just on “words in general,” as the ordinary way of taking his title would suggest. So understood, “the word” in Ong’s title would function the same way “the whale” functions in the sentence, “The whale is a mammal,” which is equivalent to “All whales are mammals,” thus picking out a feature that is common to whales in general, applying indifferently to each and every whale whatever. Ong’s underlying focus in his Terry Lectures, however, is not upon words in general but rather upon the word in the distinctive sense that one might say, for example, that Mount Everest is not just a mountain but rather the mountain, the very embodiment of mountain as such.

Befitting the intent of the grant establishing the Terry Lectures, Ong’s underlying focus in The Presence of the Word, furthermore, is not upon some word that might come out of just anyone’s mouth. It is, rather, upon one uniquely singular word that comes out of one uniquely singular mouth—namely, “the Word of God.” At issue is the Word of which John says in the very opening verse of his version of the Christian Gospel (John 1:1): “In the beginning was the Word, and the Word was with God, and the Word was God.”

Thus, to put it in terms that became traditional within Christianity only long after John but based upon his Gospel, Ong’s underlying focus in The Presence of the Word is on Christ, the Second Person of the Trinity.

*     *     *     *     *     *

Alain Badiou’s seven-session seminar in 1986 was devoted to Malebranche (published in French by Fayard in 2013 as Malebranche: L’être 2—Figure théologique). In his session of April 29, 1986, Badiou argued that Malebranche, being the committed Christian thinker that he was, found it necessary to think of God’s being (être) in terms of the cleavage (clivage) of God into Father and Son—a self-cleavage, we should note, though Badiou himself calls no special attention to it at this point, that is definitive of the Christian God in God’s very being, such that God is God only in so self-cleaving.

However, to think of God’s being by thinking it back into his self-cleavage into Father and Son is to empty the thought of God of any substantial content beyond self-cleaving action itself: “In the retroaction of his cleavage,” as Badiou puts it (page 149), “God is empty: he is nothing but his process, his action.” God, so thought, is nothing but the very action set in action by the act of God’s self-cleaving. God voids God-self of any substantively separate self in such self-cleavage, and is only in such vanishing.

*     *     *     *     *     *

It is no accident—and it is deeply resonant with the opening of John’s Gospel, it bears noting—that Walter Ong, long after Malebranche but more than twenty years before Badiou’s seminar on the latter, says the very same thing of the word. According to Ong (page 9 of The Presence of the Word), the emergence of electronic media in the 20th century “gives us a unique opportunity to become aware at a new depth of the significance of the word.” Not many pages later (on page 18) he expands on that point, writing: “Our new sensitivity to the media has brought with it a growing sense of the word as word, which is to say of the word as sound.” That growing sense of the word as word calls upon us to pay “particular attention to the fact that the word is originally, and in the last analysis irretrievably, a sound phenomenon,” that is, the fact that originally and always the word sounds. The word as word—which is to say the word as saying something—is the word as sound. The word only speaks by sounding.

Not every sound is a word, of course. However, every word is a sound. Or, to put that more resoundingly—that is, to make the sound louder (using the re- of resound not in its sense of “again,” but rather in its intensifying sense, as when we speak of a “resounding success”)—the word as word is nothing but sound, or rather sound-ing. As Malebranche’s God is nothing but his own process or action, so is the word nothing but “how it sounds,” if you will.

The word as sound, Ong insists repeatedly, is pure event. “A word [as spoken sound] is a real happening, indeed a happening par excellence” (page 111). In that sense, we might say that the word never is, but rather forever vanishes. The word as word is a “vocalization, a happening,” as Ong puts it at one point (page 33), adding a bit later (on pages 41-42):

Speech itself as sound is irrevocably committed to time. It leaves no discernable direct effect in space[. . .]. Words come into being through time and exist only so long as they are going out of existence. It is impossible [. . .] to have all of an utterance present to us at once, or even all of a word. When I pronounce “reflect,” by the time I get to the “-flect” the “re-” is gone.* A moving object in a visual field can be arrested. It is, however, impossible to arrest sound and have it still present. If I halt a sound it no longer makes any noise [that is, no longer “sounds” at all].

The word’s sounding is its event-ing, its coming forth in its very vanishing: as sounding, it “does not result in any fixity, in a ‘product,’” but instead “vanishes immediately” (page 95). The word as such is a vanishing that, in so vanishing, speaks, or says something. It speaks or says, as Ong observes (page 73), in the sense “caught in one of the accounts of creation in Genesis (1:3): ‘God said, Let there be light. And there was light.’ ” Such saying is creation itself, as the very letting be of what is bespoken.

In thus vanishing before what it calls forth, just what does the word—not just any old word, but the word as word—say?

It says the world.

*     *     *     *     *     *

More than once in his lecturing and writing, Heidegger addressed a poem by Stefan George entitled “Das Wort” (“The Word”), the closing line of which is: “Kein ding sei wo das wort gebricht.” In German, gebrechen means “to be missing or lacking”; and sei is the subjunctive form of the verb sein, “to be”—as, for example, in the line “If this be love, then . . .”   If we take sei that way in George’s poem, then his closing line says something such as: “no thing may be, where the word is lacking.” It would then express the relatively commonplace idea that, if we don’t have a name for something, as a sort of label to attach to it, then that thing doesn’t really take on full, separate status for us, such that we can retain it clearly in our thought, memory, and discourse with one another. That’s the idea that a thing really and fully “is” for us, separate and distinct from other things, only when we come up with such a name by which to label it—as, for example, an old bit of what passes for popular wisdom has it that we, who do not have a whole bunch of different names for different qualities of snow, such as the Eskimos are said to have, are not really able to see those differences, at least not with the clarity and ease with which the Eskimos are purported to be able to see them.

At the same time, however, sei is also the imperative form of the same verb, sein, “to be”—the form, for instance, a teacher might use to admonish a classroom full of unruly children, “Sei ruhig!” (“Be still!”). Taken that way, George’s closing line would have to be rendered as the imperative, “Let no thing be, where the word is lacking.”

What’s more, gebrechen, “to be missing or lacking,” derives from brechen, “to break,” which is not heard any longer at all in “missing” or “lacking.” At the same time, used as a noun, ein Gebrechen means a more or less lasting debilitation of some sort, such as a chronic limp from an old broken leg, or a mangled hand from an industrial accident (and it is interesting, as a side-note, that “to lack” in German is mangeln). If we were to try to carry over some part of what thus sounds in the German gebrechen, then we might translate the word no longer as “to be missing or lacking,” but instead by something such as “to break” (as the waves break against the shore), or “to break off” (as a softly sounded tone might suddenly be broken off in a piece of music, perhaps to be suddenly replaced or overridden by another, more loudly sounded one—or by a demanding call coming in on a cell-phone with a ringer set on high volume), or “to break up” (as the voices of those stricken by grief might break up when speaking of their losses).

Hearing gebricht along such lines, the closing verse of George’s poem “The Word” would say something to the effect that where the word breaks, or breaks off, or breaks up, there is no thing.

The way I just worded the end of the preceding sentence—“there is no thing”—is intentionally ambiguous, designed to retain some of the rich ambiguity of George’s own line, most especially a part of its ambiguity which is important to what Heidegger would have us hear in that line. To say that where the word breaks, or breaks off, or breaks up, “there is no thing” can be taken two different ways. First, it can be taken to say that no thing “exists.” That way of taking it would fit with the presumably common way of taking George’s line articulated above, whereby that line says that things fully “are” or “exist” for us as distinct and separate things only when we have names for them in their distinctness. However, the same phrase, “there is no thing,” can also be taken in a second way, one in which the first word is emphasized: “there”—that is, at such and such a specific place. At what place, exactly, would no thing be? By George’s line, no thing would be exactly there, where the word breaks up, breaks off, just breaks: There, where the word breaks, don’t look for any thing. There, where the word breaks, you will have to look for something else altogether, something that really is no “thing” at all.

Yet if we are not to look for any thing there, where the word breaks, just what are we to look for? What are we to expect to take place there, where the word breaks? Heidegger’s response to that question is that there, where the word breaks, no thing, but rather the “is” itself takes place—the very letting be of whatever may be, as it were, takes place there.

“Thar she blows!” old whalers would call, at least by our stereotypes of them, when a whale broke the water’s surface again after diving when harpooned. “There she be!” they could as well have said, though less colorfully. Well, where the word breaks, there be world.

Just how would the word break—in the sense that the waves break against the beach or Moby Dick breaks the ocean’s surface—if it were not as sound, breaking against silence? Sounding in the silence, the very silence that it breaks, the word is word: It speaks.

As I said before, what the word says—what it says there, where it breaks out, and up, and off as sound—is world.

*     *     *     *     *     *

At this point, I will break off my reflections on “The Traumatic Word,” to resume them, given the breaks to do so, in my next post.

* That is worth repeating. So Ong repeats it almost twenty years later, in Orality and Literacy, just varying his example: instead of using “reflect,” he uses “existence,” and says that by the time I get to the “-tence,” the “exist-” no longer exists. That example especially well suits the word itself, which as word—that is to say, as sound sounding—“exists only at the instant when it is going out of existence,” to use Ong’s way of putting it at one point in The Presence of the Word (page 101).

Shattering Wholes: Creatively Subverting the University and Other Mobs–Final Fragment

After a long interruption, I am resuming work on this blog. The post below is the last of three in a series under the same general title—the last of three “Fragments” of “Shattering Wholes.”

*     *     *     *     *     *

Every critique of the present has its right only as a mediated illumination of the knowledge of future necessities. All fixation on grievances clouds vision into the essential; it lacks what alone supports critiques: the capacity to differentiate that arises from dedication to something not yet real—that is, present at hand—but therefore all the more originally having the rank of what already is.

— Heidegger, Überlegungen VI, §113 (GA 94)

Only one who has once overcome contempt for others has no further need to feel superior in order to be great—which is to say to be, and let others fall where and how they may.

— Heidegger, Überlegungen VI, §140 (GA 94)

Last fall, on Saturday, November 29, 2014, memorial services in Colorado commemorated the 150th anniversary of the Sand Creek Massacre. On that date in 1864 a large body of Colorado Territory militia under the command of Col. John Chivington, who was also a Methodist preacher, slaughtered around 160 peaceful Cheyenne and Arapahoe Indians, mostly women and children, and then mutilated their corpses for fleshly souvenirs–including vulvas, breasts, and penises to be flown atop flags and pennants as the butchers rode away celebrating their glorious victory.

In addition, on the same day as the 150th anniversary of the Sand Creek Massacre, another event also took place. That day, November 29, 2014, was the day on which an Egyptian court formally dismissed all charges against former Egyptian President Hosni Mubarak, who was overthrown in 2011 during the so-called Arab Spring.

The two events of the Sand Creek Massacre, on the one hand, and the official exoneration of Mubarak, on the other, are separated in time by a century and one-half. Nevertheless, those two events are connected in telling ways, ways much more important than the trivial fact that they both took place on the same day of the same month, though 150 years apart. Above all, the two events, the Sand Creek Massacre in 1864 and the exoneration of Mubarak 150 years later, both embody efforts by the powers that be to secure their power. Both are examples of power “circling the wagons,” as it were, to protect itself.

That image of “circling the wagons” derives, of course, more from the time of the Sand Creek Massacre than from the much more recent times of Mubarak. It comes from what is in effect dominant US culture’s sanctioned narrative of the westward expansion of the United States in fulfillment of its supposed “Manifest Destiny.” That is the narrative in accordance with which the United States was divinely destined to spread itself from the Atlantic to the Pacific, across the whole expanse of North America between Mexico and Canada–or at least what the United States left of them, especially Mexico, after that expansion.

The story of the Sand Creek Massacre is granted a place within that larger narrative. It is usually a small place, as befits what is presented in the meta-narrative as a regrettable exception to the generally glorious story of US exceptionalism.

In that broader story, waves of fabled wagon trains carried intrepid settler-families west during the 19th century, across the Great Plains and the Rocky Mountains, to the western edge of California and the Pacific Northwest, fulfilling the United States’ self-proclaimed destiny. As those wagons rolled west, they were subject to attacks by Indians presumptuous enough to resist the fulfillment of that very destiny, no matter how manifest it might have been to those who proclaimed and enacted it. To repel such attacks and overcome such resistance, the westward-tending settler-trekkers would “circle the wagons,” as the story goes. They would thereby create a wall of protection for themselves, a wall behind which they could stand to use their massively superior killing technology to mow down the unfriendly “savages” who dared to attack them as invaders.

The 150th anniversary of the Sand Creek Massacre was marked not only by various memorial services—especially but not exclusively in Colorado, where the massacre occurred—but also by various official apologies pertaining to the atrocities performed at Sand Creek on November 29, 1864. To start with the most publicized example, on Wednesday, December 3, four days after the anniversary of the massacre itself, during a memorial ceremony at the State Capitol, Colorado Governor John Hickenlooper became, according to his own office, the first Colorado governor to issue an official public apology for the butchery that had occurred at Sand Creek a century and a half before.

Just the other day as I am writing this, a court in South Carolina voided the convictions of the “Friendship 9,” who publicly broke South Carolina’s Jim Crow laws back in 1961 by daring to sit at a lunch-counter designated “Whites only,” and the prosecutor officially apologized for what had been officially done to them back then. South Carolina thereby apologized for a wrong it had committed only fifty-four years before, which compares favorably with the one-hundred-and-fifty years it took Colorado to apologize for the butchery it inflicted on 160 or so innocent American Indians at Sand Creek in 1864, which took place only a little less than one hundred years before the butchery of justice in the case of the Friendship 9. If those figures are any indication of general human progress, and if the rate of such improvement can be presumed to remain steady across time and countries, then perhaps we can hope that it will take only about 24 years for Egypt to apologize for its whitewashing last November 29 of Mubarak’s various butcheries.

At any rate, no official Egyptian apology for the wrong whereby Egypt officially dismissed all charges against Mubarak can be expected until the officiating power in Egypt feels safe and secure enough to issue it. That, in turn, will only come once the conditions that triggered the commission of the original wrong in the first place have ceased to exist. That is, only once everything that was in play in the Arab spring in 2011 that threatened to subvert Egyptian officialdom has withered away in one fashion or another, will it then be safe for official Egypt to admit to its official wrong, and officially apologize for it.

To put the point generally and simply, it is only when such apologies no longer cost anything to the entities that, through their representative mouthpieces, make them, that they will be made at all. Such official apologies are made, as a rule, only when they no longer really accomplish anything. Or rather, all they really accomplish is further to solidify the coercive power that is apologizing for its own past abuses—to help circle the wagons ever more tightly, as it were.

At issue is not the integrity of the individual mouthpieces through which the apology gets issued. For example, I have no reason to doubt the personal integrity of Colorado Governor John Hickenlooper (at least no reason aside from the fact that he is an elected official of an official state apparatus, which should always make one somewhat sceptical). I have even less reason to doubt the personal integrity of the prosecutor in South Carolina who officially apologized to the Friendship 9 the other day, and least reason of all to doubt that of the judge there who officially voided their convictions and expunged their records. I’m not quite as free of suspicion toward the members of the Egyptian court that dismissed the charges against Mubarak, but even in that case I am not interested in raising any issues of personal integrity. That is simply not my point.

My point, rather, is that we should institutionalize in ourselves suspicion against institutional apologies, and the institutions that sooner or later (most often later) issue such apologies for their own past institutional misbehavior.  We should never just trust an institution when it issues such an apology. Rather, such official apologies should give us even more reason to distrust the institutions issuing them.

Years ago, I used to warn students in my classes never to trust anyone who made a point of telling you how honest he was, since he was probably picking your pocket even while he spoke. That applies even more to institutions than to individuals, and most especially to institutions wielding coercive power of any sort.

Even if I trust Governor Hickenlooper personally, I do not trust the State of Colorado, that “authority” for which, as Governor, he spoke his recent official apology for the Sand Creek Massacre of 1864. The State of Colorado has too much to gain, and nothing to lose, by issuing such an apology—too much to gain and too little to lose for me to take it at its word.

Nor was it only the State of Colorado that apologized recently for the role it played with regard to the Sand Creek Massacre. So did two universities. One of them (the University of Denver) is itself in Colorado. However, the other (Northwestern) is in Illinois. The University of Denver and Northwestern University both issued apologies pertaining to the Sand Creek Massacre because the two schools share a common founder: John Evans. Besides going around and founding institutions of higher education, John Evans also preceded John Hickenlooper in the Colorado Governor’s chair—though when Evans was Governor, Colorado was still a Territory, not yet a State. Evans, in fact, was Colorado Territorial Governor back when the Sand Creek Massacre occurred, and the Colorado troops that did all the massacring did so under his final authority. That particular buck stopped with him.

I personally know almost all the faculty members on the University of Denver (DU) committee that researched and wrote the report detailing Evans’s culpability in the massacre, his involvement in which led to the recent DU apology. Over the many years that I taught at DU, I worked with them. I respected and liked them. I still do. I have no doubt whatsoever about their personal integrity, their scholarship, or their ethical commitment. I have read their report, and find it to be a thorough, thoroughly admirable analysis.

Thus, toward the DU committee and their report, I feel no suspicion at all. I trust the committee. I do not, however, trust the University that commissioned their work, nor its pronouncement of regrets with regard to the massacre in which its founder had an important hand. The University has too much to gain, and nothing to lose, by issuing the committee’s report with its official imprimatur, and adding an expression of institutional chagrin at the University’s founder’s complicity in the Sand Creek Massacre.

To an extent, at least, universities are themselves coercive institutions. Even insofar as they are not, however, it was nevertheless to serve such institutions that the University first arose; and ever since it arose the University has continued to provide such service. The University exists for the sake of “authority,” that is, coercive power. We should therefore always be suspicious of universities and their proclamations, most especially when those proclamations tend to cast the University in a good light, as uttering apologies for old wrongs can easily do.

That the University has much to apologize for is a given. The University has committed wrongs aplenty to go around to all the diverse universities that are its individual class-members. There are, for example, many instances of collusion between the University and such more directly and obviously coercive institutions as the army and the police. Plenty have occurred during my own lifetime, and I will mention only a few of the most egregious.

In 1968 at the University of Nanterre, in the France of De Gaulle’s “Fifth Republic,” students went to the streets protesting the American war in Vietnam, French collusion with that war, especially through the University system itself, and in general the whole market-capitalist fabric that underlay such acts of official violence. What began with those protests at Nanterre soon enough culminated in the largest general strike anywhere ever, one that shut the whole of France down—but which has been glossed over since, in the officially sanctioned memory, as no more than a “student revolt,” one seeking to increase such individual liberties as what used to be called “free love,” in Paris in May ‘68.

Back at the beginning of that whole process, when the protesting students first took to the streets of Nanterre, authorities at the University there called out the cops. As Kristin Ross, an American professor of comparative literature, writes in her excellent study, May ’68 and Its Afterlives (University of Chicago Press, 2002, page 28): “The very presence of large numbers of police, called to Nanterre by a rector, Pierre Grappin, who had himself been active in the Resistance [to the Nazis during the German occupation of France in World War II], made the collusion between the university and the police visible to a new degree.”

Not to be outdone by their French counterparts, administrators at American universities soon followed Grappin’s lead, calling in the police or the National Guard to quell student protests on their own campuses. That included, most famously, the protests at Kent State University in Ohio in May of 1970, after Nixon and Kissinger unleashed the American invasion of Cambodia. Then Ohio Governor Jim Rhodes called in the Ohio Army National Guard, which soon killed four unarmed Kent State students and wounded nine others, permanently paralyzing one.

That in turn set off waves of student protests at other universities across the country. Among them was what came to be known as “Woodstock West.” That took place at the same University of Denver that recently apologized for its founder’s culpability for the Sand Creek Massacre. In the spring of 1970, the spring of “Woodstock West,” then Chancellor Maurice Mitchell appealed to then Colorado Governor John Love, who called out the Colorado National Guard to rout the protesting DU students who, eschewing violence, had set up a shanty-town of protest on the DU campus, where I joined the faculty myself a little over two years later, returning to my native Colorado after three years of being occupied elsewhere.

I began this current series of three posts—three “Fragments” under the same general title of “Shattering Wholes: Creatively Subverting the University and Other Mobs”—with a quote from an essay by Jean-Claude Milner about the University as an institution in service to coercive power, that power that lays claim to being the “authority” in charge of things at any given time. In his essay Milner does a nice job of pointing out how, as the identity of “authority” changes over time, the University undergoes a change in masters, as well as in how exactly it renders those masters service.

The University as we have come to know it first developed during the Middle Ages. At that time the University arose, as Milner points out, in order to produce more priests for the Christian Church, the authority of the day. Especially with its insistence on celibacy for the priesthood, the Church was constantly in need of more priests, and the job of the University was to provide them.

Then in the modern era, Milner explains, as the authority of the Church waned and came to be replaced by the modern nation-state, so did the needs of authority change. What it needed “more” of was no longer priests. Instead, modern power needed more members of the bourgeoisie. So that became what the University turned out: good bourgeois citizens.

Today, however, things have changed once again. What contemporary authority needs more of today is no longer good bourgeois citizens. What authority needs more of today is broader—and emptier—than that. What the powers that be today need is ever more of what Milner aptly calls “agents of the market,” which above all means good consumers for the products that market markets.

So that is just what the University produces today: all sorts of obedient agents of the global consumer market. As Milner writes (L’universel en éclats: Court traité politique 3, Verdier, 2014, page 104): “Sellers, buyers, producers, consumers form [what Freud called] a ‘natural mob [or “mass,” “crowd,” “group”: all being possible as translations of the French foule, which Milner uses for Freud’s German term Masse, which is itself most often rendered by “group” in the standard English translation of Freud’s works].’ From now on, that is coextensive with the entirety of humanity. It dedicates itself to a constant growth. To that growth of a mob taken for natural, the artificial mob that is the University wishes to offer its assistance.”

Whichever presumably “natural” mob it may serve at a given time, the obviously “artificial” mob of the University turns all into one, both as assembly of persons and as system of knowledges—of all the “arts and sciences,” to use a term that began to become dated about three decades ago, at least at DU, where I spent almost all of my professorial career, and where the old “College of Arts and Sciences” was rendered defunct by the then-resident University authorities in the mid-1980s. Such turning into one of all persons and knowledges only befits the name of the institution charged with that task: University, from Latin unus, “one,” and versus, the past participle of the verb vertere, “to turn.”

Today, in service to the rulers of the global marketplace, the University turns everyone into a good consumer, and everything into a product to be consumed. That includes, especially, turning all who attend its classes into good, never sated consumers of “information” and—first, last, and above all—faithful, lifelong “consumers of education,” to use the corporate-market jargon favored by up-to-date University administrators today.

At the very end of his classic, Masse und Macht, first published in German in 1960 and translated into English by Carol Stewart as Crowds and Power (London: Victor Gollancz, 1962), Elias Canetti, who received the Nobel Prize for literature in 1981, writes this:

The system of commands is acknowledged everywhere. It is perhaps most articulate in armies, but there is scarcely any sphere of civilized life where commands do not reach and none of us they do not mark. Their threat of death is the coin of power, and here it is all too easy to add coin to coin and amass wealth. If we would master power [by which Canetti, as I read him, means “break its hold on us”] we must face command openly and boldly, and search for means to deprive it of its sting.

For those who are under the command of the University, as I was for my entire adult life until my recent retirement and elevation to emeritus professor status, Milner points the way to heed Canetti’s admonition—an admonition that, if anything, calls for heeding even more loudly today than it did 55 years ago, when Canetti first issued it (or even just 21 years ago, when he died). It is the way of cheerful, apparently compliant subversion indicated in the quotation with which I began this three-fragment series, and by repeating which I will now end it. The lines come from page 114 of his L’universel en éclats, which most appropriately means “The universal in pieces” (or “in fragments”), in his essay called “De l’Université comme foule,” “On the University as mob”:

The University is not an alma mater, but a milk-cow.   Not just scoundrels can milk it. Neither to believe it, nor to believe in it, nor to serve it, but to serve oneself to it, should be the order of the day. To place in doubt, though it be only by detour, one, several, or all, facile universals—that program is not easy, and not without risk. But being wise doesn’t preclude being sly. It is possible for the wise to shatter the mass.

Shattering Wholes: Creatively Subverting the University and Other Mobs–Another Fragment

“You see, it’s easy for the musicians to feel as if they were serving the conductor. They even call their rehearsals and performances ‘services.’ The very physical structure of the organization—with the orchestra radiating out from a central raised platform and the conductor standing over them—promotes that dynamic. In this kind of an environment, many orchestral musicians feel disconnected.”

“Yes,” I said, nodding. “It’s a perfect setup for ‘Shut up, and do what you’re told.’”

“Exactly. The very context of an orchestra fosters a culture in which the players don’t own the work; the conductor does.”

–Roger Nierenberg

 

There is a difference between trusting someone as a leader, and being dependent on someone. Leadership depends upon trust. What depends upon dependency is something else, however. It is tyranny. Leaders build trust in those they lead. Tyrants build insecurity.

The approach to conducting that Roger Nierenberg models in his Music Paradigm program—as embodied in his novel Maestro: A Surprising Story About Leading by Listening (Portfolio, 2009), from early in which (page 20) the citation above is taken—provides a fine example of genuine leadership. As the citation suggests, the exercise of such leadership may well require working against the grain of the very organizational or institutional setting within which it takes place. That is especially the case whenever that setting is both built upon and designed to foster dependency rather than trust.

Nierenberg makes the connection between leadership—at least the sort he models—and trust explicit in an even earlier passage, near the very start of the novel (page 5). The fictional narrator, a business executive facing a downturn in company business, comes home from work one day and overhears a conversation between his daughter and Robert, her music teacher, about the new conductor of the orchestra to which Robert belongs. His interest piqued by what he hears, the narrator asks Robert what is so special about the new conductor. Robert replies: “When he’s on the podium it’s as if the differences between us [various musicians in the orchestra] somehow magically disappear, which in turn promotes trust and confidence.” “Trust in him?” the narrator asks. After hesitating, Robert replies: “I guess so. But I think we get the feeling that he trusts us. Somehow that makes us work together so much better. It never seems as if he’s dictating. You always feel like you’re contributing toward something bigger than yourself.”

As Nierenberg depicts his sort of conductors, they, too, are guided by a vision of something bigger than themselves. In the later parts of the brief novel, the maestro of the title repeatedly points to how the good conductor must always be guided by such a vision. In the case of conductors, it is an auditory vision, as it were: a vision of how the score being played here and now by this given orchestra, with all of its diverse parts, diverse talents, and diverse degrees of accomplishment, can sound, if all the diverse musicians who make up the orchestra can indeed be brought fully to trust themselves and one another, and to give themselves over to the piece.

The “eyes” that can see such visions—regardless of whether they be eyes or ears or whatever other organs—are the eyes of love. Leadership guided by such visions, and in turn guiding others to share them, is a loving leadership.   It is creative: it brings into being.

Such leadership is magical.

*     *     *     *     *     *

Mentioning magic, at one point in his book-length analysis of the Harry Potter films, published just this last spring (Harry Potter: À l’école des sciences morales et politiques, PUF, 2014, page 51), Jean-Claude Milner remarks that “one might define magic as an integrally anti-capitalist enterprise. Because it can transform objects without labor and without machines, it makes the material base of capitalism, which is to say surplus value and the power of labor, disappear.”

So conceived, magic—as celebrated not only in the Harry Potter novels and films, which might, because of their lack of significant Christian references, be accused of blasphemy by those defensive about their Christianity,* but also in Tolkien’s Lord of the Rings and other “hobbit” narratives, and even in C. S. Lewis’s blatantly Christian Chronicles of Narnia—is inherently subversive of the ruling power of our endless day. Yet magic, of course, has a power of its own, one that can all too easily be made to undergo a completely non-magical transformation into the snakiest imaginable servant of what the better angels of its nature would have it subvert.

There is a scene towards the beginning of Harry Potter and the Deathly Hallows: Part I—which came out in 2010, the first of the two-part finale to the Harry Potter films—that serves well as a counter-model to the leadership exemplified by Nierenberg’s “maestro.” Voldemort, the Dark Lord of the films, has returned, literally from the other side of the grave, to grasp a second time for unchallenged power over wizards, witches, and “Muggles” (i.e., ordinary mortals) alike. He has called together, at one of their castles, all the heads of the old sorcerer families that supported his return, and at one point during the proceedings he subjects the entire assembly to a demonstration of his power, and of what awaits any of them who may for whatever reason run afoul of it. Above the table where they are all seated, Voldemort floats the paralyzed but very much still living and conscious body of Charity Burbage, Professor of Muggle Studies at the Hogwarts school of sorcery, who has made the mistake of teaching the equality of Muggles and sorcerers and the legitimacy of marriage between them. “Dinner!” says Voldemort after speaking a few apt words, therewith unleashing Nagini, the magical snake who is his irreplaceable supporting companion, to devour her as they watch.

The lesson is clear, as Milner notes in his book on the Harry Potter films when he discusses the scene. By his act, writes Milner (pages 107-108), Voldemort lets those who have thought to serve themselves by serving him “see a close-up of what they had chosen to ignore: the power they have worked to put in place accepts no limits to its own exercise.” Such a power will exercise itself, regardless of consequences. By its very nature, it is cruel, such that “even if a cruelty shows itself to have no utility [on its own], that will be no reason not to pursue it to the extreme.” Indeed, “to the contrary,” since the whole point of such egregious acts of cruelty is precisely to display the unlimited nature of the claim to power so exercised. What those who are made to witness such displays have thrust upon their attention is their own impotence in the face of such power. “In a general way,” what Voldemort’s act of wanton cruelty makes clear is that, under such a sovereign power as his, “rational politics will never have the last word, because the last word comes back to Voldemort’s pleasure.”

Milner calls attention to the parallels between the fictional character of Voldemort and the historical one of Hitler. In the case at hand, the parallel is between the “old families” of wizards and witches who help Voldemort rise to power in the story of Harry Potter, on the one hand, and the rich industrialists and other “conservative” elements of German society who did the same for Hitler in the 1930s, on the other. The “old families” in the Potter narratives are enamored of themselves because of what they perceive as the “superiority” their magic powers give them over the Muggles, and protective of the privileges that accrue to them through those magic powers. Just like the rich under the Weimar Republic, merely replacing “magic” with “money” and “Muggles” with “hoi polloi.”

Unfortunately, a sense of superiority easily follows upon the recognition that one has been given special powers, whether those powers be magical, mental, or musical. In turn, that sense of superiority brings in its own train defensiveness against anything perceived as challenging it. Thus, as Milner is quick to point out, the sense of superiority that goes with the recognition that one has unusual talents or gifts is nearly always accompanied by the fear of inferiority—of somehow not being worthy of having the very powers one finds oneself to have.

That is especially so when the special powers at issue are dispensed randomly, without their recipients having done or been anything special to deserve them.   However, that is exactly how it is with most talents, gifts, and powers, of course. They come to those to whom they come by accident, not as a reward for merit.

For instance, in the Harry Potter story Harry’s basic magical capacities—what makes him different from the Muggles who raise him after his parents have been killed during his infancy—are nothing he sought and acquired through his own efforts. He is born with them, inheriting them from his parents. Similarly, physical beauty, musical or other artistic talent, physical prowess, and the intelligence measured by IQ tests, are all based on natural gifts dispensed without regard to antecedent individual merit.

For that matter, so are most of the conditions that account for some individuals becoming aware of their special talents and capacities, whereas others never even come to know they have such talents.   Furthermore, even if circumstances conspire to let one become aware that one has some special gift, they must also conspire to grant one the opportunity to develop that gift. By accident, for instance, a child may learn she has a talent and taste for playing the cello, as our own daughter learned when she was 11. But then it is no less by accident that the same child may be provided with the resources needed to develop that talent and taste—as was, once again, our own daughter, who, when she found she had both a desire and a gift for playing the cello, also found herself living in a reasonably well-funded school system and with a set of reasonably well-paid parents, so that she could be provided the material and educational means to pursue that desire and develop that gift.

Having special powers does not make one somebody special. Such powers do not make those who have them superior to those who don’t. Nevertheless, those so endowed are subject to the temptation to become, as Milner puts it (page 112), “bearers of an ideology of superiority.” The specially gifted “can be seduced, not despite their exceptional talents, but by reason of those talents. Especially if they are ignored or mistreated by their entourage,” as those with special talents often are—again, not despite, but because of, those same talents, we might add, since any gift that makes someone “different” can easily evoke such defensive reactions from those around them, those not so gifted.

Once seduced to such an ideology of superiority, those with special powers can, like Voldemort, also easily succumb to the temptation to exercise those powers over others. They can, like him, come to take pleasure in imposing their will upon others, in the process convincing themselves of their right so to enslave those to whom they have come to consider themselves superior.

However, the underlying, ever-present doubt of their own superiority and their defensiveness about it, grounded in their awareness of having been and done nothing special to deserve their special gifts, continues to carry “a germ of vulnerability” even in the midst of wanton displays of “brutality and terror.” That sense of continuing, inescapable vulnerability sets up such self-styled masters, who delight in subjecting others to their will, to subject themselves in turn to yet others claiming mastery, and indeed to find relief and solace in such submission. For example, Milner writes (p. 113): “Let us suppose that an admired thinker, taken as the greatest of his generation, rallies to an ignorant, belching, hysterical tribune. [Think Heidegger and Hitler, of course!**] Simple folks are astonished; but on the contrary nothing is more normal: this thinker is doubtful of the admiration he knows surrounds him, until it confirms itself in the admiration of which he discovers himself capable.” Thus, imagined superiority doesn’t just lead one to enslave those one takes to be inferior to oneself; it also leads one to let oneself be enslaved in turn.

Against such temptations and perversions of gifts, talents, and powers, Milner suggests, only humility offers any real, final defense. Humility alone would accept gifts as just that—gifts: things for which thanks are to be offered.

Humility is not that easy a thing to come by, however.  It is itself a gift, in fact.

What is more, if that gift of humility itself is given, it is also no easy thing truly to give thanks for such a gift. There is a strong, constant tendency to turn thanks for the gift of humility into its very opposite, making of it no more than an exercise in even greater arrogance—the arrogance of thinking oneself humble, like the righteous man at the front of the temple thanking God for making him so superior to the disgusting tax collector beating his chest and weeping in the profession of his guilt at the back.

Above all, the way that one properly gives thanks for a gift is by accepting and using it. However, just what are the uses of humility? Perhaps Harry Potter can show us something of that, as well. At least it may be worth briefly reflecting upon what Milner calls “the Potterian narrative” with that in mind.

Although that is a direction of reflection that Milner himself does not explicitly pursue, what he says provides good clues. That is especially true of a line in the Potter films to which Milner calls his reader’s attention, one that occurs in more than one of the films and is spoken by more than one of the characters, about Harry and to him: “You have your mother’s eyes.” In explanation of that remark, Milner cites (on page 33) what one of the characters in the narrative says about the eyes of Harry’s mother, Lily Potter: they had the power to see the beauty in others, most especially when those others weren’t able to see any in themselves.

The use of humility is to open eyes like Harry’s mother’s, eyes that in turn open others, calling forth—which is to say creating—the beauty that is in them. The gift of humility is given not for the good of the humble themselves, at least not directly. It is given for the good of others. To give proper thanks for such a gift is to use it by practicing seeing through eyes like Lily Potter’s.***

Such eyes are simply the eyes of love—which brings me back to where I started this fragment, and which is also a good place to end it.

* On page 28 of his Harry Potter book, Milner says that so far he is unaware of any such charges being leveled against the Harry Potter stories, but then adds sarcastically that he “does not despair of learning one day that the Potterian narrative has been banned in part for blasphemy.” In these benighted United States, of course, at least a few such charges and such efforts have indeed been made.

** And appropriately so, at least by one reading of Heidegger’s relationship to Hitler and the Nazis—though not the only reading possible, nor necessarily the one finally to be preferred.

*** Lest one think that is an easy thing to do, one might want to go back and watch the Harry Potter films again. Or read Roger Nierenberg’s Maestro.

 

Shattering Wholes: Creatively Subverting the University and Other Mobs—A Fragment

The University is not an alma mater, but a milk-cow.   Not just scoundrels can milk it. Neither to believe it, nor to believe in it, nor to serve it, but to serve oneself to it, should be the order of the day. To place in doubt, though it be only by detour, one, several, or all, facile universals—that program is not easy, and not without risk. But being wise doesn’t preclude being sly. It is possible for the wise to shatter the mass.

— Jean-Claude Milner, “De l’Université comme foule”

 

When I finally sobered up a bit over a quarter of a century ago, one of the things that first hooked me on sobriety was the sheer freedom of it. No one but a happily abstinent alcoholic can experience the joy of the freedom sobriety brings with it.

One way my newfound sobriety freed me was in my driving.

I am not proud of having done so, but during the years of my drinking I often drove “under the influence.” Once I embraced sobriety I no longer had to contend with at least one constant anxiety that accompanies any dedicated drinker who drives after drinking, even if that drinker and driver feels no real anxiety about a possible accident. That is the anxiety that, however attentively one minds the road, one might not detect every lurking unmarked (or even marked) police car, and might get pulled over and risk arrest for drunk driving.

In fact, I got so hooked on the wonderful freedom of not having to care about being pulled over by the police that I even went through a period of challenging them to pull me over. Most of the time most of us (drinkers or not) will automatically slow down if we are driving along and suddenly notice a police car sitting somewhere up ahead. We have long grown accustomed to doing that even if we are not exceeding the speed limit at the time. So anxious have we become before the representatives of that which claims authority over us that we often relate to ourselves as criminals even when we are being the best-behaved, most law-abiding citizens. If we are indeed breaking the law by driving “under the influence,” that anxiety is exponentially heightened.

Well, for a while not long after I embraced the life of sobriety, when I would come over a hill on, say, the 50-mile drive along the interstate between my home and my office at the university where I taught, and spy a police car waiting down the road a bit, instead of slowing down I would actually speed up. What did it matter if I got pulled over for speeding? At most, I’d have to pay a few (maybe even quite a few) bucks for it, but so what? What did such trivia matter? It mattered nothing to speak of, so far as I was concerned in my newfound exuberance of abstinence. Because I was at last free of the guilt of being me, I was also free of any concern—or at least any crippling concern—for what “the authorities” might do to me.

Thus, sobriety not only set me free not to drink any more. It also set me free to break the law—with, in effect, a good conscience.

I’m glad to report that soon, so soon that I never even got a single speeding ticket from such doings, it dawned on me that sobriety also set me free not to break the law—and to do that, too, with a good conscience. Indeed, I saw how much more important the freedom not to break the law was than the freedom to break it. That was because the freedom not to break it gave me the chance creatively to subvert it.

One way of putting it is that I saw how obeying the letter of the law could be a skillful means for subverting the law’s whole spirit. That is the spirit of subservience. It is the spirit, that is, of spiritlessness.

The point is not subservience. It is subversion—or, rather, the freedom that makes skillful subversion possible.

*     *     *     *     *     *

Only in the freedom recovery brought me was I able clearly to apprehend something of my preceding bondage, and of just what role my addiction itself had played in it. For the powers that be, and that would have us serve them, addiction is a very socially useful tool. It puts us addicts in service to power despite ourselves, however hard we may try to make ourselves unserviceable. It puts us at the mercy of power. Especially in our consumer society today, addicts make perfect subjects: obedient to the laws even in their very efforts to disobey.

*     *     *     *     *     *

At one point in Gandhi’s Truth (New York: W. W. Norton and Co., 1969) Erik Erikson describes how challenging it was for Gandhi to maintain the vow of vegetarianism he made to his Jain mother when he left India for England, that land of ubiquitous beef and mutton, to study law in London. Erikson writes that, to preserve his vow, Gandhi had to learn to do something more than—and, indeed, completely different from—just resisting the temptation to eat meat. He had to learn, instead, to make not-eating meat itself into a definitive positive goal all on its own. As Erikson puts it (on page 145, emphasis in original), Gandhi “had to learn to choose actively and affirmatively what not to do—an ethical capacity not to be confused with the moralistic inability to break a prohibition.”

As I have pointed out before (in my Addiction and Responsibility, page 143), using that same reference: “The only proof against addiction in general is the sort of active and affirmative choice of ‘what not to do’ that Erikson mentions, the sort of choice involved in Gandhi’s vegetarianism or genuine calls to celibacy.” After noting (on the next page) that abstinence is “the general term for refraining from some common practice or pursuit,” I go on to observe:

What allows us to transform abstinence (whether from meat, from genital sex, from heroin, from child molestation, or whatever) from negative avoidance into positive embrace is this element of self-restraint at the heart of all abstinence. If we abstain from doing something merely because we fear the consequences of doing it, either on practical or moral grounds (Erikson’s “moral inability to break a prohibition” . . .), then we remain at the level of negative avoidance. However, once we begin to abstain from something for the sake of exercising our own self-restraint, we pass over from a negative abstinence to a positive one. From that point on, abstaining becomes its own, ever-growing reward.

 

Then it’s just for fun.

*     *     *     *     *     *

The citation from Milner with which I began this post is from the third of his “short political treatises,” L’universel en éclats: Court traité politique 3 (Verdier, 2014, page 114). The quoted lines are the closing ones of the fourth of six essays in that book. We might translate the title of the essay as “The University as Mob”—in the sense, for example, that organized crime is called “the Mob.” Foule, the French term Milner uses, is the same one used in the standard French translation of Freud’s Massenpsychologie und Ich-Analyse. Freud’s work provides Milner with a basis for his thinking about the University.

The standard English translation of the same work is called Group Psychology and the Analysis of the Ego. Etymologically, the German Masse and the English mass are the same word. Die Massen would be translated by “the masses.” The translation of Freud’s title by “group” can weaken his meaning. The French foule, which can be translated by “crowd,” “mob,” or “mass,” depending on context, comes closer.

What Freud is talking about in the essay at issue, as he tells us there, is not just any grouping of diverse individuals. Rather, what concerns him are assemblages that arise when diverse individuals come to identify themselves with some group, and with others insofar as they also so identify themselves. Above all, in his essay Freud is concerned with such assemblages insofar as they arise from diverse individuals coming to identify with one another insofar as each in turn identifies with one and the same leader, who comes through such identification to take over the role of what Freud calls the “ego ideal” for each individual.

Freud’s own discussion focuses on two “mobs” or “masses” as paradigms: the Army and the Church. Both are examples of what he calls “artificial masses.”   An artificial mass, as the name implies, is one that has to be brought about and then maintained by some external force—with all the hierarchical organization and directorial leadership that typically entails. The Nazi Party (NSDAP, from the German for “National Socialist German Workers Party”), the rise to power of which was eventually to drive Freud out of Vienna in 1938, seventeen years after his book about mass psychology and ego-analysis first appeared, would be another example, to go along with the Army and the Church.

Freud distinguishes such artificial masses from “natural masses,” which form spontaneously of themselves and, left to themselves, eventually dissolve. Often natural masses do not last for very long. We could use the mob that stormed the Bastille in 1789 to inaugurate the French Revolution as an example of such a natural mass of relatively brief duration. Another example would be the crowd that congealed in Cairo’s Tahrir Square and overthrew Mubarak in the Arab Spring of 2011.

*     *     *     *     *     *

Just while composing this post, I came across an interesting case of what strikes me as a creative subversion of one “artificial mass” (though we don’t normally think of it that way): an orchestra. On the third page of the arts section of the New York Times for September 18, 2014, is a piece by critic James R. Oestreich about conductor Roger Nierenberg bringing his “Music Paradigm” program to Lincoln Center, before “an audience of nursing directors from New York-Presbyterian Hospital.”

Mr. Nierenberg began (“without apparent irony,” writes Mr. Oestreich) by remarking: “An orchestra is a great place to model organizational dysfunction.” According to Mr. Oestreich, the conductor, 67, had only rehearsed the 26 string players he brought with him for an hour before the performance—of Samuel Barber’s Adagio for Strings—but had otherwise left them unprepared for what was going to happen next, which was that “he continued to rehearse them in public, running through snippets and discussing those with players and audience alike, drawing lessons in leadership from the work of the conductor and the interactions of the players.” In the process, says Mr. Oestreich, Mr. Nierenberg did indeed “model dysfunction,” by showing “how a performance might be adversely affected if the conductor micromanaged with his baton, eyes and gestures, or if the conductor were simply disengaged or fidgety.”

But then he went on to model something else—or at least so it seems to me, though Mr. Oestreich does not himself say this: He modeled a fine, creative alternative to the organizational dysfunction by way of bad leadership that he had already displayed, instead of having all the players focus their attention on his augustly conducting—albeit potentially micromanaging and/or disengaged and/or fidgety—self. Mr. Oestreich writes:

He had the players shift their focus to a particular colleague and attune their playing to complement one another’s. He had them perform with a conductor, then without a conductor and with eyes closed, to show how adept they were at intuitively adjusting to others on their own.

He had them start the piece at different tempos of their choice and alter tempos spontaneously, slowing down, perhaps, in midstream. The musicians were called on to speak as well as play, and audience members were occasionally drafted into action.

The watchword throughout was listening: players listening to one another and to the conductor, but just as much, the conductor listening to the players, how they sound, what they said.

This went on for some 75 minutes. Then the orchestra, with Mr. Nierenberg in place, performed the Adagio complete, beautifully, and departed to huge applause.

Later, toward the very end of his review, Mr. Oestreich quotes these lines from Mr. Nierenberg’s Maestro: A Surprising Story About Leading by Listening (Portfolio, 2009), an attempt to present his Music Paradigm idea in the form of a novel. Mr. Oestreich quotes the maestro of the novel as saying: “Every word I speak, every inflection in my tone of voice, every gesture is directed toward the goal of creating a feeling of community. A community simply acts faster, more intelligently, more creatively and with more joy than a group that is primarily focused on its leader.”

Since even before I ever started my own career as a teacher, I’ve always thought that the job of teachers was to make themselves unnecessary as soon as possible. To me, that’s always been a corollary of Nietzsche’s great line that students who always remain only students are repaying their teachers poorly. Taken at his own word (as well as Mr. Oestreich’s), in his Music Paradigm program Roger Nierenberg is in effect modeling how conductors in turn can—and should—model themselves on what I would call Nietzschean teachers.

What a wonderfully creative way to subvert the orchestra as mob! What a way to lead out of dependence on leaders!

What a way, too, to turn a mob into a community—but more on that in my next fragment.

Pulling Out of the Traffic: The Après-Coups After The Coup (3)

This is the third and final post of a series.

*     *     *     *     *     *

Third After-Shock: Flashes of Imagination

I do not, in the conventional sense, know many of these things. I am not making them up, however. I am imagining them. Memory, intuition, interrogation and reflection have given me a vision, and it is this vision that I am telling here. . . . There are kinds of information, sometimes bare scraps and bits, that instantly arrange themselves into coherent, easily perceived patterns, and one either acknowledges those patterns, or one does not. For most of my adult life, I chose not to recognize those patterns, although they were patterns of my own life as much as Wade’s. Once I chose to acknowledge them, however, they came rushing toward me, one after the other, until at last the story I am telling here presented itself to me in its entirety.

For a time, it lived inside me, displacing all other stories until finally I could stand the displacement no longer and determined to open my mouth and speak, to let the secrets emerge, regardless of the cost to me or anyone else. I have done this for no particular social good but simply to be free.

— Russell Banks, Affliction

 

What a great distinction! Making up vs. imagining! To “make up” is to confabulate, to cover, to lie. So, for example, do those who claim power over others make up all sorts of ways in which the usurpation of such power is necessary “for the common good” or the like. In contrast, to imagine is to make without making up. It is to create, which is to say to open out and draw forth sense and meaning. Making up is telling stories in the sense of fibs and prevarications. Imagining is telling stories in the sense of writing fiction. The former is a matter of machinations and manipulations; the latter is a matter of truth and art.

The passage above comes early in Affliction (on pages 47-48). The words are spoken in the voice of the fictional—which means the imagined—narrator of the novel, Rolfe Whitehouse. Rolfe is telling the story of his brother Wade’s life, and therewith of his own life, too, as he remarks in the passage itself.

*     *     *     *     *     *

A mere symmetry, a small observed order, placed like a black box in a corner of one’s turbulent or afflicted life, can make one’s accustomed high tolerance of chaos no longer possible.

— Russell Banks, Affliction (page 246)

 

Imagine, for example, a big black cube, surrounded by a neon glow, appearing in the sky over Oakland, setting off car horns and causing dogs to bark throughout the city in what soon ceases to sound like sheer cacophony, and becomes a new, hitherto unheard of harmony, in the sounding of which everyone is invited to join, each in each’s own way. Such a thing might all of a sudden make those who witnessed it no longer able to tolerate the chaos in which, they now suddenly see, they had been living till then, without even knowing it.

*     *     *     *     *     *

. . . facts do not make history; facts do not even make events. Without meaning attached, and without understanding causes and connections, a fact is an isolate particle of experience, is reflected light without a source, planet with no sun, star without constellation, constellation beyond galaxy, galaxy outside the universe—fact is nothing. Nonetheless, the facts of a life, even one as lonely and alienated as Wade’s, surely have meaning. But only if that life is portrayed, only if it can be viewed, in terms of its connections to other lives: only if one regard it as having a soul, as the body has a soul—remembering that without a soul, the human body, too, is a mere fact, a pile of minerals, a bag of waters: body is nothing.

— Russell Banks, Affliction (page 339)

 

Ever since my mid-teens I have kept a sort of philosophical journal. That is, I’ve kept notebooks in which I’ve jotted down passages from what I was reading at the time that made me think, along with some of the thoughts they brought to me, or brought me to. For various periods of varied lengths I’ve let that practice lapse since then, but I always pick it up again eventually. For the last few years, there have been no lapses of any duration; and, in fact, my blog posts almost always arise from things I’ve already written more briefly about in my philosophical journals.

On our recent trip to San Francisco to watch our daughter work with The Coup, I carried my current philosophical journal along. Here’s what I wrote one morning while we were still out in the Bay area.

“The Essence of Accident, the Accident of Essence.”

That came to me this morning as the title for a possible blog post in which I’d explore the idea that the essential—or, more strictly speaking, the necessary—is itself essentially accident. That “accident,” the “accidental,” is precisely “essence,” the “essential.”

That goes with the idea of truth as event (and not, as Milner would say, as possible predicate of an event, a pro-position—to give an accidental connection, via my current reading and other experiences, its essential due). It was itself suggested to me by the accidental conjunction of a variety of factors, coming together with/in our trip out here to see [our daughter] perform with “Classical Revolution” (the name of the “group” from which the quartet with her on cello came) at/in conjunction with/as part of The Coup’s performance on Saturday, two days ago. Among those diverse but accidentally/essentially (i.e., as insight-bringing) connected factors are: (1) my reading in Heidegger’s Überlegungen [Reflections: from Heidegger’s so called “Black Notebooks,” which only began to be published this past spring in the Gesamtausgabe, or Complete Edition, of his works] this morning; (2) my ongoing reflection and talk (with [my daughter] and/or [my wife]) about Saturday’s “Coup” event; (3) my noticing yesterday one of the stickers on [my daughter’s] carbon-cello case, which sticker has a quote from Neal Cassady: “Art is good when it springs from necessity. This kind of origin is the guarantee of its value; there is no other.” That third factor was the catalytic one: the “necessity” Cassady is talking about has nothing to do with formal rules or mechanisms, but is precisely a matter of the “accidental,” which is to say be-falling (like a robber on the road), coalescence into a single work/flash/insight of all the diversity of factors that otherwise are chaotically just thrown together as a simultaneous series, as it were. . . . There’s another major factor so far not recorded as such: (4) attending The Coup’s performance at the Yerba Buena Center for the Arts in San Francisco on Saturday. That is the real arch-piece/factor here.

Which brings me to another possible blog post, which [my wife and daughter] yesterday suggested I should do, before the one on accidental essence and essential accidentality suggested itself to me this morning. That is a post about the impact of Saturday night’s event [that is, The Coup’s Shadowbox].

 

As readers of this current series of three posts to my blog already know, of course, I took my wife’s and daughter’s suggestion. But I expanded upon it, doing three posts about my experience of The Coup, rather than just one. And I was also able to incorporate it with my idea for a post on accident and essence, which became my preceding post, the second of the three of this series.

Whether there is any necessity to all that will have to speak for itself. (I can confidently say, at any rate, that it is not art.) All I know for sure is that my journal entry, and this subsequent series of three posts, came about from the accidental conjunction of the four facts I mention in the passage above, taken from my philosophical journal. That entry tells the tale of that conjunction, from which tale alone derives whatever significance or meaning those otherwise isolated particles of my experience may have.

*     *     *     *     *     *

I’ve just recently begun reading Wendy Doniger’s The Hindus: An Alternative History (New York: Penguin Press, 2009), a book that has been on my list to read ever since it first appeared, and that I’m finally getting around to. So far, I’m still in the first chapter, which is an introductory discussion. One of the lines that already especially struck me is this (on page 8): “This is a history, not the history of the Hindus.”

One reason that struck me when I read it was that earlier the same day I’d noted a remark Heidegger makes in his Überlegungen (on page 420 of Gesamtausgabe 94) about the “idols” we worship today (which is still the same day, really, as when Heidegger wrote his remark, back in the Nazi period). Today, among the idols we are most tempted to fall prey to worshipping are, by his partial listing: Science (with a capital ‘S’: “ ‘die’ Wissenschaft”), Technology (with a capital ‘T’: “ ‘die’ Technik”), “the” common good (“ ‘die’ Gemeinnutzen”), “the” people (“ ‘das’ Volk”), and Culture (with a capital ‘C’: “ ‘die’ Kultur”). In all those cases, idolatry happens when we turn what are themselves really ways or paths of our life in the world with one another—including knowledges (“sciences”), know-hows (“technologies”), shared benefits (“common goods”), and cultivations (“cultures”)—into “ ‘purposes’ and ‘causes’ and ‘agents,’ all the forms and ‘goals’ of wheeling and dealing.”

When we restrict the term knowledge only to what can be con-formed to the one form we have come to call “science”—the paradigm of which is taken to be physics and the other so called “natural sciences”—and confine all other forms of knowledge to mere “opinion” (to which, of course, everyone has a right, this being America and all), then we become idolaters. In the same way we fall into idolatry when we try to make the rich multiplicity of varied ways of doing things conform to our idea of some unitary, all-embracing thing we call technology—especially insofar as the idea of technology is connected for us with that of science, to create one great, Janus-faced über-idol. No less do we fall into idolatry when we buy into thinking that there is any such thing as “the” one and only one universal “common good,” which itself goes with the idea that there is some one universal “people” to which we all belong, as opposed to a rich diversity of distinct peoples, in the plural, with no “universal” to rule over them all. In turn, the idea of “culture” as itself some sort of goal or purpose that one might strive to attain—such that some folks might come to have “more” of it than others, for example—turns culture itself, which includes all those made things (made, but not made up: so we might even name them “fictions”) we call science, and technology, and common goods, and the like, into idols. No longer cherished as what builds up and opens out, what unfolds worlds, opening them out and holding them open, such matters get perverted into service to the opposite sort of building, which closes everything down and shuts it away safe.

A few pages later in the same volume of his Überlegungen (on page 423), Heidegger mentions, in passing, “the working of an actual work.” That sounds better in the German: “die Wirkung eines wirklichen Werkes.” To preserve something of the resonance of the line in translation, we might paraphrase: “the effectiveness of an effective work”—keeping in mind that “to work” in English sometimes means “to bring about an effect” (as in the saying, “That works wonders!”). Or, to push the paraphrase even a bit further, we might even say: “the acting of an actual act.”

At any rate, in the remark at issue Heidegger says that “the working of an actual work” is that “the work be-works [or “effects”: the German is “das Werk erwirkt”]—when it works—the transposition [namely, of those upon whom it works] into the wholly other space that first grounds itself through it [namely, grounds itself through the very work itself, an artwork, for instance].”

What I have translated as “transposition” is the German term Versetzung, which comes from the verb setzen, “to place, put, or set.” Heidegger says that the work of the working work—the work of the work insofar as the work works, and doesn’t go bust—is to grab those upon whom it works and to set them down suddenly elsewhere. That is the shock of the work, as he calls it in “The Origin of the Work of Art,” from the same general period. It is the blow or strike, that is, the coup, that the work delivers to us, and in the delivery of which the work delivers us somewhere else. In the face of the work, at least when the working of that work strikes us in the face, then, as Dorothy said to Toto, we are not in Kansas anymore.

Such transposition is indeed shocking. It can be terrifying, in fact; and it is worth remarking that in German one word that can be translated as “to terrify” is Entsetzen, from the same root as Versetzen, “to transpose.” It is challenging to keep ourselves open to such terrifying transposition, such suddenly indisposing re-disposition of ourselves. We tend to close down toward it, trying to bar ourselves against it, withdrawing into safe places. Idolatry is no less than the endeavor so to enclose ourselves within safe places, rather than keeping ourselves open to such transpositions.*

*     *     *     *     *     *

From the beginning of my interest in them, I have known that the politics of The Coup is communist, at least in one good definition of that term (the definition Boots Riley, cofounder of the group, uses). As I have said before in this blog series, I am not certain about the complexion either of The Coup’s erotics or of their scientificity. However, I have now come to have it on good authority that The Coup are culinary anarchists.

The conjunction of the communist slant of their politics with the anarchist bent of their culinary persuasions gives me nothing but esteem for The Coup. On the other hand, that esteem would have been lessened not one bit if I had learned that they were, in reverse, culinary communists and political anarchists. The point is that neither in their politics nor in their food choices are The Coup into following the dictates of who or what lays claim to authority and power.

Adolf Hitler, who was no slouch when it came to claiming authority and power (all in the name of the common good of “das Volk,” of course), is just one of many claimers to authority from Aristotle on down to today who have cited for their own purposes this line from Homer’s Iliad: “The rule of many is not good, one ruler let there be.” Hitler was into that sort of thing. The Coup are into something different.

So is the Yerba Buena Center for the Arts in San Francisco, where my wife and I attended the world premiere of The Coup’s Shadowbox. Making good on the promise I delivered toward the start of my second post of this three-post series on the after-shocks of that attendance, I want to come back to the “Note from the Curators” that opens the brochure I also mentioned there, the one about the Shadowbox premiere. In it, the curators at issue write that YBCA “is in process of coalescing more consistently” with what they call “the energetic and aesthetic trajectories” of “local [artistic] ecologies,” especially the “local dance and music ecologies” of the Bay Area. By engaging in such a process, they write, YBCA, while “identifying itself as a physical place,” is also “aspiring to define itself as something more than brick and mortar.” YBCA is, of course, a physical place, and an imposing one at that, right in the heart of downtown San Francisco. More importantly, however, it “aspires,” as I read the curators’ note, to be a place that gives place to the taking place of works of art. As the two YBCA curators go on to write on behalf of the Center: “We aspire to hold firmly onto our institutional status while softening our institutional walls, locating the joy of less formal performance structure within our particularly austere architecture.” Pursuing that worthy—and, I would say, wonderfully anarchical, chaos-empowering—goal, they go on to write at the end of their note: “We plan to have hella fun** in this enterprise, to reposition participatory sweat as currency, to build momentum through the mechanism of witness, to celebrate the too often unseen, to make serious work of taking ourselves not too seriously while fixing our gaze on the exemplary unsung.”

Given that curators’ note, it strikes me that The Coup is right at home in such a venue as YBCA. So, for that matter, is Classical Revolution, which is the outfit (to use a word that seems to me to be appropriate to the case) from which came the quartet in which our daughter played one of her cellos as part of the world premiere of The Coup’s Shadowbox at YBCA recently—and whose website (http://classicalrevolution.org/about/) I encourage my readers to consult, to check my just expressed judgment.

Nor is YBCA the only place-opening place where the performances of place-makers such as The Coup—and Classical Revolution and the other groups with whom The Coup shared their Shadowbox spotlight at the recent premiere performance—are given a place to take place. Another such place in the Bay Area, one my wife and I also discovered thanks to our daughter during our recent trip to the West Coast, is The Revolution Café in San Francisco’s Mission District (http://www.revolutioncafesf.com/). That, it turns out, is the place where Classical Revolution was founded back in November 2006 by violist Charith Premawardhana, and where performances by Classical Revolution musicians take place every Monday night. There are many more such places, too, not only throughout the rest of the Bay Area, but also throughout the rest of the United States—and, I dare say, the whole, wide world.

To which I can only say: Amen! Which is to say: So be it!

 

 

*In reading Doniger’s words shortly after reading Heidegger’s, one thought that struck me was the question of whether Heidegger himself might not have succumbed to a sort of idolatry regarding “history,” Geschichte in German. Just as it is idolatry to think that there is any such thing as “the” common good or “the” people, isn’t it idolatrous to think that there is any such thing as “the” human story—“History,” with the capital ‘H’—as opposed to multiple, indeed innumerable, human stories, in the plural—“histories,” we might say, following Doniger’s lead? Yet Heidegger throughout his works talks about “die” Geschichte (which, by the way, also means “story” in German, in addition to “history,” the latter in the sense of “what happened,” was geschieht), not just multiple Geschichten (“histories” or “stories,” in the plural). Perhaps that was at play in his involvement with the Nazis, despite the fact that, as the passage I’ve cited shows, he knew full well that it was mere idolatry to think in terms of “the” people, “das” Volk, as the Nazis so notoriously and definitively did. That, at least, was the question that came to my mind when I read Doniger’s line so soon after reading Heidegger’s. Even to begin to address that question adequately would take a great deal of careful thought, at least one upshot of which would surely be, in fact, that it is necessary to keep the matter open as a true question—rather than seeking the safety of some neatly enclosed, dismissive answer.

** As out of touch with such things as I am, I don’t know whether that is a mistake or a currently fashionable way in some circles (or “ecologies,” if one prefers) of saying “have a hell of a lot of fun.” Whatever!

 

Pulling Out of the Traffic: The Après-Coups After The Coup (2)

Second After-Shock*: Accidental Strokes of Necessity

Art is good when it springs from necessity. This kind of origin is the guarantee of its value; there is no other.

— Neal Cassady

Our daughter has two cellos. To go with them, she has two cello-cases. Both cases are pretty well covered with various stickers and post-ups that have struck her fancy from time to time. When we went to San Francisco recently to watch her play the cello in a quartet representing Classical Revolution, as part of The Coup’s Shadowbox premiere, I noticed a new sticker on one of her cello cases. It had the lines above, from Neal Cassady.

That’s the same Neal Cassady who inhabited the heart of the Beat movement. Later he was not only “on the bus,” but even drove it. He drove the bus—namely, the psychedelic bus filled with Ken Kesey and his Merry Pranksters, the same bus Tom Wolfe eventually rode to fame in 1968 with the publication of The Electric Kool-Aid Acid Test, that foundational text of the “New Journalism” that already long ago became old hat.

I didn’t notice our daughter’s new (to me at least) Neal Cassady sticker till a day or two after we’d attended Shadowbox, and when I read Cassady’s remark it resonated for me with my experience of the concert. That resonance was deepened when, even later, I noticed a brochure our daughter had lying on a bookshelf—an advertisement for the concert we had just attended. Put out by the Yerba Buena Center for the Arts and by Bay Area Now, the brochure started with “A Note from the Curators”—Marc Bamuthi Joseph, YBCA Director of Performing Arts, and Isabel Yrigoyen, Associate Director of Performing Arts—to which I’ll eventually return. That was followed by “A Note from the Artist,” in which an explanation, of a certain sort, was given for titling the concert Shadowbox. It read:

Late one night in the skies over Oakland, a strange object appeared. A cube. Perfectly still, 200 feet in the air. A reflective black box, with a neon glow surrounding it. Thousands of people hurriedly got out of bed, or filed out of bars and house parties, or left the cash register unattended—to stand on the street and gaze at the sight. Dogs barked and howled, louder and louder, in various pitches and timbres until it was clear that there was a consistent melody and harmony to their vocalizations. The cube started trembling, sending out a low vibration that made the asphalt shake, windows rattle, and car alarms across the city go off. Thousands of car alarms went off in a tidal wave of honks, beeps, and bleeps until they formed a percussive rhythm that accompanied the dogs’ beautiful howling. From the cube, a kick drum was heard that tied it together. A spiral staircase descended from the box. Only a few dared enter. What those few experienced has been the subject of several poorly made documentaries, an article in US Weekly, and three half-assed anthropology dissertations. What you will see tonight is a re-enactment of that experience.

I suggest that the “re-enactment” at issue be taken in the sense of an enacting again, as legislators are said to re-enact a law that will otherwise expire, rather than in the more ordinary sense of a miming, an acting out, as a community theatre group might re-enact Tennessee Williams’ A Streetcar Named Desire or Walt Disney’s Dumbo, or as a bunch of court stooges might re-enact a crime in a courtroom at the behest of a prosecuting attorney, let’s say. The Coup’s Shadowbox doesn’t just represent or mime the enactment of community that seems to have proven necessary following the sudden, unaccountable appearance—“fictitiously,” of course (and I’ll eventually return to that, too)—of a strange, black cube hovering in the sky over Oakland one night.

After all, The Coup—although it may be erotically capitalist and even, for all I know, scientifically fascist—is “politically communist,” as Wikipedia has it; and what The Coup is trying to do in Shadowbox, at least if we are to believe (as I do) Coup front-man and co-founder Boots Riley, is to get everybody moving. And although the movement at issue may be a dance, it is a dance that even such dance-dysfunctional still-standers as myself can join into, as I also wrote about last time. It is a political dance.

Which brings me to Jean-Claude Milner.

*     *     *     *     *     *

According to Jean-Claude Milner, ever since the ancient Greeks, politics—which term is itself derived from a Greek word, of course: polis, “city”—has been a hostage of mimesis, which is to say of just the sort of acting-out, of play-acting, that “represents” the action it mimes without re-presenting it, that is, without committing that action again. The mimetic re-enactment of a murder as part of a courtroom trial does not culminate in a second murder. In the same way, politics as the mimetic re-enactment of whatever acts mimetic politics re-enacts does not result in any new enactments of those original acts.

The acts that mimetic politics re-enacts are acts whereby the polis or “city” itself–which for the Greeks meant, in effect, the place where all real, truly human be-ing took place, to use again a way of speaking I favor—is first opened and set up, then kept open and going after that. From the days of the ancient Greeks until relatively recently, in one way or another such decisive political acts were taken not by everyone together, but only by a few.

Of course, those few invariably found it useful to represent themselves as making their decisions for the good of “all.” As Milner points out, however (3rd treatise, page 58**): “It is always in the name of all that each is mistreated.”

For the few who did make the decisions, and then impose them on everybody else, to keep their claim to be acting for the good of all even remotely plausible it always also helped to get “the people”—as we’ve grown long used to calling those the rulers rule over, though the term is supposedly inclusive of both—to believe that they were somehow actually participants in the decision-making itself. Those who were being decided over needed to be kept down on the farm, as it were, regardless of whether they ever got a chance to see Paree or not. The decided-over needed to be given the impression that somehow they were themselves deciders—as President George W. Bush once in/famously called himself.

Milner argues that classically, among the ancient Athenians, the theatre, specifically as staged in the great public performances of tragedies, was the crucial device that permitted the governors to govern those they governed—that is, permitted those who exercised power over others to keep those others in line. It did so by regularly bringing together all those who counted as “the people”*** to witness re-enactments, by actors behind masks, of the heroic deeds that were taken originally to have defined the people as the very people they were (with running commentaries provided by choruses that took over the job of being mouth-pieces for “the people,” who were thereby relieved of any need to speak for themselves). By so convening to witness such re-enactments, the citizenry—the public, the people—actually constituted itself as such.

Furthermore, in being brought openly together as an audience to witness the re-enactments of the original, originating tragic acts of the great heroes of Greek tradition, religion, and mythology, the people were also brought, through empathy, to vicarious identification with those people-defining heroes themselves, and their suffering for the people’s sake. Through such identification the people as audience were allowed to process the terror and pity with which the mimetic re-enactments of tragedy filled them, achieving catharsis, as Aristotle observed. That also helped keep them down on the farm.

Precisely because they were assembled as such an otherwise passive audience for the spectacle of decisive acts re-enacted or mimed in front of them, the people were effectively distanced from the underlying definitive decisions and actions being so mimed. They were allowed to feel a part of what was being re-enacted before them, in the sense of being mimed or “acted out,” while they were simultaneously being distanced from all the underlying genuine action itself. They could marvel and weep as “destiny” unfolded itself in the actions being mimed before them, while being dispensed from the need to undergo that destiny themselves.

As Milner puts it (2nd treatise, page 59): “That distanced object, which in the crucial tradition of tragedy was called destiny, carries in politics, of course, the names: power, state, liberty, justice, or quite simply government.” What is more, he says, in our times the role that used to be played by tragic theatre is now played by—political discussion: the endless expression of opinions compulsively formed about political matters. Such discussion permits the discussants to think that they are really part of the political action, when in fact they are effectively distanced from it by the endless palaver about it. They are merely playing at politics, the way children play at being adults. They are “actors” only in that mimetic sense, not in the sense of decisive agents.

The difference, however, is that today, unlike in ancient Athens, everybody is reduced to the status of such a mere play-actor. That even includes the few who presumably, in the days of the ancient Greeks and for a long while thereafter, used actually to govern—to be genuine agents or “deciders.”

The reality today is simply this: No one decides, decisions just get made. Things of themselves get decided, as though things themselves are dictating the decisions—hence the name of Milner’s first short political treatise, which translates as The Politics of Things—but without anyone doing the actual deciding.

Accordingly, as I already indicated in my previous series of posts on “The Future of Culture,” no possibility of clearly assigning responsibility for decisions remains. Even more importantly, there are therefore no identifiable political pressure points, points where political pressure might be exerted in order to effect significant change. Everything just keeps on chugging along, with no one directing anything, despite how deluded some may still be into thinking they have some impact (for example, the President of the United States, whoever that may happen to be at any given time). The whole thing is no more than a dumb-show. Nobody is in charge of anything.

*     *     *     *     *     *

Sometimes, though, lightning strikes. Or suddenly a huge black cube with a neon glow appears in the sky. The Coup comes, and folks get moving.

*     *     *     *     *     *

Necessity is not causality. For necessity to emerge, in fact, the causal chain must actually be broken. Causality brings inevitability, Nietzsche’s “eternal recurrence of the same”—always the same old same old, never anything truly new under the sun (or the moon and stars at night). The necessity that Neal Cassady says is the only guarantee of real worth in art is not causal inevitability. It is the necessity, the need, of creativity—the need of a pregnancy brought to full term finally to burst and bring forth new life.

Any child born of such necessity always comes unexpected. The child always comes as an unexpected, un-expectable surprise, even for parents long filled with the knowledge that they are “expecting.” What can be expected is at most a child, one or another of the innumerably substitutable instances of the class of children, but never this child, the very one who so suddenly, so urgently, so imperiously, insistently comes into the world, and who, once come into it, simply demands, by its very being there, to be named.

Giving a name in the sense of what we call a “proper” name—which is to say “insofar as it is not just another name” (as, for example, dog, Hund, or chien are just three names for the same thing), that is, a name “insofar as it [names] not just anyone,” as Milner writes at one point (3rd treatise, page 75)—always “appears as an obstacle” to whatever or whomever claims to act in the name of “all.” What Milner means in that context is “all” taken in the sense of a closed totality, such as what is ordinarily called a “nation,” for example, the “borders” of which must be secured and protected. The singular, the radically unique, what escapes number, substitutability, and, therewith, any capacity to be “represented” by another, always constitutes a threat to all claims to special authority in the name of any such totalizing “all.”

However, universal quantification, as logicians call it, over “us” or over “human being”—as in “all of us,” or “all human beings”—need not be the move to any such totality as a “nation.” The “all” need not be taken in any such collective sense. Instead, the “all” can be taken in the distributive sense of “each and every single one,” so that “all of us” means each and every one of us as someone who has been given, or at least cries out to be given, a proper name, a name by which that singular one, and that one alone, no other, can be called.

The name by which the singular individual is called, however, calls that one as just that very one, and not as no more than an instance of what that one has in common with a bunch of other ones—for example, being black, white, brown, or yellow, young or old, educated or uneducated, employed or unemployed, American, Mexican, Honduran, Syrian, Iranian, or Indian. The bearer of a proper name—by which I would like above all to mean a name that is truly just that, a genuine name, and not a mere place-holder for a description—is no mere instance of a type, replaceable with any other. The bearer of a proper name is, rather, irreplaceable. (Regular readers of my blog might think of Fluffy, my daughter’s childhood pet guinea pig, for instance.)

*     *     *     *     *     *

As cacophonous as it may initially sound—like the sound of multiple dogs howling and multiple horns blowing in the night—to say so, it is only such an irreplaceable singularity that can be “necessary” in the way Neal Cassady says the authentic work of art is necessary. The necessity of artistic work is the same as the necessity of seizing one’s one and only opportunity to become who one is, when that opportunity suddenly presents itself. It is the same as the necessity of joining the fight against injustice into the reality of which one is suddenly given clear insight, or the necessity of giving oneself over completely to a suddenly awakened love. In short, it is the necessity of selling everything one owns for the sake of pursuing what one is given to see is priceless.

Necessity is order, to be sure. However, it is the order that comes from the unexpected emergence of connection between what theretofore seemed to be no more than a randomly thrown together bunch of discrete, isolated facts. Necessity gives birth to the cosmos. That word comes from the Greek for “ordered whole,” which originally meant “ornament,” which is why we also get cosmetic from the same word. Cosmos is the “all” of everything insofar as everything has been brought together into one coherent whole, like an ornament. Cosmos is the ornamental whole of everything emerging out of chaos itself, which is also a Greek word, one that originally meant something like “yawning gap.” Necessity is the origin of that genuine cosmos which is the coming into an ordered whole of chaos itself. Necessity is the origin of that order that is not imposed upon chaos from without, as though by some ruler, but that arises, instead, of necessity, from chaos itself.

Among the same ancient Greeks to whom we owe tragic drama, the emergence of cosmos from chaos was attributed to Zeus. However, Zeus, the god of thunder and the thunder-bolt, was not himself without genesis. King of the gods he might have been, but Zeus himself came from the chaos; and if he came to order the latter, he still came at its bidding, and from within. He came of necessity, which origin demonstrates the authenticity of his glory.

*     *     *     *     *     *

Coming from out of the Greek chaos, Zeus also came from out of the Greek imagination, that same imagination from which sprang all the gods of Greek mythology. The order that the Greek imagination attributed to Zeus was itself anything but an imaginary order. Nevertheless, its origin—and its guarantee of worth, which is also to say its real necessity—lay in the Greek imagination.

Imagine that!

*     *     *     *     *     *

I will try to imagine something of it, in my next post, which will continue—and, I think, end—this present series on the after-coups of The Coup.

* Only while writing this post did it occur to me to call the separate posts of this series not “Parts,” as I had it when I put up the series’ first post a few days ago, but “After-Shocks,” which is much more appropriate. So I went back and edited my first post a couple of days ago. First, I slightly changed the title. Originally, I had used après-coup, French for “after-shock,” in the singular. I turned that into the plural, après-coups. Then I changed the title of the first series’ post itself from “Part One” to “First After-Shock.” Thus, it was only by one of the smaller après-coups of the coup delivered to me by attending The Coup concert that I was coincidentally struck by the need to change my titles a bit. Appropriate indeed!

** Milner has published three “short political treatises,” all brought out in France by Verdier: La Politique des Choses is his Court traité politique 1 (2011), followed by Pour une politique des êtres parlants as treatise 2 (2011) and L’Universel en éclats as treatise 3 (2014). I will give references in the text of this post, when needed, by the number of Milner’s treatise, followed by the page number at issue.

*** That is, the “citizens,” which means literally the inhabitants of the “city” as such, the polis, the place where human being took place. So, of course, that left out slaves, women, and all the other others who simply didn’t count—including counting as fully human, since they were not “citizens,” not full-fledged inhabitants of the place human beings as such inhabit. As non-citizens, those other others didn’t need to be brought on board the city boat because they were simply subject to force, with no need to rely on subterfuge—conscious and deliberate or not, who cares?—to make them think they were free even while they were being coerced.

Pulling Out of the Traffic: The Après-Coups After The Coup (1)

 

This is the first in a new series of posts.

*     *     *     *     *     *

First After-Shock:  The Coup and I

Just a week or so ago, my wife and I flew all the way across the country from New Jersey, where we are summering, to California. We made the trip in order to attend Shadowbox, a new multimedia project put together by the hip-hop group The Coup, which was having its world premiere at the Yerba Buena Center for the Arts (YBCA) in San Francisco.

If you have no idea who The Coup may be, don’t feel alone. I had no idea either, until I attended the concert. Only then did I begin to get an idea of who The Coup may be—an ideational process very definitely still in progress.

We went to The Coup performance in order to see our daughter, a cellist who lives in northern California, perform in a string quartet from one of the other musical groups that The Coup had given a role in their concert. The Coup does that.

Indeed, that is one good place to start knowing who The Coup is—or at least it is for me, given what I saw that night. The Coup is a group of musicians that goes out of its way, whenever and wherever it performs, to share the spotlight its presence generates with other, lesser-known, more “local” groups. Rather than laying claim to all the glory for itself, The Coup would seem to glory in sharing the glory with others.

So who The Coup is, is a group that builds up groups. At least judging from Shadowbox, a Coup performance is the opening up of a place, a space, where groups of musicians, including The Coup itself, can play music together. At the event my wife and I attended, those who played music along with The Coup, on the three stages set up for the purpose, with The Coup on the center-stage, were what some of the YBCA promotional material describes as “up-and-coming Oakland experimental soul act Mortar & Pestle, new wave folksters Snow Angel, NOLA-style second line outfit Extra Action Marching Band, and neo-chamber orchestra Classical Revolution,” the group that included our daughter on cello. Also playing music were some “special guests,” including “longtime Riley co-collaborators and fellow revolutionary hip-hop torchbearers dead prez.” Then there was also “alternative puppet troupe Eat the Fish Presents” (which, as the name suggests, provided puppetry as well as music)—as well as various other musical participant-guests, of both Bay-area and broader provenance.

The same space The Coup opens for musicians to come and play along is also open to others, besides musicians—others who are also invited to enter and play along, each after each’s own fashion. In the case of Shadowbox, those “others” included visual artist Jon-Paul Bail, who created the noteworthy graphic-art murals that hung on all four sides of the performance space, and production designer David Szlasa, as well as comedian W. Kamau Bell. The “others” also included all the members of the audience who attended the two sold-out premiere performances on August 16. Most of that audience played along by dancing, hopping, jumping, writhing, gyrating, hand-lifting, gesticulating, waving, and in other ways noticeably moving around physically. Some did that more than others, of course. Then, too, there were other “others” who just stood there pretty much immobile. I was one of those other others (and I’ll return to me soon, as I always like to do). In one way or another, musicians or muralists, puppets (and puppeteers) or comedians, gyrators or still-standers, “artists” or “audience,” we all took part in the performance itself, becoming, at least for those few hours, a richly diverse community of our own.

Indeed, judging from my experience of Shadowbox, a Coup performance is precisely that: the creation of a space, an opening, where community can—and in one manner or another actually does—occur. Thus, one might say that a Coup performance creates a communizing space.

That is not a bad way to put it, “a communizing space.” Boots Riley, front-man and lead for The Coup, who co-founded the group back in the beginning of the 1990s, self-identifies as a “communist.” According to Wikipedia (http://en.wikipedia.org/wiki/The_Coup), The Coup itself is “politically communist.”*

The end of the Cold War had at least one good side-effect: It made it possible even for Americans to use the term “communist” in a positive way and still find wide popularity, as the success of The Coup attests. The Wikipedia entry for The Coup also tells us how the “communism” at issue for Riley and The Coup is to be defined. It quotes Riley as saying: “I think that people should have democratic control over the profits that they produce. It is not real democracy until you have that. And the plain and simple definition of communism is the people having democratic control over the profits that they create.”

Not a bad definition. Not a bad idea.

Correlated to that idea is something else Riley said at a couple of points during Shadowbox itself, when there would be a pause in the music and other action and he would briefly just speak into the microphone. What he said was that, when we find ourselves part of a movement—such as the Occupy movement, in which Riley himself has played a part, especially in the Oakland area, or the “communist” movement to give “the people” themselves control over the profits their own efforts create—we no longer act and live just as isolated individuals, but as parts of a whole, of an “us” in effect.

One of the times he said that sort of thing, Riley added that such movements are the genuine way to address the real problems that we face, which, he affirmed, are not just a bunch of isolated, individual problems. “Our” real problems are not just my problems, plus your problems, plus his, her, and their problems, as our global-economy simulacrum of a culture would have us believe (my words there, not his—though I doubt he’d spit them out in disgust). Rather, “our” real problems are group problems, problems that “we” have together (and that we therefore must also address together, in “movements”).

So the message he was delivering nicely matched the delivery-system he was using to deliver it, that is, the delivery-system of The Coup’s Shadowbox project itself, which was such an inclusive, “all of us” sort of thing, as I’ve tried to make clear, and as it so powerfully struck me as being. That effectively effected creation of a new body of which I experienced myself to be a part was a coup The Coup strongly delivered to me, at least. Yet at another of the times Riley said the thing about movements, a bit earlier that same night, I seemed to receive a counter-coup, as it were. The way I was struck by something else he went on to say at that point ended up in-cluding me personally as part of the “us” of the community at/of the performance only, paradoxically, by ex-cluding me. I’ll try to explain.

What Riley said on the occasion in question was to the effect that trying to address the real problems we all face together by trying to maintain our perceived, precious “independence” in refusing to let ourselves become involved in any “movements,” was “like going to a Coup concert and not dancing.” The only way you really could “attend” a Coup concert, he said, was by joining the dancing. Otherwise, you weren’t really in attendance at all. In my words: Your body might have been there, but you weren’t.

My problem, however, is that, you see, I don’t dance. Often, no-longer-drinking drunks such as myself share with one another how they never danced when they were sober, but that once they belted a few drinks they were disinhibited enough to do so. Well, as I will often tell such other now-abstinent drinkers, not only did I not dance when I was sober. I also did not dance when I was drunk. (“But when I drank, I didn’t give a shit,” I always like to add.)

Well, I could tell you that on Saturday night, August 16, in the Yerba Buena Center for the Arts during The Coup’s Shadowbox, when Boots Riley said what he did about how the only way to attend a Coup concert is to join in the dancing, he inadvertently threatened my nearly 28-year sobriety! I could tell you that. But I won’t. It would be a lie.

What he did do, though, was to challenge (say “threaten,” if you like, it doesn’t matter) my sense of being part of the “we” who were all there together in the Yerba Buena Center attending the Coup concert that night. If you have to dance in order really to attend a Coup concert, then it seemed I was not in attendance, despite my physical presence, my mental presence, and even my shock from the coup The Coup was delivering to me. That left me uneasy and uncertain, since my desire, grounded in my multidimensional presence to the presentation that night, was to be one of “us,” and not just some isolated, dis-involved “me.”

My uneasiness and uncertainty did not last long, however. It found itself dispelled when, a bit later, Boots Riley spoke again about “movements,” and said what I recounted first above—the business about our problems really being our problems, a matter of the group, and not just the personal, individual problems of each one of us. Hearing him say that, and appreciating its truth, suddenly gave me the insight that my own lifelong, total, immobilizing disability/disinclination/dis-capacity to dance—and therewith my very isolation and exclusion—was, if you will, not my fault. I was not to blame for it. My problem was in that sense not just my problem any longer, it was our problem.

I believe that I’ve shared before on this blog a line I treasure from the literature of Narcotics Anonymous. NA is a Twelve-Step group for which I lack the qualifications for membership, insofar as narcotics were never my thing at all. Nevertheless, I easily identify with NA members and have nothing but respect for NA as a group. In fact, it would not be at all off the mark to say that, when it comes to NA, I feel myself to be of the group even if I am not in it, as it were.

That is itself an example of what I’m trying to describe about my non-dancer’s relation to the dance requirement for membership in the group/community constituted by and in participation in The Coup concert I’m addressing: an example of how ex-clusion itself, properly undergone, can be a vehicle for a new, more inclusive in-clusion of its own. But that’s not why I brought NA and its literature up. Rather, I brought it up because of the line from that literature that, as I’ve already mentioned, I treasure. In that line the NA member-authors say, with regard to their being hooked on narcotics, “We are not to blame for our own addictions, but we are responsible for our own recovery.”

Well, what struck me when Boots Riley made his remarks about how our problems are group problems, and not just individual problems, was that I was not to blame for my own dance-disability, but that I was responsible for my own recovery from it.

Recovery from dance-disability does not consist in all of a sudden miraculously acquiring the capacity to go out and dance, dance, dance the night away. If it did, then it would not be my own responsibility at all. It would be God’s responsibility, or the responsibility of the dance-doctors, or of whatever other higher authority took care of such things, if there is any such authority. Recovery from dance-disability consists of making and then keeping the decision not to let one’s inability to dance exclude one from the party. There’s more than one way to dance, and the challenge to those who would recover from dance-disability is to find how to make not-dancing into its own way to dance.

As it happens, what came to my mind on the recent evening of August 16 as I stood listening to The Coup in the Yerba Buena Center for the Arts in San Francisco, and I heard Boots Riley remind us that our problems are really our problems, was not that fine line from NA. It was, instead, Russell Banks’ 1989 novel Affliction.

I used Banks’ later novel Cloudsplitter, about the abolitionist John Brown, in the final post of the preceding series on this blog. I finished writing that post just a day or so before my wife and I took off for San Francisco to attend The Coup’s concert. (Our internet connection went down just before I finished writing the post, so even though it was already written before we left, I did not actually post it until just the other day, after we got back to New Jersey and were able to get our internet service back.) Because of using Cloudsplitter in that preceding post, I had decided to go back and read Affliction, which I’ve meant to read for years, but never got around to till now. So I downloaded the e-version of the book and took it with me to read while we visited California and attended The Coup’s concert.

Affliction is the story of Wade Whitehouse, a 41-year-old man. Interestingly, Boots Riley is roughly the same age now, by the way, so perhaps my mental pairing up of the two on August 16 at the concert was in part affected by that analogy. At any rate, Wade Whitehouse is an American male who is afflicted by a not uncommon American male condition. He comes from a home with an abusive, alcoholic father and a passive, acquiescent mother, and hasn’t a clue about how to own his own feelings, ambitions, aspirations, or, in short, life. Wade is robbed of himself, through no fault of his own. He is no more to be blamed for his affliction than narcotics addicts are to be blamed for their addictions.

Nor does the narrow, rural New England world in which he lives offer Wade any real possibility of escape. Indeed (and this is really the same thing, just put a bit differently), it offers him no real possibility even to become fully aware of his own condition. Thus—unlike narcotics addicts fortunate enough not only to bottom out into desperation, but also to find a new option, unavailable to them until then, through NA or the equivalent—Wade is never given so much as the opportunity to assume responsibility for recovering from his afflicted condition.

As a result, he gets locked into repeating the very cycle of violence and abusive parenting (only with differences, of course, as is always the case in such cases) that he so longs to escape. But there is no escape for him, and Banks’ novel (at least read at the surface level, which is what I am doing in my account of it here) chronicles his relentless spiraling downward into violence and murder.

Wade Whitehouse came to my mind on the night of August 16, just a bit over a week ago, when I was feeling so left out of things at the Coup concert and heard Boots Riley talk about our problems being group problems, and not just individual problems. Hearing his remarks triggered my memory of Russell Banks’ novel, which so caringly details how Wade Whitehouse’s problems were, just as Riley was saying, not just Wade’s individual problems, but were generated by the whole constellation of factors that made up Wade’s world: They were “group” problems.

Unlike Wade Whitehouse, who was offered no options, I have found myself offered options for recovering from the afflictions with which I have myself been beset. I have been offered such options more than once, for more than one affliction—or at least for more than one manifestation of my affliction, if there is really only one, in the final analysis. On the night of August 16, 2014, I was offered an option for recovering from the affliction of my radical, total, and irremediable dance-disability. I was shown that my very not-dancing could become, if I would have it be so, a dancing of its own.

For that, I would like to thank The Coup.

*     *     *     *     *     *

Next time, Part Two.

* I must confess I’m not sure whether Wikipedia means that the politics of The Coup is communist, or wants to suggest that there are non-political ways of being communist, such that The Coup might not be communist in those other, non-political ways (maybe The Coup is politically communist but erotically capitalist, for example—whatever that would mean). Either way, the remark raises some questions worth a thought or two—questions that could be summed up under two, using two richly ambiguous expressions: Just what is the politics of art? And just what is the art of politics?

Pulling Out of the Traffic: The Future of Culture (5)

This is the final post in a series of five under the same title.

*     *     *     *     *     *

In my lifetime up to that point and for many years before, despite our earnest desires, especially Father’s, all that we had shared as a family—birth, death, poverty, religion, and work—had proved incapable of making our blood ties mystical and transcendent. It took the sudden, unexpected sharing of a vision of the fate of our Negro brethren to do it. And though many times prior to that winter night we had obtained glimpses of their fate, through pamphlets and publications of the various anti-slavery societies and from the personal testimonies given at abolitionist meetings by Negro men and women who had themselves been slaves or by white people who had travelled into the stronghold of slavery and had witnessed firsthand the nature of the beast, we had never before seen it with such long clarity ourselves, stared at it as if the beast itself were here in our kitchen, writhing before us.

We saw it at once, and we saw it together, and we saw it for a long time. The vision was like a flame that melted us, and afterwards, when it finally cooled, we had been hardened into a new and unexpected shape. We had been re-cast as a single entity, and each of us had been forged and hammered into an inseparable part of the whole.

. . . .

Father’s repeated declarations of war against slavery, and his asking us to witness them, were his ongoing pronouncement of his lifelong intention and desire. It was how he renewed and created his future.

— Russell Banks, Cloudsplitter: A Novel

 

There is a way of building that closes down, and there is a way of building that opens up. Correspondingly, there is a way of preserving that checks locks and enforces security, and there is a way of preserving that springs locks and sets free.

Cloudsplitter is Russell Banks’ fine 1998 novel of the life of the great American abolitionist John Brown, as told through the narrative voice of Brown’s third son, Owen. What Banks/Owen describes in the passage above is a building and then a preservation of the second sort, the sort of building that opens up, then the sort of preservation that keeps open.

The passage comes from relatively early on in the long novel, in the second chapter. What is at issue is at one level a very minor, everyday thing (everyday, at least, in 19th century American families such as John Brown’s): a shared family reading, begun by John himself, then continued by other family members in turn, each reading aloud from the same book, passed on from one to the other.

What the Browns are reading at that point in the narrative is a book recounting the horrors of American slavery. The book does that very simply and straightforwardly. It just presents page after page of the contents of ads of a type often placed, at the time, in newspapers—throughout the slave-holding states, at least. They are ads in which property owners who have suffered thefts of a certain kind solicit help, mainly for monetary reward, to track down and retrieve their stolen property. The property at issue consists of human beings owned as slaves, and the thefts at issue have been committed by that property itself—that is, by slaves who have tried to steal themselves away from their lawful owners, by running off. In ad after ad, slaveholders detail the scars that they have inflicted on the faces, backs, limbs, and torsos of their slaves. The slave-owners catalogue such traces of whippings, cuttings, burnings, and other abuses they have inflicted on their slaves, in order that those traces might now serve, in effect, as brand-marks by which their (self-)stolen goods can be identified, in order to be returned, it is to be hoped, to their rightful owners.

The experience of listening together to such genuinely reportorial reading during the evening at issue galvanizes the Brown family into a “new body,” to borrow an exactly apt term from Alain Badiou’s seminar on “images of the present times” (in which at one point he cites Cloudsplitter, and praises Banks).   Until that uneventful event of such simple family reading of an evening, the Browns had been, despite all family relations, affection, and sharing, no more than a collection of individuals—just instances of a family named “Brown,” as it were. “It took,” as Banks has Owen tell us in the passage above, “the sudden, unexpected sharing of a vision,” a vision “like a flame that melted us,” truly to meld them together and “re-cast” them “as a single entity,” in which each one of them “had been forged and hammered into an inseparable part of the whole.”

In the quiet of their family kitchen, their shared reading that evening brings the Brown family—brings that family as a whole and in each of its family members—to a point of decision. In the fire of that experience the family, each and all, is brought to decision; it gets decided as it were. That night, the family gets resolved. And so it will remain, one way or another.

Lapses will continue to remain possible, of course. In fact, they will all too often actually occur. One or another family member—now Owen, now one of his brothers or sisters, now even “the Old Man,” John himself—will lose his or her resolve, becoming irresolute again. But that will no more rescind the resolution than the breaking of a marriage vow rescinds that vow.

Broken vows and lapses in resolve are betrayals and acts of infidelity. As such, they do not cancel out the original vows or resolutions. Rather, they call for acts of contrition, repentance, and expiation, and, above all, a return to fidelity—that is, they call to renewed faithfulness to the vow or resolve that was betrayed.

*     *     *     *     *     *

In Toward a Politics of Speaking Beings: Short Political Treatise II—Pour une politique des êtres parlants: Court traité politique II (Verdier, 2011), page 56—Jean-Claude Milner cites the 1804 remark, often attributed to Talleyrand, “It’s worse than a crime, it’s a mistake.” As Milner points out, a “mistake” is, at most, a significant “error in calculation.” It is therefore the sort of thing that may indeed sorely need to be corrected. However, unlike a crime, “it does not need to be expiated.”

*     *     *     *     *     *

“We blew it!”

That’s said by Peter Fonda’s character in Easy Rider, the classic 1960s buddy-movie about two hippies’ cross-country motorcycle journey together—costarring Dennis Hopper, who also directed the film. Fonda delivers the line at the pair’s final campfire, after the two have done their thing in New Orleans for a while. It comes not long before Hopper’s character gets blown away with a shotgun by a Southern cracker in a pick-up.

The moral of the story? Don’t blow it—or you’ll be blown away!

*     *     *     *     *     *

Exactly how the two hippie bikers in Easy Rider “blew it” is open to diverse interpretations. However, by any interpretation worth considering, “blowing it”—whether done by the characters played by Hopper and Fonda in that movie, or by the members of the Brown family in Banks’ Cloudsplitter, or by whomever in whatever circumstances—is not a matter of an error in calculation. It is no omission or oversight in cost-benefit analysis, no limitation in one’s capacities for “rational decision-making.” In short it is not a mistake.

It is a crime.

“Blowing it” is not necessarily—or even in any important case—a crime in the sense of a violation of any law of any such state as Louisiana. It is a crime, rather, in the sense of a breach of faith, a failure to keep faith—above all, a failure to keep faith with oneself. As such, it cries out not for correction, but for expiation.

*     *     *     *     *     *

The institution of American slavery was a crime, not a mistake. It was a human betrayal of humanity, not an error in calculations or a failure in “rational decision-making.” By the passage I have cited from Banks’ novel, John Brown’s third son Owen and the rest of John Brown’s family were brought together—which should itself be read in a double sense here, to mean both that the whole bunch of them were brought, and that the bunch of them were brought no longer to be just a bunch, but to be a true whole—by an insight into the reality of that institution, American slavery.   Given such insight by nothing more than the everyday event of an evening’s family reading, they were thereby brought together to a point where they no longer had any choice but to join the family patriarch in his declared war against that criminal institution. They either had to join John Brown, the family patriarch, or betray him—and, along with him, themselves.

To find oneself at such a point of decision—but what am I saying? To be brought to such a point of decision is precisely to find oneself! So I should have said that to find oneself at last, by being brought to a point of decision, is precisely in such a way to be given no choice. At such a point, one “can do no other” than one is given as one’s own to do, as Luther said at the Diet of Worms in affirming his continuing defiance of the Church hierarchy and its self-claimed “authority.” One can do no other at such a point than what one finds oneself, at and in that point, called to do.

If one does not heed that call, then one lapses back into loss of oneself, lost-ness from oneself, again. Thus, as I have written in this series of posts before, at a point of decision, one is not given two equally passable options, between which one must choose. Rather, one is given one’s one and only opportunity, the opportunity at last to become oneself, to make one’s life one’s own.*

When one is faced with such an opportunity, such a moment of clarity, such a point of decision, if one even bothers to “count the costs” before declaring oneself, then one has already declared oneself—already declared oneself, namely, to be a coward and a criminal. By counting the costs before one makes up one’s mind in such a situation, at such a point, one has already lost one’s opportunity, and, with it, any mind worth keeping, no matter how “rational” that mind may be. One has blown it.

*     *    *     *     *     *

In 1939 Random House published a new novel by William Faulkner. Faulkner had given his work the title If I Forget Thee, Jerusalem. In the novel Faulkner interwove two stories, each of which could perfectly well stand on its own, as each—one of the two, especially—has often been made to do, in anthologies and other later re-publications of Faulkner’s works. One such potentially autonomous story is called “Wild Palms,” and the other one, which is the one most often published just by itself alone, is called “Old Man.”

Faulkner took the title he gave the combined whole of the two tales from Psalm 137 (136 in the Septuagint numbering), which sings out Israel’s own vow not to forget Jerusalem during Israel’s long captivity in Babylon. It is an intemperate psalm, declaring an intemperate vow, which is intemperately sealed by a prayer that the singer’s right hand might wither, and the singer’s tongue cleave to the roof of the singer’s mouth, if that vow is not kept. The psalm then intemperately ends by calling down wrathful vengeance on the Babylonians, blessing those of that city’s enemies who might one day, as the psalmist fervently hopes they do, seize the Babylonians’ children and bash their brains out on the rocks.

Especially today, decent, rational folks are shocked by such sentiments.

They didn’t seem to shock Faulkner, however. Or, if they did, it would seem to have been with the shock of insight and recognition, since he not only chose a crucial line from the psalm as the title to his double-storied 1939 novel, but was also chagrined—and protested, to no avail—when Random House, on the basis of its own cost-benefit analyses no doubt, made the quite rational decision to refuse to bring the book out under the title Faulkner had given it. Instead, they took the title of one story (with ironic justice, it turned out to be the title of the story that has subsequently “sold” far less well than the other, in the long run, judging from subsequent re-printings/anthologizings) and published the whole as The Wild Palms. Not until 1990, twenty-eight years after Faulkner’s death, did an edition come out under the title Faulkner originally chose.

The Wikipedia entry for If I Forget Thee, Jerusalem (http://en.wikipedia.org/wiki/If_I_Forget_Thee,_Jerusalem) characterizes the novel as “a blend of two stories, a love story and a river story,” identifying “Wild Palms” as the former and “Old Man” as the latter. However, the entry goes on to point out that “[b]oth stories tell us of a distinct relationship between a man and a woman.” Indeed they do, and I would say that, in fact, both are love stories—only that one is the story of a love kept, and the other the story of a love thrown away. Or perhaps it would be more accurate to say that one, “Wild Palms,” is the story of a decision to love, a decision boldly taken and faithfully maintained, regardless of the cost, whereas the other, “Old Man,” is the story of refusal to decide to love, and a cowardly clinging to security instead.   The first is a story of love enacted; the second, a story of love betrayed.

I would say that, read with ears tuned for hearing, the Wikipedia entry brings this out very nicely, actually, in the following good, short synopsis:

Each story is five chapters long and they offer a significant interplay between narrative plots. The Wild Palms tells the story of Harry and Charlotte, who meet, fall in forbidden love, travel the country together for work, and, ultimately, experience tragedy when the abortion Harry performs on Charlotte kills her. Old Man is the story of a convict who, while being forced to help victims of a flood, rescues a pregnant woman. They are swept away downstream by the flooding Mississippi, and she gives birth to a baby. He eventually gets both himself and the woman to safety and then turns himself in, returning to prison.

To be sure! Whoever refuses the opportunity to love does indeed return to prison!

That’s just how it is with decisions, whether they be decisions to love, or to take to the streets in protest of injustice, or to hole oneself up in a room and read, read, read, in order to write, write, write—or, perhaps, the decision never to forget.

Faulkner’s story of Harry and Charlotte’s decision to love one another whatever the cost, especially when that story is read in counterpoint to his story of the old man who prefers the security of prison to the risks of love (and who is made “old,” regardless of his chronological age, precisely by so preferring), shows that such decisions can have serious, even fatal, consequences. Yet it also shows, even more strongly, that only an old coward would count such costs before deciding to love, when the opportunity to do so presents itself.

Most of us most of the time are old cowards. Far too often, all of us are. None of us never is. That, however, is no excuse.

*     *     *     *     *     *

Making a genuine decision is something very different from choosing between brands of beer, political parties, or walks of life–all of which are subject to the sorts of cost-benefit analysis that pertains to what is, in our longstanding “Newspeak,” called “rational decision-making.” In sharp contrast, making a genuine decision is nothing “rational.” Rather, it is taking one’s one and only chance to live, and to do it abundantly—rather than just going on surviving, hanging on and waiting around until one can finally “pass away.”

It is just because that is the nature of genuine decision that there is always an ongoing need, past the point of decision, after one has decided oneself, from then on to continue regularly admonishing oneself to stay faithful to one’s decision, to keep one’s resolution.   For the same reason, it is essential, having made a decision, to continue regularly to ask for, and accept, whatever help one can get from others to keep to one’s decision—and, in turn, willingly to help others who have joined one in one’s decision to do the same: to “keep the faith,” as the old saying goes. **

It was in just such a way, “in repeated declarations of war against slavery,” and in repeatedly “asking [his family] to witness them,” and thereby making “ongoing pronouncement of his lifelong intention and desire,” his life-defining intention and desire, that John Brown “renewed and created his future,” as Banks has Brown’s son Owen say at the end of the passage cited above. So must it be not only for John Brown, but also for us all. Only with such help and such repetitions of our own declarations of whatever may demand such declaration from each and all of us, can we have any hope of “renewing and creating” our own future.

*     *     *     *     *     *

Since the ancient Greeks, the work of art has been taken as a paradigmatic cultural product, in the sense that I have been giving that latter expression. In 1935, when he first delivered his lectures on “The Origin of the Work of Art,” Heidegger argued that the work of the work of art, as it were—what the artwork does, we could put it—is to bring those on whom it works to a point of decision, to use my way of articulating it. The work of art, says Heidegger, this time still using his own terms, opens up a world, and sets that world up by anchoring or grounding it in the earth. The artwork is the very place where that takes place. As such, it is not interchangeable with any other place. Rather, it is absolutely singular, utterly unique: something truly new under the sun, something the like of which has not been seen before, nor will ever be seen again. It is one of a kind—namely, one of that very kind of kind that is really no “kind” at all, since it has only one “instance,” to use one of my ways of speaking from earlier in this series of blog posts.

The shock of such a work as such a place, the shock that such a work, such a place, is there at all, calls upon those whom it shocks to make a decision. That’s the work of works of culture, the produce of cultural production. So shocked, one can enter into the work of the work itself—as John Brown’s family in Banks’ novel entered into the work of John Brown (though he was no work of art, to be sure), when that family was suddenly shocked into seeing reality. Or one can decline so to enter into such work—and, in so declining, enter, despite oneself, into one’s own decline.

If one does not decline, but joins the work in its work—as John Brown’s family joined him in his—then one preserves the work. That does not mean, as Heidegger insists it does not, that one takes the artwork and locks it away safe somewhere. Rather, one preserves the work by doing what one must to keep open the world that the work first opened up. That is, one preserves the work of art by persevering in the work of that work, regardless of whether that work of art itself even continues to be around. Only in that way does one truly keep or preserve the work.

That includes keeping or preserving it “in mind,” that is, remembering it. To remember a work of art properly—that is, as the very work one seeks to remember—is not recurrently to call up any “memory-images” of it that one keeps locked away in one’s memory banks somewhere, whether those banks are in one’s brain or in one’s smart-phone or wherever they may be. Rather, properly to remember a work of art is to keep open the world that the work first opened, or at least open to it.

In just the same way, to stick with the analogy I’ve been using, those who preserved John Brown’s memory, once he was arrested by Federal forces and then hanged by the state of Virginia, did so not by erecting memorials to him at Harper’s Ferry or anywhere else. Nor did they preserve his memory by recurrently spending time looking at old pictures or other images of the man himself. Rather, those who preserved John Brown’s memory—those who did not forget John Brown’s body as it lay “moldering in the grave,” as the song says—did so by continuing to carry on the very war he had declared against American slavery. Well, just as John Brown continued to call people to decision even after his death, so can works of art call those who encounter them even after they have ceased to be at work themselves.

What is more, John Brown can continue to call us to decision even today. Even now—long after John Brown’s body has moldered completely away, and nearly as long since the war he waged morphed into the Civil War that eventually brought the institution of American slavery as he knew it to an end—we can still be moved by being reminded of him. It no longer makes sense to speak today of joining John Brown in his war against the institution of American slavery, of course. The world in which that did make sense is no longer our world today. Nevertheless, we can still continue to be moved (even moved to join new wars declared in our own day) by the memory of John Brown—moved that way by reading Russell Banks’ retelling of Brown’s story today in Cloudsplitter, for example, or perhaps by visiting memorials to the sacrifice he and the others who carried out the raid at Harper’s Ferry made.

In just the same way, the world that was opened up by and in the works of art of the ancient Greeks has been dead for a long time now, far longer than John Brown. Yet we can still be moved by visiting the remains of such works in the museums of our own day. The world those works themselves opened up is no longer there for us to keep open, any more than the war John Brown declared against the institution of American slavery is any longer one in which we can enlist. But being reminded that there once was such a world, just as being reminded that there was once such a war as John Brown’s to fight, can still bring us to a point of decision of our own, a point where we are at last given our “one opportunity,” as Knausgaard was once given his. Even reminders of long dead worlds brought to us by mere fragments of what were once genuine works of art, genuinely still at work as works in opening up such worlds, can deliver to us the message that an “archaic torso of Apollo,” according to Rilke in a poem of that name, delivers to those with eyes to see who visit it in the museum—the message, “You must change your life!”

The future of culture is dependent upon no more, and no less, than keeping alive the memory of such works. It does not even depend on the possibility that new works of such a kind-less kind will continue to be created. Even if they are not, the future still has culture—and, far more importantly, there still continues to be the future “of” culture, the future culture itself opens and holds open, which is to say the future as such—just so long as we keep on doing the work of preservation. There will be a future of culture so long as we truly do, but only so long as we truly do, “never forget.”

If we don’t remember, and do forget, then our right hands will wither, and our tongues will cleave to the roofs of our mouths, regardless of whether we pray it may be so or not.

*     *     *     *     *     *

In my next post, which will have the same main title as this series (“Pulling Out of the Traffic”) but a different subtitle, I plan to discuss an example of how we can “keep our memories green,” as it were.

 

* As Knausgaard found himself given his one opportunity, as he describes in the passage I cited at the beginning of my preceding post in this series.

 

** That, in turn, is something very different from demonstrating one’s “fidelity” to some “brand,” such as Coors or Budweiser when it comes to drinking beer, or Republicans or Democrats when it comes to electing politicians.

 
