Pulling Out of the Traffic: The Après-Coups After The Coup (2)

Second After-Shock*: Accidental Strokes of Necessity

Art is good when it springs from necessity. This kind of origin is the guarantee of its value; there is no other.

– Neal Cassady

Our daughter has two cellos. To go with them, she has two cello-cases. Both cases are pretty well covered with various stickers and post-ups that have struck her fancy from time to time. When we went to San Francisco recently to watch her play the cello in a quartet representing Classical Revolution, as part of The Coup’s Shadowbox premiere, I noticed a new sticker on one of her cello cases. It had the lines above, from Neal Cassady.

That’s the same Neal Cassady who inhabited the heart of the Beat movement. Later he was not only “on the bus,” but even drove it. He drove the bus—namely, the psychedelic bus filled with Ken Kesey and his Merry Pranksters, the same bus Tom Wolfe eventually rode to fame in 1968 with the publication of The Electric Kool-Aid Acid Test, that foundational text of the “New Journalism” that already long ago became old hat.

I didn’t notice our daughter’s new (to me at least) Neal Cassady sticker till a day or two after we’d attended Shadowbox, and when I read Cassady’s remark it resonated for me with my experience of the concert. That resonance was deepened when, even later, I noticed a brochure our daughter had lying on a bookshelf—an advertisement for the concert we had just attended. Put out by the Yerba Buena Center for the Arts and by Bay Area Now, the brochure started with “A Note from the Curators”—Marc Bamuthi Joseph, YBCA Director of Performing Arts, and Isabel Yrigoyen, Associate Director of Performing Arts—to which I’ll eventually return. That was followed by “A Note from the Artist,” in which an explanation, of a certain sort, was given for titling the concert Shadowbox. It read:

Late one night in the skies over Oakland, a strange object appeared. A cube. Perfectly still, 200 feet in the air. A reflective black box, with a neon glow surrounding it. Thousands of people hurriedly got out of bed, or filed out of bars and house parties, or left the cash register unattended—to stand on the street and gaze at the sight. Dogs barked and howled, louder and louder, in various pitches and timbres until it was clear that there was a consistent melody and harmony to their vocalizations. The cube started trembling, sending out a low vibration that made the asphalt shake, windows rattle, and car alarms across the city go off. Thousands of car alarms went off in a tidal wave of honks, beeps, and bleeps until they formed a percussive rhythm that accompanied the dogs’ beautiful howling. From the cube, a kick drum was heard that tied it together. A spiral staircase descended from the box. Only a few dared enter. What those few experienced has been the subject of several poorly made documentaries, an article in US Weekly, and three half-assed anthropology dissertations. What you will see tonight is a re-enactment of that experience.

I suggest that the “re-enactment” at issue be taken in the sense of an enacting again, as legislators are said to re-enact a law that will otherwise expire, rather than in the more ordinary sense of a miming, an acting out, as a community theatre group might re-enact Tennessee Williams’ A Streetcar Named Desire or Walt Disney’s Dumbo, or as a bunch of court stooges might re-enact a crime in a courtroom at the behest of a prosecuting attorney, let’s say. The Coup’s Shadowbox doesn’t just represent or mime the enactment of community that seems to have proven necessary following the sudden, unaccountable appearance—“fictitiously,” of course (and I’ll eventually return to that, too)—of a strange, black cube suddenly hovering in the sky over Oakland one night.

After all, The Coup—although it may be erotically capitalist and even, for all I know, scientifically fascist—is “politically communist,” as Wikipedia has it; and what The Coup is trying to do in Shadowbox, at least if we are to believe (as I do) Coup front-man and co-founder Boots Riley, is to get everybody moving. And although the movement at issue may be a dance, it is a dance that even such dance-dysfunctional still-standers as myself can join in, as I also wrote about last time. It is a political dance.

Which brings me to Jean-Claude Milner.

*     *     *     *     *     *

According to Jean-Claude Milner, ever since the ancient Greeks, politics—which term is itself derived from a Greek word, of course: polis, “city”—has been a hostage of mimesis, which is to say of just the sort of acting-out, of play-acting, that “represents” the action it mimes without re-presenting it, that is, without committing that action again. The mimetic re-enactment of a murder as part of a courtroom trial does not culminate in a second murder. In the same way, politics as the mimetic re-enactment of whatever acts mimetic politics re-enacts does not result in any new enactments of those original acts.

The acts that mimetic politics re-enacts are acts whereby the polis or “city” itself–which for the Greeks meant, in effect, the place where all real, truly human be-ing took place, to use again a way of speaking I favor—is first opened and set up, then kept open and going after that. From the days of the ancient Greeks until relatively recently, in one way or another such decisive political acts were taken not by everyone together, but only by a few.

Of course, those few invariably found it useful to represent themselves as making their decisions for the good of “all.” As Milner points out, however (3rd treatise, page 58**): “It is always in the name of all that each is mistreated.”

For the few who did make the decisions, and then impose them on everybody else, to keep their claim to be acting for the good of all even remotely plausible, it always also helped to get “the people”—as we’ve long grown used to calling those the rulers rule over, though the term is supposedly inclusive of both—to believe that they were somehow actually participants in the decision-making itself. Those who were being decided over needed to be kept down on the farm, as it were, regardless of whether they ever got a chance to see Paree or not. The decided-over needed to be given the impression that somehow they were themselves deciders—as President George W. Bush once in/famously called himself.

Milner argues that classically, among the ancient Athenians, the theatre, specifically as staged in the great public performances of tragedies, was the crucial device that permitted the governors to govern those they governed—that is, permitted those who exercised power over others to keep those others in line. It did so by regularly bringing together all those who counted as “the people”*** to witness re-enactments, by actors behind masks, of the heroic deeds that were taken originally to have defined the people as the very people they were (with running commentaries provided by choruses that took over the job of being mouth-pieces for “the people,” who were thereby relieved of any need to speak for themselves). By so convening to witness such re-enactments, the citizenry—the public, the people—actually constituted itself as such.

Furthermore, in being brought openly together as an audience to witness the re-enactments of the original, originating tragic acts of the great heroes of Greek tradition, religion, and mythology, the people were also brought, through empathy, to vicarious identification with those people-defining heroes themselves, and their suffering for the people’s sake. Through such identification the people as audience were allowed to process the terror and pity with which the mimetic re-enactments of tragedy filled them, achieving catharsis, as Aristotle observed. That also helped keep them down on the farm.

Precisely because they were assembled as such an otherwise passive audience for the spectacle of decisive acts re-enacted or mimed in front of them, the people were effectively distanced from the underlying definitive decisions and actions being so mimed. They were allowed to feel a part of what was being re-enacted before them, in the sense of being mimed or “acted out,” while they were simultaneously being distanced from all the underlying genuine action itself. They could marvel and weep as “destiny” unfolded itself in the actions being mimed before them, while being dispensed from the need to undergo that destiny themselves.

As Milner puts it (2nd treatise, page 59): “That distanced object, which in the crucial tradition of tragedy was called destiny, carries in politics, of course, the names: power, state, liberty, justice, or quite simply government.” What is more, he says, in our times the role that used to be played by tragic theatre is now played by—political discussion: the endless expression of opinions compulsively formed about political matters. Such discussion permits the discussants to think that they are really part of the political action, when in fact they are distanced effectively from it by the endless palaver about it. They are merely playing at politics, the way children play at being adults. They are “actors” only in that mimetic sense, not in the sense of decisive agents.

The difference, however, is that today, unlike in ancient Athens, everybody is reduced to the status of such a mere play-actor. That even includes the few who presumably, in the days of the ancient Greeks and for a long while thereafter, used actually to govern—to be genuine agents or “deciders.”

The reality today is simply this: No one decides, decisions just get made. Things of themselves get decided, as though things themselves are dictating the decisions—hence the name of Milner’s first short political treatise, which translates as The Politics of Things—but without anyone doing the actual deciding.

Accordingly, as I already indicated in my previous series of posts on “The Future of Culture,” no possibility of clearly assigning responsibility for decisions remains. Even more importantly, there are therefore no identifiable political pressure points, points where political pressure might be exerted in order to effect significant change. Everything just keeps on chugging along, with no one directing anything, despite how deluded some may still be into thinking they have some impact (for example, the President of the United States, whoever that may happen to be at any given time). The whole thing is no more than a dumb-show. Nobody is in charge of anything.

*     *     *     *     *     *

Sometimes, though, lightning strikes. Or suddenly a huge black cube with a neon glow appears in the sky. The Coup comes, and folks get moving.

*     *     *     *     *     *

Necessity is not causality. For necessity to emerge, in fact, the causal chain must actually be broken. Causality brings inevitability, Nietzsche’s “eternal recurrence of the same”—always the same old same old, never anything truly new under the sun (or the moon and stars at night). The necessity that Neal Cassady says is the only guarantee of real worth in art is not causal inevitability. It is the necessity, the need, of creativity—the need of a pregnancy brought full term finally to burst and bring forth new life.

Any child born of such necessity always comes unexpected. The child always comes as an unexpected, un-expectable surprise, even for parents long filled with the knowledge that they are “expecting.” What can be expected is at most a child, one or another of the innumerably substitutable instances of the class of children, but never this child, the very one who so suddenly, so urgently, so imperiously, insistently comes into the world, and who, once come into it, simply demands, by its very being there, to be named.

Giving a name in the sense of what we call a “proper” name—which is to say “insofar as it is not just another name” (as, for example, dog, Hund, or chien are just three names for the same thing), that is, a name “insofar as it [names] not just anyone,” as Milner writes at one point (3rd treatise, page 75)—always “appears as an obstacle” to whatever or whomever claims to act in the name of “all.” What Milner means in that context is “all” taken in the sense of a closed totality, such as what is ordinarily called a “nation,” for example, the “borders” of which must be secured and protected. The singular, the radically unique, what escapes number, substitutability, and, therewith, any capacity to be “represented” by another, always constitutes a threat to all claims to special authority in the name of any such totalizing “all.”

However, universal quantification, as logicians call it, over “us” or over “human being”—as in “all of us,” or “all human beings”—need not be the move to any such totality as a “nation.” The “all” need not be taken in any such collective sense. Instead, the “all” can be taken in the distributive sense of “each and every single one,” so that “all of us” means each and every one of us as someone who has been given, or at least cries out to be given, a proper name, a name by which that singular one, and that one alone, no other, can be called.

The name by which the singular individual is called, however, calls that one as just that very one, and not as no more than an instance of what that one has in common with a bunch of other ones—for example, being black, white, brown, or yellow, young or old, educated or uneducated, employed or unemployed, American, Mexican, Honduran, Syrian, Iranian, or Indian. The bearer of a proper name—by which I would like above all to mean a name that is truly just that, a genuine name, and not a mere place-holder for a description—is no mere instance of a type, replaceable with any other. The bearer of a proper name is, rather, irreplaceable. (Regular readers of my blog might think of Fluffy, my daughter’s childhood pet guinea pig, for instance.)

*     *     *     *     *     *

As cacophonous as it may initially sound to say so—like the sound of multiple dogs howling and multiple horns blowing in the night—it is only such an irreplaceable singularity that can be “necessary” in the way Neal Cassady says the authentic work of art is necessary. The necessity of artistic work is the same as the necessity of seizing one’s one and only opportunity to become who one is, when that opportunity suddenly presents itself. It is the same as the necessity of joining the fight against injustice into the reality of which one is suddenly given clear insight, or the necessity of giving oneself over completely to a suddenly awakened love. In short, it is the necessity of selling everything one owns for the sake of pursuing what one is given to see is priceless.

Necessity is order, to be sure. However, it is the order that comes from the unexpected emergence of connection between what theretofore seemed to be no more than a randomly thrown together bunch of discrete, isolated facts. Necessity gives birth to the cosmos. That word comes from the Greek for “ordered whole,” a word that originally meant “ornament,” which is why we also get cosmetic from it. Cosmos is the “all” of everything insofar as everything has been brought together into one coherent whole, like an ornament. Cosmos is the ornamental whole of everything emerging out of chaos, itself also a Greek word, one that originally meant something like “yawning gap.” Necessity is the origin of that genuine cosmos which is the coming into an ordered whole of chaos itself. Necessity is the origin of that order that is not imposed upon chaos from without, as though by some ruler, but that arises, instead, of necessity, from chaos itself.

Among the same ancient Greeks to whom we owe tragic drama, the emergence of cosmos from chaos was attributed to Zeus. However, Zeus, the god of thunder and the thunder-bolt, was not himself without genesis. King of the gods he might have been, but Zeus himself came from the chaos; and if he came to order the latter, he still came at its bidding, and from within. He came of necessity, which origin demonstrates the authenticity of his glory.

*     *     *     *     *     *

Coming from out of the Greek chaos, Zeus also came from out of the Greek imagination, that same imagination from which sprang all the gods of Greek mythology. The order that the Greek imagination attributed to Zeus was itself anything but an imaginary order. Nevertheless, its origin—and its guarantee of worth, which is also to say its real necessity—lay in the Greek imagination.

Imagine that!

*     *     *    *     *     *

I will try to imagine something of it, in my next post, which will continue—and, I think, end—this present series on the after-coups of The Coup.

* Only while writing this post did it occur to me to call the separate posts of this series not “Parts,” as I had it when I put up the series’ first post a few days ago, but “After-Shocks,” which is much more appropriate. So I went back and edited my first post of a couple of days ago. First, I slightly changed the title. Originally, I had used après-coup, French for “after-shock,” in the singular. I turned that into the plural, après-coups. Then I changed the title of the series’ first post itself from “Part One” to “First After-Shock.” Thus, it was only by one of the smaller après-coups of the coup delivered to me by attending The Coup concert that I was coincidentally struck by the need to change my titles a bit. Appropriate indeed!

** Milner has published three “short political treatises,” all brought out in France by Verdier: La Politique des Choses is his Court traité politique 1 (2011), followed by Pour une politique des êtres parlants as treatise 2 (2011) and L’Universel en éclats as treatise 3 (2014). I will give references in the text of this post, when needed, by the number of Milner’s treatise, followed by the page number at issue.

*** That is, the “citizens,” which means literally the inhabitants of the “city” as such, the polis, the place where human being took place. So, of course, that left out slaves, women, and all the other others who simply didn’t count—including counting as fully human, since they were not “citizens,” not full-fledged inhabitants of the place human beings as such inhabit. As non-citizens, those other others didn’t need to be brought on board the city boat because they were simply subject to force, with no need to rely on subterfuge—conscious and deliberate or not, who cares?—to make them think they were free even while they were being coerced.

Pulling Out of the Traffic: The Après-Coups After The Coup (1)


This is the first in a new series of posts.

*     *     *     *     *     *

First After-Shock:  The Coup and I

Just a week or so ago, my wife and I flew all the way across the country from New Jersey, where we are summering, to California. We made the trip in order to attend Shadowbox, a new multimedia project put together by the hip-hop group The Coup, which was having its world premiere at the Yerba Buena Center for the Arts (YBCA) in San Francisco.

If you have no idea who The Coup may be, don’t feel alone. I had no idea either, until I attended the concert. Only then did I begin to get an idea of who The Coup may be—an ideational process very definitely still in progress.

We went to The Coup performance in order to see our daughter, a cellist who lives in northern California, perform in a string quartet from one of the other musical groups that The Coup had given a role in their concert. The Coup does that.

Indeed, that is one good place to start knowing who The Coup is—or at least it is for me, given what I saw that night. The Coup is a group of musicians that goes out of its way, whenever and wherever it performs, to share the spotlight, whose shine its presence generates, with other, lesser-known, more “local” groups. Rather than laying claim to all the glory for itself, The Coup would seem to glory in sharing the glory with others.

So who The Coup is, is a group that builds up groups. At least judging from Shadowbox, a Coup performance is the opening up of a place, a space, where groups of musicians, including The Coup itself, can play music together. At the event my wife and I attended, those who played music along with The Coup, on the three stages set up for the purpose, with The Coup on the center-stage, were what some of the YBCA promotional material describes as “up-and-coming Oakland experimental soul act Mortar & Pestle, new wave folksters Snow Angel, NOLA-style second line outfit Extra Action Marching Band, and neo-chamber orchestra Classical Revolution,” the group that included our daughter on cello. Also playing music were some “special guests,” including “longtime Riley co-collaborators and fellow revolutionary hip-hop torchbearers dead prez.” Then there was also “alternative puppet troupe Eat the Fish Presents” (which, as the name suggests, provided puppetry as well as music)—as well as various other musical participant-guests, of both Bay-area and broader provenance.

The same space The Coup opens for musicians to come and play along is also open to others, besides musicians—others who are also invited to enter and play along, each after each’s own fashion. In the case of Shadowbox, those “others” included visual artist Jon-Paul Bail, who created the noteworthy graphic-art murals that hung on all four sides of the performance space, and production designer David Szlasa, as well as comedian W. Kamau Bell. The “others” also included all the members of the audience who attended the two sold-out premiere performances on August 16. Most of that audience played along by dancing, hopping, jumping, writhing, gyrating, hand-lifting, gesticulating, waving, and in other ways noticeably moving around physically. Some did that more than others, of course. Then, too, there were other “others” who just stood there pretty much immobile. I was one of those other others (and I’ll return to me soon, as I always like to do). In one way or another, musicians or muralists, puppets (and puppeteers) or comedians, gyrators or still-standers, “artists” or “audience,” we all took part in the performance itself, becoming, at least for those few hours, a richly diverse community of our own.

Indeed, judging from my experience of Shadowbox, a Coup performance is precisely that: the creation of a space, an opening, where community can—and in one manner or another actually does—occur. Thus, one might say that a Coup performance creates a communizing space.

That is not a bad way to put it, “a communizing space.” Boots Riley, front-man and lead for The Coup, who co-founded the group back in the beginning of the 1990s, self-identifies as a “communist.” According to Wikipedia (http://en.wikipedia.org/wiki/The_Coup), The Coup itself is “politically communist.”*

The end of the Cold War had at least one good side-effect: It made it possible even for Americans to use the term “communist” in a positive way and still find wide popularity, as the success of The Coup attests. The Wikipedia entry for The Coup also tells us how the “communism” at issue for Riley and The Coup is to be defined. It quotes Riley as saying: “I think that people should have democratic control over the profits that they produce. It is not real democracy until you have that. And the plain and simple definition of communism is the people having democratic control over the profits that they create.”

Not a bad definition. Not a bad idea.

Correlated to that idea is something else Riley said at a couple of points during Shadowbox itself, when there would be a pause in the music and other action and he would briefly just speak into the microphone. It was that, when we find ourselves part of a movement—such as the Occupy movement, in which Riley himself has played a part, especially in the Oakland area, or the “communist” movement to give “the people” themselves control over the profits their own efforts create—we no longer act and live just as isolated individuals, but as parts of a whole, of an “us,” in effect.

One of the times he said that sort of thing, Riley added that such movements are the genuine way to address the real problems that we face, which, he affirmed, are not just a bunch of isolated, individual problems. “Our” real problems are not just my problems, plus your problems, plus his, her, and their problems, as our global-economy simulacrum of a culture would have us believe (my words there, not his—though I doubt he’d spit them out in disgust). Rather, “our” real problems are group problems, problems that “we” have together (and that we therefore must also address together, in “movements”).

So the message he was delivering nicely matched the delivery-system he was using to deliver it, that is, the delivery-system of The Coup’s Shadowbox project itself, which was such an inclusive, “all of us” sort of thing, as I’ve tried to make clear, and as it so powerfully struck me as being. That effectively effected creation of a new body, of which I experienced myself to be a part, was a coup The Coup strongly delivered to me, at least. Yet at another of the times Riley said the thing about movements, a bit earlier that same night, I seemed to receive a counter-coup, as it were. The way I was struck by something else he went on to say at that point ended up in-cluding me personally as part of the “us” of the community at/of the performance only, paradoxically, by ex-cluding me. I’ll try to explain.

What Riley said on the occasion in question was to the effect that trying to address the real problems we all face together by trying to maintain our perceived, precious “independence” in refusing to let ourselves become involved in any “movements,” was “like going to a Coup concert and not dancing.” The only way you really could “attend” a Coup concert, he said, was by joining the dancing. Otherwise, you weren’t really in attendance at all. In my words: Your body might have been there, but you weren’t.

My problem, however, is that, you see, I don’t dance. Often, no-longer-drinking drunks such as myself share with one another how they never danced when they were sober, but that once they belted a few drinks they were disinhibited enough to do so. Well, as I will often tell such other now-abstinent drinkers, not only did I not dance when I was sober. I also did not dance when I was drunk. (“But when I drank, I didn’t give a shit,” I always like to add.)

Well, I could tell you that on Saturday night, August 16, in the Yerba Buena Center for the Arts during The Coup’s Shadowbox, when Boots Riley said what he did about how the only way to attend a Coup concert is to join in the dancing, he inadvertently threatened my nearly 28-year sobriety! I could tell you that. But I won’t. It would be a lie.

What he did do, though, was to challenge (say “threaten,” if you like, it doesn’t matter) my sense of being part of the “we” who were all there together in the Yerba Buena Center attending the Coup concert that night. If you have to dance in order really to attend a Coup concert, then it seemed I was not in attendance, despite my physical presence, my mental presence, and even my shock from the coup The Coup was delivering to me. That left me uneasy and uncertain, since my desire, grounded in my multidimensional presence to the presentation that night, was to be one of “us,” and not just some isolated, dis-involved “me.”

My uneasiness and uncertainty did not last long, however. It found itself dispelled when, a bit later, Boots Riley spoke again about “movements,” and said what I recounted first above—the business about our problems really being our problems, a matter of the group, and not just the personal, individual problems of each one of us. Hearing him say that, and appreciating its truth, suddenly gave me the insight that my own lifelong, total, immobilizing disability/disinclination/dis-capacity to dance—and therewith my very isolation and exclusion—was, if you will, not my fault. I was not to blame for it. My problem was in that sense not just my problem any longer, it was our problem.

I believe that I’ve shared before on this blog a line I treasure from the literature of Narcotics Anonymous. NA is a Twelve-Step group for which I lack the qualifications for membership, insofar as narcotics were never my thing at all. Nevertheless, I easily identify with NA members and have nothing but respect for NA as a group. In fact, it would not be at all off the mark to say that, when it comes to NA, I feel myself to be of the group even if I am not in it, as it were.

That is itself an example of what I’m trying to describe about my non-dancer’s relation to the dance requirement for membership in the group/community constituted by and in participation in The Coup concert I’m addressing: an example of how ex-clusion itself, properly undergone, can be a vehicle for a new, more inclusive in-clusion of its own. But that’s not why I brought NA and its literature up. Rather, I brought it up because of the line from that literature that, as I’ve already mentioned, I treasure. In that line the NA member-authors say, with regard to their being hooked on narcotics, “We are not to blame for our own addictions, but we are responsible for our own recovery.”

Well, what struck me when Boots Riley made his remarks about how our problems are group problems, and not just individual problems, was that I was not to blame for my own dance-disability, but that I was responsible for my own recovery from it.

Recovery from dance-disability does not consist in all of a sudden miraculously acquiring the capacity to go out and dance, dance, dance the night away. If it did, then it would not be my own responsibility at all. It would be God’s responsibility, or the responsibility of the dance-doctors, or of whatever other higher authority took care of such things, if there is any such authority. Recovery from dance-disability consists of making and then keeping the decision not to let one’s inability to dance exclude one from the party. There’s more than one way to dance, and the challenge to those who would recover from dance-disability is to find how to make not-dancing into its own way to dance.

As it happens, what came to my mind on the recent evening of August 16 as I stood listening to The Coup in the Yerba Buena Center for the Arts in San Francisco, and I heard Boots Riley remind us that our problems are really our problems, was not that fine line from NA. It was, instead, Russell Banks’ 1989 novel Affliction.

I used Banks’ later novel Cloudsplitter, about the abolitionist John Brown, in the final post of the preceding series on this blog. I finished writing that post just a day or so before my wife and I took off for San Francisco to attend The Coup’s concert. (Our internet connection went down just before I finished writing the post, so even though it was already written before we left, I did not actually post it until just the other day, after we got back to New Jersey and were able to get our internet service back.) Because of using Cloudsplitter in that preceding post, I had decided to go back and read Affliction, which I’ve meant to read for years, but never got around to till now. So I downloaded the e-version of the book and took it with me to read while we visited California and attended The Coup’s concert.

Affliction is the story of Wade Whitehouse, a 41-year-old man. Interestingly, Boots Riley is roughly the same age now, so perhaps my mental pairing up of the two at the concert on August 16 was in part affected by that analogy. At any rate, Wade Whitehouse is an American male who is afflicted by a not uncommon American male condition. He comes from a home with an abusive, alcoholic father and a passive, acquiescent mother, and hasn’t a clue about how to own his own feelings, ambitions, aspirations, or, in short, life. Wade is robbed of himself, through no fault of his own. He is no more to be blamed for his affliction than narcotics addicts are to be blamed for their addictions.

Nor does the narrow, rural New England world in which he lives offer Wade any real possibility of escape. Indeed (and this is really the same thing, just put a bit differently), it offers him no real possibility even to become fully aware of his own condition. Thus—unlike narcotics addicts fortunate enough not only to bottom out into desperation, but also to find a new option, unavailable to them until then, through NA or the equivalent—Wade is never given so much as the opportunity to assume responsibility for recovering from his afflicted condition.

As a result, he gets locked into repeating the very cycle of violence and abusive parenting (only with differences, of course, as is always the case in such cases) that he so longs to escape. But there is no escape for him, and Banks’ novel (at least read at the surface level, which is what I am doing in my account of it here) chronicles his relentless spiraling downward into violence and murder.

Wade Whitehouse came to my mind on the night of August 16, just a bit over a week ago, when I was feeling so left out of things at the Coup concert and heard Boots Riley talk about our problems being group problems, and not just individual problems. Hearing his remarks triggered my memory of Russell Banks’ novel, which so caringly details how Wade Whitehouse’s problems were, just as Riley was saying, not just Wade’s individual problems, but were generated by the whole constellation of factors that made up Wade’s world: They were “group” problems.

Unlike Wade Whitehouse, who was offered no options, I have found myself offered options for recovering from the afflictions with which I have myself been beset. I have been offered such options more than once, for more than one affliction—or at least for more than one manifestation of my affliction, if there is really only one, in the final analysis. On the night of August 16, 2014, I was offered an option for recovering from the affliction of my radical, total, and irremediable dance-disability. I was shown that my very not-dancing could become, if I would have it be so, a dancing of its own.

For that, I would like to thank The Coup.

*     *     *     *     *     *

Next time, Part Two.

* I must confess I’m not sure whether Wikipedia means that the politics of The Coup is communist, or wants to suggest that there are non-political ways of being communist, such that The Coup might not be communist in those other, non-political ways (maybe The Coup is politically communist but erotically capitalist, for example—whatever that would mean). Either way, the remark raises some questions worth a thought or two—questions that could be summed up under two, using two richly ambiguous expressions: Just what is the politics of art? And just what is the art of politics?

Pulling Out of the Traffic: The Future of Culture (5)

This is the final post in a series of five under the same title.

*     *     *     *     *     *

In my lifetime up to that point and for many years before, despite our earnest desires, especially Father’s, all that we had shared as a family—birth, death, poverty, religion, and work—had proved incapable of making our blood ties mystical and transcendent. It took the sudden, unexpected sharing of a vision of the fate of our Negro brethren to do it. And though many times prior to that winter night we had obtained glimpses of their fate, through pamphlets and publications of the various anti-slavery societies and from the personal testimonies given at abolitionist meetings by Negro men and women who had themselves been slaves or by white people who had travelled into the stronghold of slavery and had witnessed firsthand the nature of the beast, we had never before seen it with such long clarity ourselves, stared at it as if the beast itself were here in our kitchen, writhing before us.

We saw it at once, and we saw it together, and we saw it for a long time. The vision was like a flame that melted us, and afterwards, when it finally cooled, we had been hardened into a new and unexpected shape. We had been re-cast as a single entity, and each of us had been forged and hammered into an inseparable part of the whole.

. . . .

Father’s repeated declarations of war against slavery, and his asking us to witness them, were his ongoing pronouncement of his lifelong intention and desire. It was how he renewed and created his future.

– Russell Banks, Cloudsplitter: A Novel


There is a way of building that closes down, and there is a way of building that opens up. Correspondingly, there is a way of preserving that checks locks and enforces security, and there is a way of preserving that springs locks and sets free.

Cloudsplitter is Russell Banks’ fine 1998 novel of the life of the great American abolitionist John Brown, as told through the narrative voice of Brown’s third son, Owen. What Banks/Owen describes in the passage above is a building and then a preservation of the second sort, the sort of building that opens up, then the sort of preservation that keeps open.

The passage comes from relatively early on in the long novel, in the second chapter. What is at issue is at one level a very minor, everyday thing (everyday, at least, in 19th century American families such as John Brown’s): a shared family reading, begun by John himself, then continued by other family members in turn, each reading aloud from the same book, passed on from one to the other.

What the Browns are reading at that point in the narrative is a book recounting the horrors of American slavery. The book does that very simply and straightforwardly. It just presents page after page of the contents of ads of a type often placed, at the time, in newspapers—throughout the slave-holding states, at least. They are ads in which property owners who have suffered thefts of a certain kind solicit help, mainly for monetary reward, to track down and retrieve their stolen property. The property at issue consists of human beings owned as slaves, and the thefts at issue have been committed by that property itself—that is, by slaves who have tried to steal themselves away from their lawful owners, by running off. In ad after ad, slaveholders detail the scars that they have inflicted on the faces, backs, limbs, and torsos of their slaves. The slave-owners catalogue such traces of whippings, cuttings, burnings, and other abuses they have inflicted on their slaves, in order that those traces might now serve, in effect, as brand-marks by which their (self-)stolen goods can be identified, in order to be returned, it is to be hoped, to their rightful owners.

The experience of listening together to such genuinely reportorial reading during the evening at issue galvanizes the Brown family into a “new body,” to borrow an exactly apt term from Alain Badiou’s seminar on “images of the present times” (in which at one point he cites Cloudsplitter, and praises Banks). Until that uneventful event of such simple family reading of an evening, the Browns had been, despite all family relations, affection, and sharing, no more than a collection of individuals—just instances of a family named “Brown,” as it were. “It took,” as Banks has Owen tell us in the passage above, “the sudden, unexpected sharing of a vision,” a vision “like a flame that melted us,” truly to meld them together and “re-cast” them “as a single entity,” in which each one of them “had been forged and hammered into an inseparable part of the whole.”

In the quiet of their family kitchen, their shared reading that evening brings the Brown family—brings that family as a whole and in each of its family members—to a point of decision. In the fire of that experience the family, each and all, is brought to decision; it gets decided as it were. That night, the family gets resolved. And so it will remain, one way or another.

Lapses will continue to remain possible, of course. In fact, they will all too often actually occur. One or another family member—now Owen, now one of his brothers or sisters, now even “the Old Man,” John himself—will lose his or her resolve, becoming irresolute again. But that will no more rescind the resolution than the breaking of a marriage vow rescinds that vow.

Broken vows and lapses in resolve are betrayals and acts of infidelity. As such, they do not cancel out the original vows or resolutions. Rather, they call for acts of contrition, repentance, and expiation, and, above all, a return to fidelity—that is, they call to renewed faithfulness to the vow or resolve that was betrayed.

*     *     *     *     *     *

In Toward a Politics of Speaking Beings: Short Political Treatise II—Pour une politique des êtres parlants: Court traité politique II (Verdier, 2011), page 56—Jean-Claude Milner cites the 1804 remark, often attributed to Talleyrand, “It’s worse than a crime, it’s a mistake.” As Milner points out, a “mistake” is, at most, a significant “error in calculation.” It is therefore the sort of thing that may indeed sorely need to be corrected. However, unlike a crime, “it does not need to be expiated.”

*     *     *     *     *     *

“We blew it!”

That’s said by Peter Fonda’s character in Easy Rider, the classic 1960s buddy-movie about two hippies’ cross-country motorcycle journey together—costarring Dennis Hopper, who also directed the film. Fonda delivers the line at a roadside campfire, after the two have done their thing in New Orleans for a while. It comes just before Hopper’s character gets blown away with a shotgun by a Southern cracker in a pick-up.

The moral of the story? Don’t blow it—or you’ll be blown away!

*     *     *     *     *     *

Exactly how the two hippie bikers in Easy Rider “blew it” is open to diverse interpretations. However, by any interpretation worth considering, “blowing it”—whether done by the characters played by Hopper and Fonda in that movie, or by the members of the Brown family in Banks’ Cloudsplitter, or by whomever in whatever circumstances—is not a matter of an error in calculation. It is no omission or oversight in cost-benefit analysis, no limitation in one’s capacities for “rational decision-making.” In short it is not a mistake.

It is a crime.

“Blowing it” is not necessarily—or even in any important case—a crime in the sense of a violation of any law of any such state as Louisiana. It is a crime, rather, in the sense of a breach of faith, a failure to keep faith—above all, a failure to keep faith with oneself. As such, it cries out not for correction, but for expiation.

*     *     *     *     *     *

The institution of American slavery was a crime, not a mistake. It was a human betrayal of humanity, not an error in calculations or a failure in “rational decision-making.” In the passage I have cited from Banks’ novel, John Brown’s third son Owen and the rest of John Brown’s family were brought together—which should itself be read in a double sense here, to mean both that the whole bunch of them were brought, and that the bunch of them were brought no longer to be just a bunch, but to be a true whole—by an insight into the reality of that institution, American slavery. Given such insight by nothing more than the everyday event of an evening’s family reading, they were thereby brought to a point where they no longer had any choice but to join the family patriarch in his declared war against that criminal institution. They either had to join John Brown, or betray him—and, along with him, themselves.

To find oneself at such a point of decision—but what am I saying? To be brought to such a point of decision is precisely to find oneself! So I should have said that to find oneself at last, by being brought to a point of decision, is precisely in such a way to be given no choice. At such a point, one “can do no other” than one is given as one’s own to do, as Luther said at the Diet of Worms in affirming his continuing defiance of the Church hierarchy and its self-claimed “authority.” One can do no other at such a point than what one finds oneself, at and in that point, called to do.

If one does not heed that call, then one lapses back into loss of oneself, lost-ness from oneself, again. Thus, as I have written in this series of posts before, at a point of decision, one is not given two equally passable options, between which one must choose. Rather, one is given one’s one and only opportunity, the opportunity at last to become oneself, to make one’s life one’s own.*

When one is faced with such an opportunity, such a moment of clarity, such a point of decision, if one even bothers to “count the costs” before declaring oneself, then one has already declared oneself—already declared oneself, namely, to be a coward and a criminal. By counting the costs before one makes up one’s mind in such a situation, at such a point, one has already lost one’s opportunity, and, with it, any mind worth keeping, no matter how “rational” that mind may be. One has blown it.

*     *    *     *     *     *

In 1939 Random House published a new novel by William Faulkner. Faulkner had given his work the title If I Forget Thee, Jerusalem. In the novel Faulkner interwove two stories, each of which could perfectly well stand on its own, as each—one of the two, especially—has often been made to do, in anthologies and other later re-publications of Faulkner’s works. One such potentially autonomous story is called “Wild Palms,” and the other one, which is the one most often published just by itself alone, is called “Old Man.”

Faulkner took the title he gave the combined whole of the two tales from Psalm 137 (136 in the Septuagint numbering), which sings out Israel’s own vow not to forget Jerusalem during Israel’s long captivity in Babylon. It is an intemperate psalm, declaring an intemperate vow, which is intemperately sealed by a prayer that the singer’s right hand might wither, and the singer’s tongue cleave to the roof of the singer’s mouth, if that vow is not kept. The psalm then intemperately ends by calling down wrathful vengeance on the Babylonians, blessing those of that city’s enemies who might one day, as the psalmist fervently hopes they do, seize the Babylonians’ children and bash their brains out on the rocks.

Especially today, decent, rational folks are shocked by such sentiments.

They didn’t seem to shock Faulkner, however. Or, if they did, it would seem to have been with the shock of insight and recognition, since he not only chose a crucial line from the psalm as the title to his double-storied 1939 novel, but was also chagrined—and protested, to no avail—when Random House, on the basis of its own cost-benefit analyses no doubt, made the quite rational decision to refuse to bring the book out under the title Faulkner had given it. Instead, they took the title of one story (with ironic justice, it turned out to be the title of the story that has subsequently “sold” far less well than the other, in the long run, judging from subsequent re-printings/anthologizings) and published the whole as The Wild Palms. Not until 1990, twenty-eight years after Faulkner’s death, did an edition come out under the title Faulkner originally chose.

The Wikipedia entry for If I Forget Thee, Jerusalem (http://en.wikipedia.org/wiki/If_I_Forget_Thee,_Jerusalem) characterizes the novel as “a blend of two stories, a love story and a river story,” identifying “Wild Palms” as the former and “Old Man” as the latter. However, the entry goes on to point out that “[b]oth stories tell us of a distinct relationship between a man and a woman.” Indeed they do, and I would say that, in fact, both are love stories—only that one is the story of a love kept, and the other the story of a love thrown away. Or perhaps it would be more accurate to say that one, “Wild Palms,” is the story of a decision to love, a decision boldly taken and faithfully maintained, regardless of the cost, whereas the other, “Old Man,” is the story of a refusal to decide to love, and of a cowardly clinging to security instead. The first is a story of love enacted; the second, a story of love betrayed.

I would say that, read with ears tuned for hearing, the Wikipedia entry brings this out very nicely, actually, in the following good, short synopsis:

Each story is five chapters long and they offer a significant interplay between narrative plots. The Wild Palms tells the story of Harry and Charlotte, who meet, fall in forbidden love, travel the country together for work, and, ultimately, experience tragedy when the abortion Harry performs on Charlotte kills her. Old Man is the story of a convict who, while being forced to help victims of a flood, rescues a pregnant woman. They are swept away downstream by the flooding Mississippi, and she gives birth to a baby. He eventually gets both himself and the woman to safety and then turns himself in, returning to prison.

To be sure! Whoever refuses the opportunity to love does indeed return to prison!

That’s just how it is with decisions, whether they be decisions to love, or to take to the streets in protest of injustice, or to hole oneself up in a room and read, read, read, in order to write, write, write—or, perhaps, the decision never to forget.

Faulkner’s story of Harry and Charlotte’s decision to love one another whatever the cost, especially when that story is read in counterpoint to his story of the old man who prefers the security of prison to the risks of love (and who is made “old,” regardless of his chronological age, precisely by so preferring), shows that such decisions can have serious, even fatal, consequences. Yet it also shows, even more strongly, that only an old coward would count such costs before deciding to love, when the opportunity to do so presents itself.

Most of us most of the time are old cowards. Far too often, all of us are. None of us never is. That, however, is no excuse.

*     *     *     *     *     *

Making a genuine decision is something very different from choosing between brands of beer, political parties, or walks of life–all of which are subject to the sorts of cost-benefit analysis that pertains to what is, in our longstanding “Newspeak,” called “rational decision-making.” In sharp contrast, making a genuine decision is nothing “rational.” Rather, it is taking one’s one and only chance to live, and to do it abundantly—rather than just going on surviving, hanging on and waiting around until one can finally “pass away.”

It is just because that is the nature of genuine decision that there is always an ongoing need, past the point of decision, after one has decided oneself, from then on to continue regularly admonishing oneself to stay faithful to one’s decision, to keep one’s resolution. For the same reason, it is essential, having made a decision, to continue regularly to ask for, and accept, whatever help one can get from others to keep to one’s decision—and, in turn, willingly to help others who have joined one in one’s decision to do the same: to “keep the faith,” as the old saying goes.**

It was in just such a way, “in repeated declarations of war against slavery,” and in repeatedly “asking [his family] to witness them,” and thereby making “ongoing pronouncement of his lifelong intention and desire,” his life-defining intention and desire, that John Brown “renewed and created his future,” as Banks has Brown’s son Owen say at the end of the passage cited above. So must it be not only for John Brown, but also for us all. Only with such help and such repetitions of our own declarations of whatever may demand such declaration from each and all of us, can we have any hope of “renewing and creating” our own future.

*     *     *     *     *     *

Since the ancient Greeks, the work of art has been taken as a paradigmatic cultural product, in the sense that I have been giving that latter expression. In 1935, when he first delivered his lectures on “The Origin of the Work of Art,” Heidegger argued that the work of the work of art, as it were—what the artwork does, we could put it—is to bring those on whom it works to a point of decision, to use my way of articulating it. The work of art, says Heidegger, this time still using his own terms, opens up a world, and sets that world up by anchoring or grounding it in the earth. The artwork is the very place where that takes place. As such, it is not interchangeable with any other place. Rather, it is absolutely singular, utterly unique: something truly new under the sun, something the like of which has not been seen before, nor will ever be seen again. It is one of a kind—namely, one of that very kind of kind that is really no “kind” at all, since it has only one “instance,” to use one of my ways of speaking from earlier in this series of blog posts.

The shock of such a work as such a place, the shock that such a work, such a place, is there at all, calls upon those whom it shocks to make a decision. That’s the work of works of culture, the produce of cultural production. So shocked, one can enter into the work of the work itself—as John Brown’s family in Banks’ novel entered into the work of John Brown (though he was no work of art, to be sure), when that family was suddenly shocked into seeing reality. Or one can decline so to enter into such work—and, in so declining, enter, despite oneself, into one’s own decline.

If one does not decline, but joins the work in its work—as John Brown’s family joined him in his—then one preserves the work. That does not mean, as Heidegger insists it does not, that one takes the artwork and locks it away safe somewhere. Rather, one preserves the work by doing what one must to keep open the world that the work first opened up. That is, one preserves the work of art by persevering in the work of that work, regardless of whether that work of art itself even continues to be around. Only in that way does one truly keep or preserve the work.

That includes keeping or preserving it “in mind,” that is, remembering it. To remember a work of art properly—that is, as the very work one seeks to remember—is not recurrently to call up any “memory-images” of it that one keeps locked away in one’s memory banks somewhere, whether those banks are in one’s brain or in one’s smart-phone or wherever they may be. Rather, properly to remember a work of art is to keep open the world that the work first opened, or at least open to it.

In just the same way, to stick with the analogy I’ve been using, those who preserved John Brown’s memory, once he was arrested by Federal forces and then hanged by the state of Virginia, did so not by erecting memorials to him at Harper’s Ferry or anywhere else. Nor did they preserve his memory by recurrently spending time looking at old pictures or other images of the man himself. Rather, those who preserved John Brown’s memory—those who did not forget John Brown’s body as it lay “moldering in the grave,” as the song says—did so by continuing to carry on the very war he had declared against American slavery. Well, just as John Brown continued to call people to decision even after his death, so can works of art call those who encounter them even after they have ceased to be at work themselves.

What is more, John Brown can continue to call us to decision even today. Even now—long after John Brown’s body has moldered completely away, and nearly as long since the war he waged morphed into the Civil War that eventually brought the institution of American slavery as he knew it to an end—we can still be moved by being reminded of him. It no longer makes sense to speak today of joining John Brown in his war against the institution of American slavery, of course. The world in which that did make sense is no longer our world today. Nevertheless, we can still continue to be moved (even moved to join new wars declared in our own day) by the memory of John Brown—moved that way by reading Russell Banks’ retelling of Brown’s story today in Cloudsplitter, for example, or perhaps by visiting memorials to the sacrifice he and the others who carried out the raid at Harper’s Ferry made.

In just the same way, the world that was opened up by and in the works of art of the ancient Greeks has been dead for a long time now, far longer than John Brown. Yet we can still be moved by visiting the remains of such works in the museums of our own day. The world those works themselves opened up is no longer there for us to keep open, any more than the war John Brown declared against the institution of American slavery is any longer one in which we can enlist. But being reminded that there once was such a world, just as being reminded that there was once such a war as John Brown’s to fight, can still bring us to a point of decision of our own, a point where we are at last given our “one opportunity,” as Knausgaard was once given his. Even reminders of long dead worlds brought to us by mere fragments of what were once genuine works of art, genuinely still at work as works in opening up such worlds, can deliver to us the message that an “archaic torso of Apollo,” according to Rilke in a poem of that name, delivers to those with eyes to see who visit it in the museum—the message, “You must change your life!”

The future of culture is dependent upon no more, and no less, than keeping alive the memory of such works. It does not even depend on the possibility that new works of such a kind-less kind will continue to be created. Even if they are not, the future still has culture—and, far more importantly, there still continues to be the future “of” culture, the future culture itself opens and holds open, which is to say the future as such—just so long as we keep on doing the work of preservation. There will be a future of culture so long as we truly do, but only so long as we truly do, “never forget.”

If we don’t remember, and do forget, then our right hands will wither, and our tongues will cleave to the roofs of our mouths, regardless of whether we pray it may be so or not.

*     *     *     *     *     *

In my next post, which will have the same main title as this series (“Pulling Out of the Traffic”) but a different subtitle, I plan to discuss an example of how we can “keep our memories green,” as it were.

 

* As Knausgaard found himself given his one opportunity, as he describes in the passage I cited at the beginning of my preceding post in this series.

 

** That, in turn, is something very different from demonstrating one’s “fidelity” to some “brand,” such as Coors or Budweiser when it comes to drinking beer, or Republicans or Democrats when it comes to electing politicians.

 

A Brief Interruption: Breaking into “The Future of Culture” to Silence “The Guns of August” (and Defend the Wisdom of 16-Year-Olds)

Two pieces in The New York Times for this morning (Thursday, August 7, 2014) caught my attention—and raised my ire.  Being the good American-born Baby Boomer that I am, I am interrupting my current series of posts under the title “Pulling Out of the Traffic:  The Future of Culture” to seek immediate gratification of my ire by posting this.  My next post will resume—and finish—the series this post interrupts (or such, at any rate, do I intend at the moment).

*     *     *     *     *     *

The last of three officially sanctioned New York Times editorials in this morning’s paper is under the title “The Guns of August.”  One of the relatively rare so-sanctioned editorials attributed to any named author, this one is attributed to a Mr. Serge Schmemann.  It addresses the diversity of perspectives represented by remarks made by various world politicians during the last few days, to mark the centenary of the start of World War I.  Specifically, Mr. Schmemann mentions France’s François Hollande and Russia’s Vladimir Putin, and points to how illustrative the remarks of those two are of the diversity of interpretations of “the Great War,” as it is often still called.  After briefly discussing Hollande’s and Putin’s two takes on “the war to end all wars,” to use another name by which the war at issue went during its day, Mr. Schmemann draws this sub-conclusion:  “These diverse interpretations underscore the inherent hazard of drawing parallels from history or of assigning responsibility and guilt for war.”* Mr. Schmemann then goes on, by a nice rhetorical sleight-of-hand, disguising it under the cover of the warning he has just issued against “assigning responsibility and guilt for war,” to do just that—offer an interpretation of his own in which he assigns responsibility and guilt for war (assigning it to certain factors, if not to certain people or peoples).  Thus, in his final paragraph, under cover of the sub-conclusion just given, he first writes:

            That does not mean we should not study and learn from the great war.  [I leave it up to each reader to decide just which war, if not all of them, he means by that last expression, in which he uses neither caps nor quotes.]   The lessons may not be in parallels [to the present] like those made by Mr. Hollande or Mr. Putin, but rather in reflection on intolerance, political expedience, tribal passions, ambition and all other forces that combined to lead Europe sleepwalking to self-destruction.

Then, to polish everything off, he ends his closing paragraph by cutting off all possible disagreement with him, at least all disagreement that is not willing to seem ungrateful and disrespectful towards all the dead of the Great War (if not all war, great as all war may be, from Mr. Schmemann’s perspective, as he expresses it in his editorial).  Hence, he ends his paragraph and his whole editorial by writing:  “And if nothing else, we owe it to our grandparents and great-grandparents and the millions of others who suffered and died on the battlefields of Europe not to forget their awesome sacrifice.”**

Might we wonder why Mr. Schmemann never mentions such things as economic greed for markets, and all the colonialism and empire-building that accompanies it, as causes of war—especially but by no means only World War I?  Might we wonder why he seems completely to disregard all such “forces,” which not only then but still now—in different disguises than back then, perhaps, yet still the very same forces—“combine to lead” to great, ongoing war, even to the point that, today, the very difference between war and peace has been erased?  Might we wonder why he instead confines himself to mentioning such things as “intolerance, political expedience, tribal passions, [and] ambitions”—all things that any decent, right-thinking person would of course reject out of hand?  Or would that just be degenerating into playing the blame game?

How about mentioning that one can hardly think of a better way precisely to forget the “awesome sacrifice” of those who did so sacrifice themselves “on the battlefields of Europe” during World War I than by going through the motions of “memorializing” them on the anniversary of the start (or at least what serves to mark the start) of their still ongoing immolation?  Dare one even so much as hint that those whose sacrifice of themselves was so “awesome” (though some might question the use of such dated Valley-girl speech in such a context) made such sacrifice of themselves entirely in vain, for no good reason at all—at least if we are to heed such witnesses as Wilfred Owen, who both suffered and died as a British soldier on the battlefields of Europe during World War I, or his friend Siegfried Sassoon, who managed to suffer but not die on those same battlefields during that same war?

Or would all that just be in poor taste?

*     *     *     *     *     *

Earlier in this morning’s Times, in a front-page article that ends on a later page, not far from the op-ed section from which “The Guns of August” comes, is the second piece that caught my eye and kindled my ire.  This one is presented as a news-piece, rather than an editorial, though the difference is often hard to tell, without the helpful clues provided by the paper’s section-labels.  It occurs under the byline of Sabrina Tavernise, a journalist no better known to me than Mr. Schmemann the editorialist, and addresses how a group of “public health experts,” as Ms. Tavernise identifies them, have taken exception to “a little-known cost-benefit calculation” that is “[b]uried deep in the federal government’s new tobacco regulations.”  The deeply buried, little-known calculation is one “that public health experts see as potentially poisonous:  the happiness quotient,” which “assumes that the benefits from reducing smoking—fewer early deaths and diseases of lungs and heart—have to be discounted by 70 percent to offset the loss in pleasure that smokers suffer when they give up their habit.”  For the public health experts at issue, who have collectively published a paper warning against such poisonous calculations, that idea itself does not have a very high happiness quotient of its own, apparently.  Various quotations and paraphrases from those experts follow.  Included is one attributed to “Kenneth E. Warner, one of the paper’s authors and a professor of public health [so:  a publicly certified public health expert] at the University of Michigan.”  Addressing the fact that most smokers begin smoking before they are 16 (or so we are told in the piece, at any rate—and who am I to contest it?), Professor Warner is quoted as saying:  “It would be ridiculous to admit that a 16-year-old kid who has no idea what addiction means and feels immortal is a rational decision-maker when it comes to smoking.”
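
To make the arithmetic of that “happiness quotient” concrete:  as reported, the calculation simply shrinks whatever health benefit is projected down to 30 percent of its gross value.  Here is a minimal sketch of that arithmetic; the dollar figure is purely hypothetical, chosen only for illustration, since the article gives none.

    # A minimal sketch of the reported "happiness quotient" arithmetic.
    # The gross-benefit figure is hypothetical, for illustration only;
    # only the 70 percent discount comes from the article quoted above.
    gross_benefit_billions = 100.0        # hypothetical projected health benefit
    happiness_discount = 0.70             # reported offset for smokers' lost pleasure
    net_benefit_billions = gross_benefit_billions * (1 - happiness_discount)
    print(f"{net_benefit_billions:.1f}")  # prints 30.0 -- only 30 cents on the dollar counted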

Well, I certainly neither can nor want to claim any special expertise pertaining to 16-year-olds as such.  However, I was one once myself.  I turned 17 in the middle of what was for me my one and only banner year during the days of my enforced K-12 schooling.  It was my senior year in high school.  (The school system that walled me in for all of my K-12 years skipped me over the 5th grade for some reason, so that I graduated from high school at 17, the youngest in my class).  The spring before, at the end of my junior year when I was only a few months into my 16th year, I had been elected student-body president of my high school—to my own utter surprise.  (I was never a member of any of the regularly “popular” segments of the student population.  However, I’m proud to say that I was a bit of a rabble-rouser when it came to saying yes and amen to such self-claimed “authorities” as my high school administration.  Such “speaking truth to power,” to use an anachronistic phrase, was what got me elected.)  In the fall of my senior year, thanks to my new-found popularity (or at least my election) I began to run with a better class of friends, thanks to which I gratefully began regular heavy alcohol use and, a bit later, at least as regular and as heavy cigarette-smoking.  Shortly after beginning to smoke, I’d worked myself up—actually, it took no real work on my part—to a three-pack-a-day habit, which I enthusiastically maintained for the next 24 years.

So though, as I said already, I neither can nor want to claim any special expertise on 16-year-olds, I can at least speak of my own experience as a 16-year-old, and as someone who began smoking heavily at that same young age. I can attest that in my own case it was not at all my ignorance of “what addiction means,” nor any feeling that I was “immortal,” that hooked me on nicotine.

I will certainly accept the judgment that at 16 I was not yet a “rational decision-maker.”  What is more, I hope that I am not one yet, and pray I never become one.  According to all the experts on “rational decision-making” with whom I have any familiarity, such decision-making as they call “rational” is precisely and only making a decision based on a “cost-benefit calculation.”  Neither my smoking nor my drinking was based in the slightest on any such calculation.  For any decision-making based on such cost-benefit analysis I had at 16, and hope I still have now, nothing but contempt.  It was precisely to say “goodbye to all that” (to borrow a line from Robert Graves, another British soldier who, like his friend Sassoon but unlike their mutual friend Owen, suffered but did not die on the battlefields of Europe during World War I) that I so gladly and enthusiastically devoted myself to alcohol and nicotine.

It was because booze and cigarettes had nothing whatever to do with such tripe that they had such tremendous appeal for me.  In what the “post-war” America into which I was born offered me as a “world,” what alcohol and nicotine offered me was the closest I ever found myself coming to any genuine life.

For that matter, the offer they made me was, in one crucially important sense, one on which they fully made good! They did so, however, in an altogether surprising, utterly unexpected way, as is true for the fulfillment of all such promises, in fact. They kept their promise only by finally, after a quarter of a century (but who’s counting the cost, except all the despicable “rational decision-makers” out there), bringing me to my “bottoming-out,” as the saying has it, and thereby at last bringing me to a point of decision, where I finally was given a real choice, the choice between saying yes to my own life as a whole, or turning my back on it.

That remark about “points of decision” brings me back, however, to what I was saying in the series of posts that this one interrupts.  So I will now end the interruption.

*     *     *     *     *     *

I must say—that felt good!  I do so like immediate gratification!  However, in my next post I intend to resume, and finish, my series of posts under the title “Pulling Out of the Traffic:  The Future of Culture.”

* I admire how adroitly Mr. Schmemann—suddenly and without acknowledgement, let alone any attempt at justification—moves here from talk of a supposedly specific single war, “World War I,” to talk simply of “war” as such, as a universal.  What a wonderfully useful multi-purpose tool that gives him!  Now he can, if he chooses, use it to warn us against any attempt to “assign responsibility and guilt” for such much more recent wars as the still on-going one in Afghanistan, or the one set off after September 11, 2001, in Iraq, or the current war in Gaza.  Who knows?  He might even want to use it to go against the current grain of making World War II, the one that won’t get to celebrate the centenary of its start till September of 2039, an attributable war, for which those to whom it is typically attributed still pay monetary reparations to those, or at least the heirs of those, who were made to pay the heaviest price for that subsequent, 2nd World War—heaviest, at least, as measured by what has recently been called “the happiness quotient” (on which see some of my own later remarks in this very blog post).
** I don’t know if Mr. Schmemann thereby means to exclude those who suffered and died elsewhere, away from the battlefields, and even altogether away from Europe, in World War I.  Or perhaps he is using the term battlefield in a broad, metaphorical sense, to include anywhere that the war at issue—whatever war that may be (and, perhaps, in whatever sense of war)—brought suffering and death.  And perhaps, for that matter, he is using “of Europe” to mean something more/other than “within the geographical confines of the continent of Europe.”

Published on August 7, 2014 at 10:55 pm

Pulling Out of the Traffic: The Future of Culture (4)

This is the fourth in a series of posts under the same general title.

*     *     *     *     *     *

All sorts of things transpire—but nothing any longer happens—that is, no more decisions fall . . .

– Martin Heidegger, Überlegungen IV (in GA 94), ¶219

 

. . . it’s neither here, nor elsewhere . . .

– Alain Badiou, Images du temps present (January 14, 2004)

 

I had one opportunity. I had to cut out all ties with the flattening, thoroughly corrupt world of culture where everyone, every single little upstart, was for sale, cut all my ties with the vacuous TV and newspaper world, sit down in a room and read in earnest, not contemporary literature but literature of the highest quality, and then write as if my life depended on it. For twenty years if need be.

But I couldn’t grasp the opportunity. I had a family . . . And I had a weakness in my character . . . that was so afraid of hurting others, which was so afraid of conflict and which was so afraid of not being liked that it could forgo all principles, all dreams, all opportunities, everything that smacked of truth, to prevent this happening.

I was a whore. This was the only suitable term.

– Karl Ove Knausgaard, My Struggle. Book Two: A Man in Love

 

Points of decision are crisis points. “Critical condition” in the medical sense is the condition of a patient who is at the decision point between survival and demise, where the body—with, it is to be hoped, the assistance of the medical staff—must marshal all its resources to sustain life, in the minimal, zoological sense. In the passage cited above, Knausgaard describes how he came to stand at a critical point of decision for or against life in the full, no longer merely biological sense of the term—the truly live-ly sense, we might say, in contrast to the rather deadening sense of bare survival.

Actually, that way of putting it, “a critical point of decision for or against life,” won’t quite work. Rather, Knausgaard describes coming to a point where he was faced with the need and opportunity at last actually and fully to make a decision in the first place and, by and in making it, to become truly alive at last. At that point he was faced with either “choosing to choose,” as Heidegger puts it in Being and Time, or else just going on going on, literally just surviving (“living-through” or “-over”) his own life, having already outlived himself, as it were, by letting his moment of opportunity slip by, in failing or refusing to decide at all.

The way that Alain Badiou puts it in his seminar on “images of the present times” (in the session of November 27, 2003) is that what he calls simply a “point” is “the moment where you make the world [as such and as a whole] manifest in the yes or the no of a decision. . . . It is the manifestation of the world in the figure of the decision.” He adds right away that “[o]ne is not always in the process of dealing with points, thank God!” Badiou, a self-proclaimed atheist proud of his atheistic family heritage, adds that ejaculation of thanks because, as he goes on to say: “It is terribly astringent, this imperative necessity that suddenly the totality of your life, your world, comes to be the eye of a needle of yes or no. Do I accept or do I refuse? That is a point.”

*    *     *     *     *     *

Early in the second of the six volumes of the long story of his “struggle”—Kampf in German, it is worth remembering, as in Hitler’s Mein Kampf—Knausgaard himself has already noted how challenging it is actually to have to decide to live one’s life, rather than just to keep on living through it. Toward the very beginning of that second volume—toward the very end of which comes the passage already cited—he writes: “Everyday life, with its duties and routines, was something I endured, not a thing I enjoyed, nor something that was meaningful or that made me happy.” The everyday life at issue for him during the time he is addressing was one of an at-home husband of an employed wife, and a father taking care of his young children while his wife was at work. Thus, it was a life filled with such things as washing floors and changing diapers. However, Knausgaard immediately tells us that his mere endurance rather than enjoyment of such a life “had nothing to do with a lack of desire to wash floors or change diapers.” It was not that he disdained such activities, or regarded them as beneath him, or anything else along such lines. It had nothing to do with all that, “but rather,” he continues, “with something more fundamental: the life around me was not meaningful. I always longed to be away from it. So the life I led was not my own.”

Knausgaard immediately goes on to tell us that his failure to make his everyday life his own was not for lack of effort on his part to do just that. In the process of telling us of his efforts, he also offers at least one good explanation for giving his massive, six-volume, autobiographical novel the title it bears. “I tried to make it mine,” he writes, “this was my struggle, because of course I wanted it . . .”

He loved his wife and his children, and he wanted to share his life with them all—a sharing, it is to be noted, that requires that one first have one’s life as one’s own to share. Thus, “I tried to make it mine,” he writes, “ . . . but I failed.” That failure was not for lack of effort but because: “The longing for something else undermined all my efforts.”

Conjoining the two passages, one from near the start of the book and one from near its very end, suggests that Knausgaard’s long struggle has been of the same sort as that of St. Augustine, as the latter depicted it in his Confessions. That is, the “struggle” at issue derives from the ongoing condition of not yet having made a real decision, one way or another. In such struggles, the struggle itself comes to an end only in and with one’s finally making up one’s mind, finally coming to a resolution, finally deciding oneself.

In the passage at the start of today’s post, coming more than 400 pages of “struggle” after the one just cited, Knausgaard gives the fact that he “had a family” as the first reason he “couldn’t grasp” the “one opportunity” that he says he had.  Nevertheless, what is really at issue cannot be grasped in terms of choosing between two equally possible but conflicting options, either living the life of a family man or living the life of an artist. Rather, what is at issue is something only Knausgaard’s subsequent remarks really bring to focus: what kept him from seizing his sole opportunity was nothing but himself. It was not the love of his family that hindered him. It was the love of his own comfort—or at least the desire not to disturb his own comfort by disturbing the comfort of others nearby.

I can identify! It was really not my love of my daughter that tripped me up when her childhood pet, Fluffy the guinea pig, died one day, causing me to tempt my own daughter to betray her love for her pet by rushing out to buy a replacement, as I recounted in my preceding post. I did love my daughter, to be sure, as I still do. But, as I already revealed when first discussing the episode, what tripped me up was really not my love for her. Rather, it was my discomfort with my own discomfort over her discomfort over Fluffy’s death. I betrayed myself out of love of my own comfort, not out of love for her. So my betrayal as such was not done out of any genuine love at all; it was done just out of fear—the fear of dis-comfort. That is how clinging to one’s precious comfort always manifests itself, in fact: in Knausgaard’s case no less than my own.

Now, there may truly be cases in which points of decision manifest as what we might call “Gauguin moments.” That is, there may really be cases in which, in order to make one’s life one’s own, one must indeed leave behind one’s family and one’s home and go off into some other, far country, as Gauguin did in the 19th century for the sake of his art (or as Abraham does in the Bible, though not, of course, for the sake of art).

What truly marks points as points of decision, however, is not a matter of the difference in content between two equally possible life-options (let alone the romantic grandiosity of the choices suggested by Gauguin’s, or Abraham’s, model). What defines them (including in such dramatic examples) is just that they are points at which one is confronted with the necessity at last truly to decide, that is, to resolve oneself—to say yes or no to one’s world, and one’s life in it, as a whole, as Badiou puts it.

*     *     *     *     *     *

The German for “moment” is Augenblick—literally, “the blink of an eye.” Heidegger likes to note that etymologically Blick, an ordinary German word for look, glance, view, or sight, is the same as Blitz, the German for lightning-flash, lightning-bolt. Points of decision, in the sense that I am using that expression, are moments that proffer what Heidegger calls an “Einblick in das, was ist,” an in-sight or illuminating in-flash into that which is. Points of decision are moments of illumination of what is there and has been there all along, though we are only now, in a flash, given the opportunity to see it. They are those points in our lives that offer us the chance to make our lives our own: to come fully alive ourselves—at last and for firsts.

In common with Blitzen in the everyday sense of lightning-bolts, moments or points of decisive in-sight/in-flash sometimes come accompanied by loud thunderclaps, or the equivalent. God may come down and talk to us as God did to Moses from the burning bush, or come in a whirlwind, or with bells and whistles. At least as often, however, moments or points of decision come whispering to us in a still, small voice, one easily and almost always drowned out by all the noise of the everyday traffic with which we everywhere surround ourselves (even if only in the space between our ears), for very fear of hearing that voice . . . and being discomfited by it.

Points of decision may break the surface of our everyday lives—those lives that, like Knausgaard, we endure without enjoying—as suddenly and dramatically as the white whale breaks the surface at the end of Melville’s Moby Dick. Or they may come upon us slowly, and catch us all unawares, such that we waken one morning and realize that for a long while now, we have not been in, say, Kansas any longer, but have no idea of just where and when we might have crossed the border into whatever very different place we are now.

All such differences make no difference, however. What counts is only that we come to a moment, a point of clarity, where we are struck, as though by a bolt of lightning, with the realization that we do indeed have a choice, but only one choice. We have a choice, not in the sense that we can pick between two different options, as we might pick between brands of cereal to buy for our breakfast. Rather, we have a choice in the sense that, like Knausgaard, we realize that we do indeed have one and only one opportunity, which we can either take, or fail to take. We are faced with the choice, as the Heidegger of Being and Time put it, of choosing to choose, choosing to have a choice to exercise, rather than continuing just to let ourselves live through our own lives, without ever having to live them. The choice is either to live, or just to go on living.

An acquaintance of mine once came to such a point of decision in his own life, and did indeed decide to make his life his own at that point. When asked about it, he says that up until that point it had always been as though his life was running on alongside him, while he was just sort of standing there observing it. What his moment of decision offered him, he says, was precisely the opportunity to “take part in” his own life, rather than just continue to let it run itself next to him. In a certain sense, he may have “had” a life up to that point, but only at that point did he come to live it himself.

*     *     *     *     *     *

In The Politics of Things (La politique des choses, first published in France in 2005 by Navarin, then in a slightly revised, updated edition in 2011 by Verdier) contemporary French philosopher Jean-Claude Milner traces the global processes driving inexorably, in what passes for a world in what passes for today, toward eliminating the very possibility of there being any genuine politics at all. That goal is being achieved above all through the development of ever more new techniques of “evaluation,” and the ubiquitous spread of processes of such evaluation into ever more new dimensions of individual and collective life. (In the United States, we might add, the deafening demand for incessant development and promulgation of ever more new ways and means of evaluating everything and everyone is typically coupled with equally incessant palaver about the supposed need for “accountability.”)

What Milner calls “the politics of things” aims at what he calls “government by things.” At issue is the longstanding global drive to substitute what is presented as the very voice of “things” themselves—that is, what is passed off for “reality,” and its supposed demands—for any such messy, uncertain politics or government as that which requires actual decisions by human beings.

Thus, for example, “market mechanisms” are supposed to dictate austerity according to one set of “experts,” or deficit spending according to another set. Whichever set of experts and whichever direction their winds may blow doesn’t really make any difference, however. What counts, as Milner says, is just that it be one set or another, and one direction or another.

That’s because, he observes in his fourth and final chapter, “Obedience or Liberties” (in French, “Obéissance ou libertés”), the real aim of the whole business is simply the former: sheer obedience—what is indeed captured in the English word “obeisance,” derived from the French term. He writes (page 59) that, “contrary to appearances, the government of things does not place prosperity at the summit of its preoccupations; that is only a means to its fundamental goal: the inert tranquility of bodies and souls.”

To achieve that goal, the government of things plays upon human fears—two above all: the fear of crime, and the fear of illness. Under the guise of “preventing” crime and/or illness, the government of things reduces us all to un-protesting subservience. We prove always willing to do just as we’re told, as unpleasant as we may find it, because we have let ourselves be convinced that it is all for the sake of preventing crime or illness.

I will offer two examples of my own.  The first is how we line up docilely in long queues in airports, take our shoes (and, if requested, even our clothes) off, subject ourselves to pat-downs and scan-ups, delays and even strip-searches—all because we are assured that otherwise we run the risk, however slight, of opening ourselves to dreaded terrorist attacks. My second example is how we readily subject ourselves to blood-tests, digital rectal examinations, breast X-rays, hormone treatments, and what not, all the tests, checks, and re-checks that our medical experts tell us are necessary to prevent such horrors as prostate or breast or colon or skin cancer, or whatever. We readily subject ourselves to all these intrusive procedures, only to be told sooner or later by the very same experts that new evidence has changed their collective expert thinking, and that we must now stop subjecting ourselves to the same evaluation procedures, in order to prevent equally undesirable outcomes. In either case, we do just as we’re told, without complaint.

We do as we’re told, whatever that may be at the moment, to prevent crime and/or illness because, as Milner writes (page 61): “Under the two figures of crime and illness, in effect one and the same fear achieves itself, that one which, according to Lucretius, gives birth to all superstition: the fear of death.” In fact, we are all so afraid of death and so subject to manipulation through that fear that we fall easy prey to the “charlatans,” as Milner appropriately calls them (on page 62), through whom the government of things seeks to universalize what amounts (page 64) to the belief in Santa Claus (Père Noël in France, and in Milner’s text)—a belief, finally, that “consists of supposing that in the last instance, whether in this world or in the next, the good are rewarded and the evil are punished.”

The government of things strives to make everyone believe in such a Santa Claus “with the same effect” that it fosters the development and universalization of techniques and procedures of evaluation: the effect of “planetary infantilization.” Furthermore:

One knows that no Santa Claus is complete without his whip. Indefectible solidarity of gentle evaluation and severe control [our American Santa making up his lists of who’s naughty and nice, then rewarding the latter with goodies and punishing the former with lumps of coal, for instance]! The child who does not act like a child [by being all innocent and obedient, sleeping all nice and snug in her bed, with visions of sugar-plums dancing away in her head] is punished; that is the rule [and we must all abide by the rules, mustn’t we?]. All discourse not conducive to infantilization will be punished by the evaluators, that is the constant. Among its effects, control also carries this one: the promise of infantilization and the initiation of transformation into a thing.

After all, the desideratum is a government not only of things, but also by things and for things (pace Lincoln—at least if we grant him the charity of thinking that’s not what he really meant all along).

In the closing paragraphs of his little book (pages 66-67), Milner issues a call for resistance and rebellion against all such pseudo-politics and pseudo-government of things, and in affirmation of a genuine politics. It is a call, quite simply, for there to be again decision.

“If the name of politics has any meaning,” Milner writes, “it resolutely opposes itself to the government of things.” In rejecting the pretense of a politics of things, real politics “supposes that the regime of generalized subordination can be put in suspense.” A politics worthy of the name can emerge only if at last an end is put to all the endless chatter about how we all need to show “respect for the law,” “respect for authority,” and the like, all of which is just code for doing what we’re told.

Such suspension of generalized subordination and end of infantilizing chatter may not last long: “Maybe only for an instant . . .” But that instant, that moment, that blink of an eye, “that’s already enough, if that instant is one of decision. What’s needed is that there again be decision.”

That’s all that’s needed, but that’s everything. As Milner writes, “politics doesn’t merit the name unless it combats the spirit of subordination. One doesn’t demand that everyone be generous, or fight for the liberties of everyone; it is quite enough if each fights for her own freedom.” The return of a genuine politics requires that we stop relinquishing our own choices to “the order of things.” It requires, instead, “[t]hat at times we decide for ourselves . . .”

There is no future of politics otherwise. Nor, without decision, is there any future of culture in any form, be it political, artistic, philosophical, or whatever. But that just means that, without decision, there really is no future at all.

*     *     *     *     *     *

I intend my next post to be the last in this current series on “Pulling Out of the Traffic: The Future of Culture.”

Pulling Out of the Traffic: The Future of Culture (3)

This is the third in a series of posts under the same general title.

*     *     *     *     *     *

Getting things to run smoothly, working to achieve a lack of resistance, this is the antithesis of art’s essence, it is the antithesis of wisdom, which is based on restricting or being restricted. So the question is: what do you choose? Movement, which is close to life, or the area beyond movement, which is where art is located, but also, in a certain sense, death?

– Karl Ove Knausgaard, My Struggle. Book Two: A Man in Love*

 

Just where is art “located”?

That interrogative sentence may be grammatically well formed, but the question it tries to pose may not be. One thing (one of many, really) on which Alain Badiou and Martin Heidegger are in agreement is that it is more nearly art that does the locating, rather than itself being located. The work of art is not, properly regarded, at some place, according to them both. Rather, the work of art is itself a place—and a place-ment—in the strongest sense.

Plato somewhere mentions the common case of the child to whom some adult holds out two closed hands, in each of which is a desirable gift, and asks the child to choose. Any self-respecting child in such a situation will, of course, want both. Plato uses that as a metaphor for the philosopher. The philosopher, he says, is the child who, made such an offer of two good things and told to choose between them, always begs for both.

As deficient a philosopher as I may be in other regards, I am still a good enough one to meet at least that particular Platonic standard—which I would like to call the standard of the essential childishness** of philosophy. Just so, in the present case I want to have both my Knausgaard and my Badiou (and my Heidegger!) too.

In the passage I quoted above, Knausgaard speaks of art itself being located somewhere. He locates it in a certain “area.” That is the area—or to show, as usual, my own Heideggerian underwear (“foundation garments”), what might better be called the region—“beyond movement.” That same area/region is also where one is to find, Knausgaard says, “death,” at least “in a certain sense.” That last phrase—at any rate, in the English translation—can be read, I want childishly to suggest, to apply both to a certain sense of death and to a certain sense of location. The death in the vicinity of which art is located is not just any old sort of death, but only a certain sort of death. At the same time, art and death themselves can be located in one another’s vicinity not in just any old sort of location (or any old sort of vicinity, for that matter), but only in a certain sort of location.

The certain sort of place or location in which a certain sort of death or end of life lies near to art is like no place at all in the entire world (which itself is only in a certain sense world) of our day (which is only in a certain sense day). In our globally collective present times—which are both present and times only in a certain sense—neither art nor death can be located at all. In our present times, there is neither art nor death.

*     *     *     *     *     *

Nor is the area, region, or realm in which art and death come into one another’s vicinity any place we can reach from our own certain sort of day’s certain sort of area, region, or realm, even though the latter is all-inclusive, both geographically and socially speaking—all-inclusive, that is, with regard both to such places as the states of Afghanistan, Iraq, Syria, the Sudan, the Ukraine, etc., and with regard to such places as the states of poverty, intolerance, illegal/undocumented-immigrant-hood, etc. (to allude to some remarks I made in my preceding post).

The place where art and death draw near to one another?

You can’t get there from here.

The only places you can get to from here, that is, from where we are today, in these present times, are such places as points on the globe. Or, we could also add, points “off-globe,” in interstellar space.

Most of us, of course, will never be able to get to any extra-global places from here, since most of us are nowhere near rich enough to pay for a seat on one of the commercially driven spaceships now being readied for a very few of us to go to some such places. But that doesn’t matter. It doesn’t affect the fact that such places can still be reached from here by some of us, even if not by 99.9% of us. Nor does it affect the status of all of us actual or only logically possible potential travelers universally, insofar as we all without exception count as citizens of democracies, actual or even only logically possible, where everyone is equal.

That’s because however rich or however poor any of us may be, the only places any of us at all can get to from where we are now are, anyway, places such that it really doesn’t make any difference whether we are there or somewhere else. They are all alike places the place of which doesn’t matter. After all, if you’ve seen one McDonald’s, you’ve seen them all.

That indifference of the difference in go-to-able places stems from the underlying basic fact that the only sense of place for which our world today makes any room—the only sort of place that has any place in such a place—is that of what can be placed at some point in the grid of spatial coordinates that applies indifferently to any and every place alike in the one and only, all-inclusive cosmic space of physics and the other sciences (which are never guilty of childishness, by the way).

Thus, in the world of our present times today, what in my preceding post I called the “flattening” that transpires with the concept of war also transpires with the concept of space. Indeed, that same flattening also transpires with regard to the concept of time, as it does yet again with the concept of a person and even, finally, with the concept of an event.

Badiou is good on that.  So is Heidegger. Let us choose both.

*     *     *     *     *     *

In the third session—held on December 4, 2002—of his three-year seminar on “images of the present times,” Badiou begins by addressing how the movement of reactionary endeavor is always toward “the installation of the idea that the world is not transformable, that the world is as it is, and that it’s fitting to accommodate oneself to it.” That can take the form either of presenting the world as never changing, or of presenting it as ever changing—that is, as changing constantly. In the former case all effort to change the world is futile. In the latter case, for fear of falling behind one cannot ever dare even to pause long enough to take stock of what things even can be changed, let alone should be.  So either, like Zeno’s arrow, one can never take flight at all, no matter how fast one flies. Or, like a certain Rabbit, one must always just keep on running, running, running . . . to go nowhere.

All that perfectly fits what Badiou goes on to call the “general tendency of the present times,” which is “manifestly the dissolution of the present in a general regime that is that of communication, [in the sense, standard today,] of circulation”—just as money and the merchandise it is used to buy must be kept constantly in circulation to keep things running smoothly everywhere today.

Thus, the “general tendency” at issue is toward the reduction of time as such to a never-present present. At issue is the reduction of the “present” (itself taken to define time, as Aristotle said so long ago) to what is, in effect, no particular time, but just any old time. In such times as ours, any given time is interchangeable with any other—just like the money that, as the old cliché rightly has it, time today is. Time today is reduced to what, in effect, has no particular time—“has no particular time,” both in the sense that no moment of today’s time differs essentially from any other, and in the sense that time today grants or gives no time, no time to pause and draw aside, no time one can “bide.”

Conjoined with that reduction of time to what has no particular time, goes the reduction of place—Badiou goes on to observe a little later in the same session—to what has no particular place. He makes that observation in the context, specifically, of a discussion of Rimbaud and the colonial enterprise of Rimbaud’s day, but what he says applies no less to every day since Rimbaud’s day, even if the nature and status of the imperial enterprise itself has undergone considerable cosmetic do-over in the meantime.

“The imperial abstraction,” Badiou remarks, “is to transform the here [ici] into an it doesn’t matter where [en un n’importe où].” He gives an explanatory example clear to everyone (it doesn’t matter who): “That’s a feeling one experiences in the most anguishing manner when one is in an airport: you are sure you’re in an airport, but you could just as well be in Rio de Janeiro as in Paris or in Singapore. The airport is the absolute doesn’t matter where.” Just a bit later he adds: “The contemporary savagery, the contemporary barbarism, is a barbarism that treats place [lieu] as if it is not a place. That treats place as if that place was nothing but a point in space.”

In contrast, for Badiou, the work of art is itself a truly singular place, not just any old place at all. Indeed, art as such is one of his standard four ways in which truth itself takes place. The other three ways, to repeat what I’ve said in earlier posts, are science, love, and politics. All four are, as it were, place-makers for truth. They are truth’s own em-place-ments, literally speaking.

In more than one place of his own, Heidegger says the same thing, at least about art, place, and space.  It’s become a sort of Heideggerian commonplace about place, in fact. Nevertheless, I will briefly cite two places he says such things. The first is his lecture “On the Origin of the Work of Art,” first delivered in 1935. In that lecture he says emphatically that works of art as such—which means insofar as they are still “at work as art-works,” we might say—are not things that are located at certain places (such as in museums where paintings are hung, like corpses on nooses, or in the cities where the ruins of dead works of architecture can be visited still today, like the bones of dead ancestors in reliquaries). Rather, works of art are themselves places—places where whatever does take place, from people to rivers and gods to crickets, is allowed to take place. Thus, to use just one of Heidegger’s own examples from that lecture, the battle in ancient Greece between the old Minoan gods and the new Attic ones of the northern invaders, who came to define the very concept of “the Greeks” for us, is itself something that takes place in Sophocles’ Antigone, rather than being something that once took place somewhere else, then just got “represented” in Sophocles’ tragic drama. The Antigone itself is the battlefield, and the fighting of the battle takes place on that very battlefield.

My second Heideggerian reference is to something he wrote more than thirty years later, a short piece from his later works called “Art and Space” (“Die Kunst und der Raum”), which was originally published only in 1969, just seven years before Heidegger’s death. In it, Heidegger explicitly draws a strong, sharp contrast between the cosmic, place-less space of the physicists, on the one hand, and the place-scaped space, we might well call it, of the artwork—specifically, in this essay, the work of sculpture, which is itself a matter of spacing as the literal em-bodi-ment, the making into a body, of truth.  As one can easily see, at that point in making his point about spatial points, Heidegger may as well be Badiou. They both occupy the same space—which tells you the space they share is no longer Greek, by the way, or at least no longer Aristotelian.

*     *     *     *     *     *

In the same session of December 4, 2002, already mentioned, Badiou remarks that Rimbaud, in poems written during his time as an enlistee in the Dutch foreign legion, referred to himself as a “conscript of good will,” which is to say one who conducted himself as befits a willing conscript. Badiou says that Rimbaud’s usage of the expression good will is “exact,” in the strictly Kantian sense of good will, which Badiou also labels “the good democratic will.” That is, Rimbaud is a “conscript of good will” insofar as he is a willing “soldier of the rights of man, of civilization,” as Badiou puts it, and willing to help carry those rights and that civilization to those who do not yet share in its blessings. (Just the kind of conscript of good will George W. Bush still needed well over a century later!)

As Badiou notes, Rimbaud also coupled being such a good democratic conscript with serving what Rimbaud himself called a “ferocious philosophy.” According to Badiou, that means “a philosophy of aggression and of the in-differentiation of place,” that is, of the washing out of all differentiation between one place and another.

One should surely add: between one person and another, too!  After all, everyone (no matter who), everywhere (no matter where), at every time (no matter when) is entitled to the “universal rights of man” (please forgive the sexist language of the standard Enlightenment phrase). Furthermore, those rights boil down, essentially, to being allowed to vote (no matter for whom) in free and open elections, and being free to live out one’s life however one chooses (no matter how, so long as it doesn’t hurt anyone else).

Who cares if the elections we vote in and the lives we live out are all equally meaningless? All that finally matters is that all our votes get counted equally, and all our lives lived equally out.

*     *     *     *     *     *

Once again, Heidegger also points to such a flattening out of the notion of the human, to go with the flattening out of the notions of time and space. And once again he does so in more than one place.  This time, I will cite just one brief passage. It is one I read just recently, alongside Badiou’s seminar. The passage in question is from “Zu Ereignis III,” one of the six manuscripts about the “thinking” (Denken) of “the event” (das Ereignis) recently published together as Zum Ereignis Denken (volumes 73.1 & 73.2 of Heidegger’s Gesamtausgabe). This third of the six manuscripts is from the same Nazi decade as “On the Origin of the Work of Art,” cited above.

In ¶58 of “Zu Ereignis III” (GA 73.1, page 375) Heidegger discusses “the singularity [Einzigkeit] of Dasein,” which is to say the singularity of that being each of us human beings is given and called to be—however many of us may fail at that task, however often. Such singularity, he writes, is “precisely not individuality [Einzelnheit]—but also not the empty generality of what’s common.”

The terminology—which I have rendered as “singularity” and “individuality”—is not the crucial thing. What matters is the distinction itself, the one being marked by that terminology. That is the same distinction Badiou calls to our attention in his discussion of Rimbaud: the difference between what we might call two different sorts of “one of a kind.” On the one hand, there is what is “one of a kind” in the usual sense of that expression, where it means something that has no like, something truly unique, something altogether irreplaceable by anything else. That is the sense in which, for example, Muhammad Ali can rightfully be said to be “one of a kind.” On the other hand, there is what we might call “one of a kind” in a minimizing, even pejorative sense. In that sense, “one of a kind” would mean: just one of any number of possible instances of some given “kind,” that is, some common or general class of things of which any one member of that class could serve just as well as any other as an example, since they are all equal, all interchangeable with one another, as instances of the kind or class at issue.

Take Fluffy as an example.

Fluffy was my daughter’s pet guinea pig when she (my daughter—Fluffy was a “he”) was a child. One day Fluffy went belly-up in his cage. My daughter was, of course, troubled by Fluffy’s passing. She cried. That, in turn, troubled me, her father. Utterly lacking in the pertinent skillful means myself, at least at that particular time in that particular situation, I attempted to console my daughter by telling her it was all right, we could just go to the pet store and get her another guinea pig to have as a pet. Her voice and expression full of the disgust and contempt such a wholly clueless attempt to “fix” everything warranted, she replied indignantly that she did not want any “other” guinea pig—she wanted Fluffy.

For me, Fluffy was just in a certain sense one of a kind, the sense of being no more than one instance of the general kind, guinea pig. For my daughter, Fluffy was—well, Fluffy, who was one of a kind.

Now, there is absolutely nothing wrong with guinea pigs, or with liking them as such. And if all there is to it is that you happen to like guinea pigs just because they’re guinea pigs, then it’s no big deal if your guinea pig of the moment dies on you, so long as you have access to others. All you need do is go out and get another guinea pig, any other guinea pig will do, since being guinea pigs is what you like about them all equally.

However, if you make the mistake of coming to love whichever guinea pig fate may have sent your way at some given time, and your beloved guinea pig dies on you, then things are not so easy. Indeed, should such a thing happen, should your beloved guinea pig pull a Fluffy on you and go belly up—as, of course, it eventually will, unless your beloved guinea pig just happens to outlive you, like the last coat a given tailor cuts might well outlive the tailor that cuts it, to borrow another example from Plato—then you will find yourself, in fact, at a point of decision.

At that point, you may decide to remain true to your love, with all the pain that entails under the circumstances—since it does indeed hurt to lose someone you love, as my daughter could testify it hurt to lose Fluffy. Or you may decide to betray your love and seek your own comfort by rushing out to find some replacement for the irreplaceable—as I shamefully encouraged my daughter to do, in my rush to escape my own discomfort over her pain at the loss of that same Fluffy. You can choose, that is, to numb your love, and thereby deny it. Or you can choose to feel it in all its pain, and thereby affirm it.

At such points, the decision is up to you. That’s what defines them.

*     *     *     *     *     *

My next post, continuing this series, will start at the same point, with points of decision.

 

* Translated by Don Bartlett (New York: Farrar, Straus and Giroux, 2013), page 506 of the e-book edition.

** The right term! Presuming to display charity, some might try to substitute “child-like” for “child-ish.” But—as is true for so much charity—the caritas in such charity, however well intentioned it may be, is utterly lacking in skillful means. Endeavoring to help, it actually harms. From the point of view of what passes for a world in what passes for today, philosophy can only manifest as an enterprise that it is utterly childish, not just childlike, to pursue; and the dignity of philosophy demands that its true rank in relationship to our “present times,” as Badiou puts it, be acknowledged and granted. To pursue philosophy today, a day of such times, is utterly childish: Philosophy is really useless, something no serious adult can afford to waste any time on.

Published on July 18, 2014 at 8:51 pm

Pulling Out of the Traffic: The Future of Culture (2)

This is the second in a series of posts under the same general title.

*     *     *     *     *     *

In the New York Times for Thursday, June 26 of this year—which was also the day I put up the post to which this one is the sequel—there was a news-piece by Mark Mazzetti under the headline “Use of Drones for Killings Risks a War Without End, Panel Concludes in Report.” The report at issue was one set to be released later that same morning by the Stimson Center, “a nonpartisan Washington think tank.” According to Mr. Mazzetti’s opening line the gist of the report was that “[t]he Obama administration’s embrace of targeted killings using armed drones risks putting the United States on a ‘slippery slope’ into perpetual war and sets a dangerous precedent for lethal operations that other countries might adopt in the future.” Later in the article, Mr. Mazzetti writes that the bipartisan panel producing the report “reserves the bulk of its criticism for how two successive American presidents have conducted a ‘long-term killing program based on secret rationales,’ and on how too little thought has been given to what consequences might be spawned by this new way of waging war.” For example, the panel asked, suppose that Russia were to unleash armed drones in the Ukraine to kill those they claimed to have identified as “anti-Russian terrorists” on the basis of intelligence they refused to disclose for what they asserted to be issues of national security. “In such circumstances,” the panel asks in the citation with which Mr. Mazzetti ends his piece, “how could the United States credibly condemn Russian targeted killings?”

Neither Mr. Mazzetti nor—by his account at least—the panel responsible for the Stimson Center report bothers to ask why, “in such circumstances,” the United States would want to “condemn” Russia for such “targeted killings” on such “secret rationales.” It is just taken for granted that the United States would indeed want to condemn any such action on the Russians’ part.

That is because, after all, the Russians are among the enemies the United States must defend itself against today to maintain what, under the first President Bush, used to be called “the New World Order”—the order that descended by American grace over the whole globe after the “Cold War,” which itself characterized the post-war period following the end of World War II. Today is still just another day in the current “post post-war” period that set in after the end of the Cold War—as Alain Badiou nicely put it in 2002-2003, during the second year of his three-year monthly seminar on Images of the Present Times, just recently published in France as Le Seminaire: Images du temps present: 2001-2004 (Librairie Arthème Fayard, 2014).

It is really far too late on such a post post-war day as today to begin worrying, as the Stimson panel penning the report at issue appears to have begun worrying, about entering upon the “slippery slope” that panel espies, the one that slides so easily into “perpetual war.” For one thing, what’s called the Cold War was itself, after all, still war, as the name says. It was still war, just “in another form,” to twist a bit a famous line from Clausewitz. Cold as that war may have been, it was still but a slice of the same slope down which the whole world had been sliding in the heat of World War II, which was itself just a continuation of the slide into which the world had first swiftly slipped at the beginning of World War I.

Let us even go so far as to assume that the great, long, European “peace” that ran from the end of the Franco-Prussian War in 1871 all the way down to 1914, one hundred years ago this summer, when it was suddenly interrupted by a shot from a Serbian “terrorist” in Sarajevo, was peace of a genuine sort, and not just the calm of the proverbial gathering storm. Even under that assumption, peace has never really been restored to the world since the guns began firing in August of that same year, 1914, if the truth is to be told. Instead, the most that has happened is that, since then, from time to time and in one place or another there has occurred a temporary, local absence of “hot” war, in the sense of a clash of armed forces or the like. The guns have just stopped firing for a while sometimes in some places—in some times and places for a longer while than in others.

So, for example, even today, a quarter of a century after the end of the post-war period and the beginning of the post post-war one, the western and west-central European nations have remained places where “peace,” in the minimal, minimizing sense of the mere absence of “active hostilities,” has prevailed. Of course, elsewhere, even elsewhere in Europe—for example, in that part of Europe that during part of the time-span at issue was Yugoslavia—plenty of active hostilities have broken out. In many such cases (including the case of what once had been Yugoslavia) those episodes have often and popularly been called “wars,” of course.

Then, too, there have been, as there still are, such varied, apparently interminable enterprises as what Lyndon Johnson labeled America’s “war on poverty,” or what Richard Nixon labeled the American “war on drugs.” In cases of that sort, it would seem to be clear that we must take talk of “war” to be no more than metaphorical, in contrast to cases such as that of, say, America’s still ongoing “war in Afghanistan,” where the word would still seem to carry its supposedly literal meaning.

Another of the wars of the latter, “literal” sort is the one that began with the American invasion of Iraq on March 20, 2003. As it turned out, that particular war broke out right in the middle of the second year of Badiou’s seminar on “images of the present times.”  In fact, the hostilities in Iraq started right in the middle of some sessions of his seminar in which Badiou happened to be addressing the whole issue of “war” today, during our “post post-war” period—as though tailor-made for his purposes.

In his session of February 26, 2003, less than a month before the start of hostilities in Iraq, Badiou had begun discussing what war has become today, in these present times. He resumed his discussion at the session of March 26—following a special session on March 12, 2003, that consisted of a public conversation between Badiou and the French theatre director, Lacanian psychoanalyst, and philosopher François Regnault. President George W. Bush had meanwhile unleashed the American invasion of Iraq.

In his session of February 26, 2003, Badiou had maintained that in the times before these present times—that is, in the post-war period, the period of the Cold War—the very distinction between war and peace had become completely blurred. Up until the end of World War II, he writes, the term war was used to mark an “exceptional” experience. War was an “exception” in three interconnected dimensions at once: “a spatial exception, a temporal exception and also a new form of community, a singular sharing, which is the sharing of the present,” that present defined as that of “the war” itself.

We might capture what Badiou is pointing to by saying that, up till the end of World War II and the start of the Cold War, war was truly a punctuating experience. That is, it was indeed an experience in relation to which it did make clear and immediate sense to all those who had in any way shared in that experience to talk of “before” and “after.” It also made sense to distinguish between “the front” and “back home.” Some things happened “at the front,” and some “back home”; some things happened “before the war,” and some only “after the war.” And war itself, whether at the front or back home, and despite the vast difference between the two, was a shared experience that brought those who shared it together in a new way.

During the Cold War, however, all that changed, and the very boundaries of war—where it was, when it was, and who shared in it—became blurred. Badiou himself uses the example of the “war on terror” (as George W. Bush, who declared that war, was wont to call it, soon accustoming us all to doing so) that is still ongoing, with no end in sight. The war on terror is no one, single war at all, Badiou points out. Instead, the term is used as a cover-all for a variety of military “interventions” of one sort or another on the part of America and—when it can muster some support from others—its allies of the occasion. Indeed, the term can be and often is easily stretched to cover not only the invasions of Afghanistan and Iraq under the second President Bush but also the Gulf War unleashed against the same Iraq under the first President Bush, even before the war on terror was officially declared—and so on, up to and including the ever-growing use of armed drones to kill America’s enemies wherever they may be lurking (even if they are Americans themselves, though so far—at least so far as we, “the people,” know—only if those targeted Americans could be caught outside the homeland).

So in our post post-war times there is an erasure of the boundary between war and peace, a sort of becoming temporally, spatially, and communally all-encompassing—we might well say a “going global”—of the general condition of war. Coupled with that globalization of the state of war there also occurs, as it were, the multiplication of wars, in the plural: a sort of dissemination of war into ever new locations involving ever new aspects of communal life. Wars just keep on popping up in more and more places, both geographically and socially: the war in Afghanistan, the war in Iraq (just recently brought back again—assuming it went away for a while—by popular demand, thanks to ISIS), the war in Syria, the wars in Sudan, Nigeria, Myanmar, Kosovo, the Ukraine, or wherever, as well as the wars against poverty, drugs, cancer, “undocumented”/“illegal” immigration, illiteracy, intolerance, or whatever.

At the same time, this globalization of war and proliferation of wars is also inseparable from what we might call war’s confinement, or even its quarantine. By that I mean the drive to ensure that wars, wherever and against whatever or whomever they may be waged, not be allowed to disrupt, damage, or affect in any significantly negative way the ongoing pursuit of business as usual among those who do the war-waging. (The most egregious example is probably President George W. Bush in effect declaring it unpatriotic for American consumers not to keep on consuming liberally—including taking their vacations and driving all over lickety-split—in order to keep the American economy humming along properly while American military might was shocking and awing the world in Baghdad and the rest of Iraq.)

Thus—as Badiou puts it in his session of March 26, 2003—in league with the expansion of war into global presence and the random proliferation of wars goes a movement whereby simultaneously, among the wagers of war, “[e]verything is subordinated to a sort of essential introversion.” That is a reference above all, of course, to America, the only superpower that remained once one could no longer go back to the USSR. On the one hand, as both Badiou and the Stimson report with which I began this post indicate, the American government does not hesitate to claim the right to “intervene” anywhere in the world that it perceives its “national interests” to be at stake, no matter where that may be. It claims for itself the right to make such interventions whenever, against whomever, and by whatever means it judges to be best, and irrespective of other nations’ claims to sovereignty—even, if need be, against the wishes of the entire “international community” as a whole (assuming there really is any such thing). Yet at the same time such interventionism is coupled essentially with a growing American tendency toward “isolationism.”

This counter-intuitive but very real American conjunction of interventionism and isolationism is closely connected, as Badiou also points out, to the ongoing American attempt to come as close as possible to the ultimate goal of “zero mortality” on the American side, whenever, wherever, against whomever, and however it does conduct military interventions under the umbrella of the claimed defense of its national interests, as it perceives them, on whatever evidence it judges adequate. That is best represented, no doubt, by the aforementioned increasing American reliance on using unmanned, armed drones to strike at its enemies, a reliance that began under the Bush administration and has grown exponentially under the Obama administration.

Furthermore, the drive toward zero war-wager mortality is coupled, in turn, with another phenomenon Badiou addresses—namely, what we might call the steady escalation of sensitivity to offense. The more American power approaches what Badiou nicely calls “incommensurability,” and the nearer it comes to achieving the zero American mortality that goes with it, the less it is able to tolerate even the slightest slight, as it were. Rather, in such an affair—as he says in the session of March 26, shortly after the American attack on Iraq under the second President Bush—“where what is at stake is the representation of an unlimited power, the slightest obstacle creates a problem.” Any American death at all, or any remaining resistance, even “the most feeble, the worst armed, . . . the most disorganized,” is “in position to inflict damage to the imperious power that it faces.” As there is to be zero American mortality, so is there to be zero resistance (of whatever origin, including on the part of Americans themselves).

*     *     *     *     *     *

All these interlocked features belong to what we have come to call “war” today. Or rather, the situation today is really one in which the very notion of war has come to be entirely flattened out, as I would put it. War itself has ceased to be any distinctive event—anything “momentous,” properly speaking: marking out a clear division between a “before” and an “after,” such that we might even speak of the “pre-war” world and the “post-war” one. That is what Badiou means by saying that we live today in the “post post-war” period. It is a strange “period” indeed, since there is, in truth, no “point” at all to it—either in the sense of any clearly defined limit, or in the sense of any clearly defined goal, I might add—which is what I had in mind in my earlier remark that war today has ceased to be any truly “punctuating” experience.

In one of my posts quite a while ago, I wrote that, in line with contemporary Italian philosopher Giorgio Agamben’s thought about sovereignty and subjectivity, an insightful hyperbole might be to say that it had been necessary to defeat the Nazis in World War II in order that the camp-system the Nazis perfected not be confined to Nazi-occupied territory, but could go global—so the whole world could become a camp, in effect, and everyone everywhere made a camp inmate subject to being blown away by the winds of sovereignty gusting wherever they list.

Well, in the same way it might be suggested that the whole of the long period of preparation for, and then eventual outbreak and fighting of, the (“two”) World War(s), as well as the whole post-war period of Cold War that followed, was just the long ramp-up necessary for the true going global of war in our post post-war period.  That is, the whole of the unbelievably bloody 20th century, ushered in by the whole of the 19th, back at least to the French Revolution of the end of the 18th, can be seen as nothing but the dawning of the new, ever-recurring day of our present post post-war, unpunctuated period.

Indeed, war today has become so enveloping spatially, temporally, and communally, all three, that it is no longer even perceivable as such, except and unless it breaks out in some ripple of resistance somewhere, by some inexplicable means. Whenever and wherever the power into whose hands the waging of war has been delivered suffers such an offense against it, from whomever it may come and no matter how slight the slight, the only conceivably appropriate response is, as the old post-war saying had it, to “nuke ‘em.”

Furthermore, since offenses are in the feelings of the offended, none of us, “the people,” has any assurance at any time that we will not, even altogether without our knowingly having had any such intent, be found to have done something, God knows what, to offend. If we do, then we may also come to be among those getting nuked (or at least deserving to be)—probably by an armed drone (maybe one pretending to be delivering us our latest Amazon.com order).

*     *     *     *     *     *

By now, even the most patient among my readers may be wondering what this whole post, devoted as it is to discussion of the meaning of “war” today, has to do with “the future of culture,” which is supposed to be the unifying topic in the entire current series of posts of which this one is supposed to be the second. That will only become evident as I proceed with the series—though perhaps it will not become fully evident until the whole series draws together at its close. At any rate, I will be continuing the series in my next post.

Pulling Out of the Traffic: The Future of Culture (1)

Is there any future for culture? That is the question with which I ended my previous post, more than three months ago now. It is where I want to resume, after that long break.

To get right to the point, the answer to that question is no, there is no future for culture. The only future that what presents itself today as our global reality permits us is the endless continuation of the circulation of commodities, a pseudo-future that precludes all cultural production. We can only expect more of the same, that is, yet ever more new commodities, newly circulating. Culture today is impossible.

Accordingly, the creation of a future for culture—of a future itself—can today be only an impossible possibility. Since cultural production is no longer possible today, any cultural product that comes upon us must come to us on some other day than this one, this endless day of ceaseless commodity production and circulation.

Culture is no commodity, and no commodity is a cultural product.

*     *     *     *     *     *

Martin Heidegger’s so-called “Schwarze Hefte,” the “Black Notebooks” he kept from the period of his Nazi involvement early in the 1930s all the way down to the beginning of the 1970s, near the end of his life, have begun to appear in German in the Gesamtausgabe (GA), or Complete Edition, of his works. So far, three volumes containing fifteen notebooks labeled Überlegungen (Reflections) have been issued (GA 94-96).

In a note early in “Überlegungen IV,” written in the 1930s after Heidegger’s controversial year as Rector under the Nazis at the University of Freiburg, from 1933 to 1934, had ended, Heidegger writes (GA 94, page 210): “The ‘world’ is out of joint; there is no world any more, more truly said: there never was yet world. We are standing only at its preparation.” He then begins the immediately following note with the italicized remark that “[w]ith the gods, we have also lost the world.”

Where there is no world, there is no culture; and where no culture, no world. Nor is there anything of gods or the divine in such an indifferent, placeless place.

(What all that may have to do with Nazism, and with Heidegger’s relationship to it, I will leave for subsequent reflections of my own sometime somewhere.)

*     *     *     *     *     *

Norwegian author Karl Ove Knausgaard has already come to count as something of a sensation of 21st century literature—if there is any such thing as literature any longer, which is a question with which Knausgaard is himself concerned—with the publication of his multi-volume autobiographical novel entitled My Struggle. Particularly in the original Norwegian, Min Kamp, that title was immediately controversial because of its obvious allusion to Hitler’s notorious Mein Kampf. Despite the expectations such a title might inspire, there certainly seems to be nothing of Nazism, anti-Semitism, Fascism, or the like in Knausgaard’s text. At least no critics I know of have suggested that there is, nor can I personally detect anything of the sort in what I’ve read of it so far—which admittedly is not that much, relatively speaking, since I am still only midway through the second of the six volumes of the work.

At one point well along in the first volume of My Struggle Knausgaard remarks on the common contemporary feeling that (as he puts it on page 221) “the future does not exist.” He explains that he means the feeling that what lies ahead for us today is “only more of the same,” never anything really new or surprising any more, vibrant with possibility. What that feeling indicates, he says, also “means that all utopias are meaningless.” However, he continues: “Literature has always been related to utopia, so when the utopia loses meaning, so does literature.” He suggests that the literary enterprise, or at least his own literary enterprise, has always been an endeavor “to combat fiction with fiction.” That is, by conjuring up a “no-place”—which is the literal meaning of the word utopia—literature aims to put the lie to what presents itself as being present, but is really no more than a sort of convenient lie or confabulation—something the proverbial powers that be, whoever or whatever those powers themselves may really be at any given time, would have us all take to be “reality” itself, rather than see the very different real reality behind such mere appearances. Telling tales that tell the tale on the tales we are told (often even telling them to ourselves): that is the work of literature, as I take Knausgaard to be articulating it.

What that which passes for “reality” today kept telling Knausgaard himself he “ought to do,” he goes on to say in the passage at issue, “was to affirm what existed, affirm the state of things as they are, in other words, revel in the world outside instead of searching for a way out, for in that way I would undoubtedly have a better life.” Surely that is indeed what he “ought” to do, instead of pursuing all this literary nonsense that leads straight to nowhere; “but,” he says, “I couldn’t do it, I couldn’t, something had congealed inside me, and although it was essentialist, that is, outmoded and, furthermore, romantic, I could not get past it, for the simple reason that it had not only been thought but also experienced, in the sudden states of clear-sightedness that everyone must know, where for a few seconds you catch sight of another world from the one you were in only a moment earlier, where the world seems to step forward and show itself for a brief glimpse before reverting and leaving everything as before.”

*     *     *     *     *     *

Perhaps the most shocking thing about our present age is that today we can no longer be shocked by anything. Such moments as Knausgaard describes, when we are suddenly shocked out of the somnambulism of our daily conduct of business as usual, where there is only ever more of the same old same old—moments when we are brought alive in the world again—are perhaps no longer possible for us. At any rate, if even a glimmer of such an impossible possibility dares show itself to us, then the dark that wants to be taken for the real rushes in to close back over it again immediately.

That is just what it does for Sally Elliott, a character in another novel I have recently been reading.

Only a few weeks ago American novelist Robert Coover’s The Brunist Day of Wrath was published, and I immediately downloaded a Kindle copy and read it. It is the long-awaited—and very long—sequel to his first novel, The Origin of the Brunists, which first appeared long ago, way back in 1966, when it won that year’s Faulkner Prize for best first novel.

Briefly, Coover’s fictional Brunists are a typically American, whacko fundamentalist Christian extremist sect. In the first of the two novels about the Brunists, Coover traces the sect’s emergence. The Brunists then return to the scene of their cultish birth five years later, in Coover’s eventual follow-up. That story of their return culminates in a typically American, eruptive and violent bloodbath, a sort of anti-apocalyptic apocalypse that, once it has happened, ultimately just lets everything keep right on going pretty much the same as before, really.

Sally Elliott appears as one of the many characters that people both novels. She is anything but a Brunist herself, being not only atheistic but also even anti-theistic—or, more properly put, anti-religious, since she does not confine her critique to theism as such. For the most part Sally stands aside from the main action of the story of the Brunists, to serve Coover as a sensitive observer registering the events that unfold around her. Still just a child during the action in the story of the Brunists’ origin, she becomes the very anchor of moral sanity in the narrative of their eventual day of wrath.

Relatively early on in the later novel, Sally pays a spy’s visit to the Brunist camp. There she encounters some young Brunists with hopes of converting her. When Sally grows faint, they become concerned and lead her into the communal tent to rest, where they give her a cream soda to refresh herself. Coover pauses with her there to write (starting at location 3,844 of a total of 15,901 in the Kindle version I read): “Sometimes, it seems to her [despite or at least apart from all her anti-religious sanity] that she grasps or is embraced by a great cosmic mystery, and for a moment she enjoys a certain rapt serenity. But usually the mystery eludes her or it evolves into some familiar banality, like the cream soda she burped then, and it never comes close to happening when she’s bummed out with the blahs.”

The very point of what presents itself as present today is to bum us all out with the blahs, so that nothing of the future may ever come—and even if it does, will fizzle out again right away, like bubbles from some cheap carbonated soda.   What presents itself as present today lacks all presence. It cannot hold. It has no grounding.

Nor can it, accordingly, offer any ground for anything else to grow in it. Nothing can be cultivated in such soil. No culture can take root there.

*     *     *     *     *     *

Nietzsche remarks somewhere that his ambition is to say more in a single aphorism than others say in an entire book. Then he immediately corrects himself and says, no, his goal is to say more in a single aphorism than others do not say in a whole book.

Indeed, Nietzsche aims to say the whole world in a single aphorism. One aphorism in which he succeeds in doing just that comes in a passage about the very nature of “world” itself, a passage from The Twilight of the Idols entitled “How the ‘True’ World Finally Became a Fable: The History of an Error.” At the end of his telling of that history, Nietzsche asks just what’s left of the world, once the belief in some “true” world has finally shown itself up as no longer worthy of any belief. When the “true” world finally vanishes, just what world remains? The “apparent” one, perhaps? But no, Nietzsche answers. Along with the “true” world, he says, the “apparent” one also vanishes.

Half a century and two World Wars (at least by one count) later, Maurice Merleau-Ponty in his Phenomenology of Perception glosses Nietzsche’s remark by saying that, with the collapse of the very grounds for any distinction between a supposedly merely apparent or “false” world, on the one hand, and a supposedly “true” one, on the other, the world itself at last comes forth clearly for itself, as the very place where sense and non-sense, meaning and the lack of it, themselves emerge. This world itself is neither “true” nor “false.” The world is just that, the world—of which, as Merleau-Ponty nicely says, “the true and the false are but provinces.”

Unfortunately, however, there is another possibility, one which neither Nietzsche nor Merleau-Ponty would have welcomed at all, but of which both were all too much aware, as I read them. That is the possibility that, to borrow a way of putting it from Heidegger, who came between those two, the world itself might simply cease to world at all.

Framed in those terms, to continue considering whether culture has any future today confronts us with the no doubt strange-sounding question of whether, in the world of today, there is any longer any world—or, with it, any today—at all. Can anything really present itself at all in what presents itself today as what is present?

That is precisely the question with which contemporary French philosopher Alain Badiou occupies himself in yet another book I’ve been reading just recently, since my last post to this blog more than three months ago now. I will start with Badiou in my next post (which I do not think will take me another three more months to put up).

*     *     *     *     *     *

The Traffic in Trauma: Commodifying Cultural Products (3)

This is the third and final post of a series under the same title.

*     *     *     *     *     *

In the gravelled parking space before the station several cars were drawn up. Their shining bodies glittered in the hot sunlight like great beetles of machinery, and in the look of these great beetles, powerful and luxurious as most of them were, there was a stamped-out quality, a kind of metallic and inhuman repetition that filled his spirit, he could not say why, with a vague sense of weariness and desolation. The feeling returned to him—the feeling that had come to him so often in recent years with a troubling and haunting insistence—that “something” had come into life, “something new” which he could not define, but something that was disturbing and sinister, and which was somehow represented by the powerful, weary, and inhuman precision of these great, glittering, stamped-out beetles of machinery.  And consonant to this feeling was another concerning people themselves:  it seemed to him that they, too, had changed, that “something new” had come into their faces, and although he could not define it, he felt with a powerful and unmistakable intuition that it was there, that “something” had come into life that had changed the lives and faces of the people, too.  And the reason this discovery was so disturbing—almost terrifying, in fact—was first of all because it was at once evident and yet indefinable; and then because he knew it had happened all around him while he lived and breathed and worked among these very people to whom it had happened, and that he had not observed it at the “instant” when it came.  For, with an intensely literal, an almost fanatically concrete quality of imagination, it seemed to him that there must have been an “instant”—a moment of crisis, a literal fragment of recorded time in which the transition of this change came.  And it was just for this reason that he now felt a nameless and disturbing sense of desolation—almost of terror; it seemed to him that this change in people’s lives and faces had occurred right under his nose, while he was looking on, and that he had not seen it when it came, and that now it was here, the accumulation of his knowledge had burst suddenly in this moment of perception—he saw plainly that people had worn this look for several years, and that he did not know the manner of its coming.

They were, in short, the faces of people who had been hurled ten thousand times through the roaring darkness of a subway tunnel, who had breathed foul air, and been assailed by smashing roar and grinding vibrance, until their ears were deafened, their tongues rasped and their voices made metallic, their skins and nerve-ends thickened, calloused, mercifully deprived of aching life, moulded to a stunned consonance with the crashing uproar of the world in which they lived. These were the dead, the dull, lack-lustre eyes of men who had been hurled too far, too often, in the smashing projectiles of great trains, who, in their shining beetles of machinery, had hurtled down the harsh and brutal ribbons of their concrete roads at such a savage speed that now the earth was lost for ever, and they never saw the earth again:  whose weary, desperate ever-seeking eyes had sought so often, seeking man, amid the blind horror and proliferation, the everlasting shock and flock and flooding of the million-footed crowd, that all the life and luster and fire of youth had gone from them; and seeking so for ever in the man-swarm for man’s face, now saw the blind blank wall of faces, and so would never see man’s living, loving, radiant, and merciful face again.

Thomas Wolfe, Of Time and the River (1935)

 

Not long after Thomas Wolfe published the novel from which I’ve taken that lengthy citation, Walter Benjamin, in his essay on “The Work of Art in the Age of Mechanical Reproduction” (section XIV) wrote:  “One of the foremost tasks of art has always been the creation of a demand which could be fully satisfied only later.”  To that remark, Benjamin appends a note, which itself begins with a quotation from the definitive “Surrealist,” André Breton:  “The work of art is valuable only in so far as it is vibrated by the reflexes of the future.”  In turn, both Breton’s and, even more clearly, Benjamin’s remarks resonate strongly with the one from Jean Laplanche, which I already cited in my first post of this three-post series on the commodification of cultural products, his remark that “in the cultural domain” it is “a constant” that “the offer . . . creates the demand.”

What demand is the work of art today creating?  What future vibrates in it?   How and when could the demand it draws forth ever be fully satisfied?

Benjamin contrasts painting—and poetry—with film.  By his account, which is also the account of many others both before and after him, a painting evokes contemplation.  As Salvador Dali’s The Last Supper did years ago to me, as I recounted in my preceding post, the painting arrests us before itself, bringing us to a stop, interrupting our daily rush of business, calling upon us to look, behold, and ponder.  “The painting,” writes Benjamin, “invites the spectator to contemplation; before it the spectator can abandon himself to his speculations.”  Similarly, a poem makes its reader or other “recipient,” to use Laplanche’s term, pause and reflect over language itself and its power to say.  The poetic work also brings us to a stop, interrupting the flow of the daily chatter wherein we subordinate language and its saying to its mere utility as a means for conveying information.

The history of art, however, is for one thing the history of the emergence of new art forms called up the better to satisfy demands eventually created by developments in older forms.  Slightly earlier than his line about art’s tasks including the creation of new demands vibrant with the future, Benjamin writes:  “The history of every art form shows critical epochs in which a certain art form aspires to effects which could be fully obtained only with a changed technical standard, that is to say, in a new art form.”  He sees one such “critical epoch” emerging for both painting and poetry in the late nineteenth and early twentieth centuries, with the emergence of Dadaism, in which, as Benjamin puts it, “poems are [reduced to] ‘word salad’ containing obscenities and every imaginable waste product of language,” just as in their paintings the Dadaists “mounted buttons and tickets” and the like.  What was in play in such developments, by Benjamin’s analysis, was “a relentless destruction of the aura of their [own] creations”—and, indeed, of the “aura” of paintings and poems and works of art in general.

What new art form was preparing its own way in advance in Dadaism and the entire epoch of art it represents?  Benjamin’s answer is that “Dadaism attempted to create by pictorial—and literary—means the effects which today the public seeks in the film.”  By the “studied degradation of their materials,” the reduction of their works to the status of trash and waste, what they aimed to achieve was the “uselessness” of their works “for contemplative immersion.”  Dadaist works systematically eschewed the contemplation to which art before them had called its recipients, and instead sought distraction.  To attain that end, “One requirement was foremost:  to outrage the public.”  The Dadaist work thereby “became an instrument of ballistics.  It hit the spectator like a bullet, it happened to him, thus acquiring a tactile quality” whereby it “promoted a demand for the film, the distracting element of which is also primarily tactile, being based on changes of place and focus which periodically assail the spectator.”

The painting invites the spectator to contemplation; before it the spectator can abandon himself to his associations.  Before the movie frame he cannot do so.  No sooner has his eye grasped a scene than it is already changed.  It cannot be arrested.  [Georges] Duhamel, who detests the film and knows nothing of its significance, though something of its structure, notes this circumstance as follows [in Scènes de la vie future, published in Paris in 1930 after a trip to the United States, and translated one year later as America the Menace:  Scenes from the Life of the Future]:  ‘I can no longer think what I want to think.  My thoughts have been replaced by moving images.’  The spectator’s process of association in view of these images is indeed interrupted by the constant, sudden change.

It is at just this point that Benjamin comes to speak—as Heidegger had done a bit earlier and differently, as I discussed in my preceding post—of “shock” in relation to the work of art.  He writes that this catching, controlling, and manipulating of the spectator’s attention by the devices of film—cuts, camera angles, etc.—“constitutes the shock effect of the film.”  Whereas Dadaism insisted on outraging the public, and in that very insistence remained within the bounds of the moral—“outrage” as such ultimately being a matter of moral offense—“[b]y means of its technical structure, the film has taken the physical shock effect out of the wrappers in which Dadaism had, as it were, kept it inside the moral shock effect.”

Cinema’s unwrapping of shock from its moral wrapping—unmooring shock from its moral anchoring, loosening and abstracting it from its moral setting—is, in fact, more than a merely moral matter, in any ordinary understanding of morality.  It is, rather, a literal de-contextualizing of shock that sets shock altogether free of any context that might give it any “sense” or “meaning” that might enclose it, buffer it, cushion shock’s shock.  To put the same point differently, by riveting attention to itself, forcing and manipulating that attention, stripping it of all autonomy and making it conform to wants not its own, distracting it persistently and insistently from itself, the cinematic manipulation of images uproots shock from the temporality that has always heretofore defined it, the very temporality that gives shock itself time to “register.”  That is, it unhinges shock from the very “belatedness,” Freud’s “Nachträglichkeit,” that permits shock to be felt and registered in its after-shocks.  In the same way, for the repetition with which shock continues to hold on to its recipients, the techniques set to work in film substitute the incessant multiplication of shocks.  No sooner is one shock delivered than another, new shock is on the way, one shock following right upon the preceding one, coming one after another without let-up, like fists raining down upon someone undergoing a lengthy, brutal beating the end of which comes only with death or coma.  Instead of the Nachträglichkeit of traumatic time one has the endless Nacheinander of the ticks of clock-time, the “after-one-another” of the seconds as they click by without cease.  The compulsive repetition whereby shock arrests those it strikes, demanding that they finally stop and accept the invitation to contemplation—and to “abandon [themselves] to [their] associations,” as Benjamin nicely puts it, just as one might when encouraged to share one’s “free associations” during psychoanalysis—gives way to the cascade of distractions whereby modern life assaults us all.

After all, that’s where all the profit lies waiting to be made!

In La Cité perverse, his discussion (which I’ve cited before in this three-post series) of the perversity that founds and grounds the contemporary global “city”—from civitas, the Latin translation of the Greek polis:  the public place, the commons, the dis-enclosed enclosure of community we build together every day in our communications with one another—Dany-Robert Dufour makes use of the by now old idea of the “monkey trap” that uses the monkey’s own appetites to catch it fast.  The trap is very simple.  It consists of a small but solidly tethered contraption inside of which an appropriately monkey-directed enticement has been placed, so that the monkey has to reach inside the trap to retrieve the treat.  The aperture to the trap, however, is just large enough for the monkey to insert its reaching, fingers-extended paw, to grasp the monkey-goody inside, but too small to permit the monkey to withdraw the same paw once it has closed into a fist around its trophy.  All that the monkey would have to do to escape the trap would be to open its paw and retract it.  To do that, however, it would have to let go of the treat it first reached inside the trap to grasp.  The monkey’s appetite—its “greed,” if you will—just will not let it let go, that it might itself be let go from the trap.  So the monkey just stays there, trapped by its own wants, until the trapper at his leisure comes to collect his catch.

I have repeatedly cited Laplanche’s remark that in “cultural” matters—which is to say in matters of Dufour’s “city,” the place of “civilization”—it is always the offer that first creates the demand.  However, when demand gets perverted into the need for commodities, then citizens are transformed into consumers, and we all become caught in a trap from which our own efforts to extricate ourselves can only entrap us more tightly.  When the exchange of commodities replaces the exchange of cultural communications (another redundant expression, when heard as I’d like it to be heard here), we are all made into monkeys caught in a monkey-trap by our own demand.

At that point, demand has become the death of desire, in just the sense of that latter term in which Jacques Lacan, for instance, admonishes us all not to let go of our desire.  Once our desire itself, with no will or intention on our part, gets associated altogether un-freely with a manipulatively produced demand for commodities that have been expressly designed to entice us to confuse them with that desire and to grasp for them, we find ourselves caught in a self-made bondage.  It is a situation in which what is really no true choice at all is forced upon us as the only “choice” available.

On the one hand, we can “choose” to put our hands in the trap.  We can reach out to grasp the goods and goodies held out to us as the key to our happiness, only to find ourselves frustrated, depressed, and despairing when the commodities we have been made to long for finally come our way, and we find to our chagrin that they do not satisfy our desire after all.  Far from it!  “Is that all there is?” we ask—as we pick ourselves up and dust ourselves off and start all over again, reaching for the next commodity presented to us as the royal road to happiness, only to be led again to the same frustration, depression, and despair, and so on time after time after time, one time after another till all our time runs out.

On the other hand, we always have the “option” simply—contrary to Lacan’s wise injunction against doing any such thing—to give up our desire itself.  Since desire has now become inextricably confused with the market-produced demand for those very commodities the securing of which leaves us empty and looking for more each time it occurs, to let go of our grip on those commodities in order to free ourselves from the monkey-trap, opting out of such pursuit of commodities, unavoidably presents itself to us as just such a relinquishing of our definitive desires themselves.  But to let go of our very desire itself is, as Lacan saw, to consign ourselves once again to frustration, depression, and despair.

Only if something happens to bring us up short, to make us pause and reflect, inviting us, in contemplation, to abandon ourselves to our own free associations, does the opportunity present itself for the trap in which we are caught suddenly to spring open, letting us loose at last.  To repeat what I’ve said before:  that’s what art’s for.  However, how are we to find hope in art any longer, when art itself long ago now ceased to invite and invoke contemplation, and itself became a device of sheer distraction?  Diverted into distraction, art becomes subservient to commerce, and no less a caught-monkey than each of us, art’s recipients.  To that extent, at least, art no longer offers any interruption of the flow of goods around the globe, but has instead simply become part of that traffic.  Art, voiding itself of all “usefulness for contemplative immersion,” which is to say voiding itself of all of what Marx called its “use value,” retains only whatever “exchange value” the market may give.  That exchange value is often considerable, even astronomical, to be counted in the hundreds of millions of dollars for a single painting, but in the process of becoming such a valuable commodity for exchange art altogether loses its dignity, and any worth it may once have had for itself.  Nor does the art-work itself, as offer, any longer create the demand that answers to it.  Instead, it is the demand for art, the “buzz”-built clamor in the art-market for a given commodity, that produces the supply—that is, makes whatever the buzz builds the clamor for count as “art” in the first place.  “Art” thus becomes no more than that which gets taken as art, in effect, in the art market.  Art becomes whatever so “counts” as art, whether paintings by Van Gogh or literal pieces of shit—such as those produced by the machine created for that purpose in 2000 by Belgian artist Wim Delvoye as the first of eight versions of a work he entitled Cloaca, and selling for roughly $1,000 per shitty piece, to borrow an example from Dufour.

None of this even shocks us at all any longer, of course.  We have long ago grown quite numbed to it, just as nurses and doctors in the emergency room of a big-city hospital become inured to all the pain and suffering that perpetually surrounds them.  Writing of the situation in the industrialized nations of 1936, Benjamin observes in one of the notes he appends to the passage from which I have been drawing citations that “film corresponds to profound changes in the apperceptive apparatus—changes that are experienced on an individual scale by the man in the street in big-city traffic, on a historical scale by every present-day citizen.”  As he discusses both in his article on “the work of art in the age of mechanical reproduction” and elsewhere, everyday modern urban life is a life in which the individual is subjected every moment of the day to one shock after another, and made thoroughly numb in the process.  Such numbing is always the result of being made the recipient of persistent, uninterrupted pummeling, one shock after another with no time any longer even left for any after-shocks wherein the shocks might be registered by those who undergo them.  We monkeys are thereby kept always with our hands in the monkey-trap, being the good little monkeys our trappers would have us be.

A dismal picture indeed!  For one thing, it is a picture of art in its death-throes.  The commodification of cultural products which is at work in the globalization of the market economy puts out the light of the truth that used to put itself into work in art-works.

Much has happened, of course, in the arts themselves during the century and more since the Dadaism that Benjamin discusses first came along.  In painting we have traversed multiple newer developments, fads, and fashions, from Cubism to Surrealism, Abstract Expressionism, Op Art, Pop Art, Conceptual Art, Hyper-Realism, and beyond.  Poetry and literature have gone through modernism to post-modernism to post-post-modernism and whatever lies beyond that.  Then there is the proliferation of brand new art forms from the Happenings of the 1960s to Body Art to the many permutations of Performance Art today.  And all that’s not even to mention the progression in film itself, let alone the movement from mechanical to digital reproduction that Benjamin never really dreamt of, with all the possibilities for the production, reproduction, and dissemination of new works of art, and what amounts to the radical democratization of art and artistic creation that is taking place as the digital explosion continues to expand, like the universe since the Big Bang.

None of that, however, is any proof against art’s death.  Death takes time, and the greater the life that comes to its end, the longer the dying.  Concerning art, it is as Heidegger writes in his “Afterword” to “The Origin of the Work of Art”:  “The dying proceeds so slowly, that it takes a few centuries.”  And even after that, it may take far longer yet for the news of the death to get around—just as Nietzsche said it would no doubt take a couple of millennia before the news of God’s death was heard everywhere.

As for what, if anything, may be still to come, after the death of art, that is really just a form of the question of whether there is any longer any “culture” at all possible after that.  Is there any future for culture?  Or has the future itself closed down on us, consigning us all forever to an endless, trapped-monkey existence as good consumers, spending freely for the good of the economy, as President Bush urged us all to do during our wars in Afghanistan and Iraq, and especially after the first forward surge of the Great Recession of 2008 that those wars did so much to help unleash?  In our benumbed and distracted consumer-condition, can there ever again be a new demand that gets through to us, if not from art then from elsewhere?

Benjamin himself offers some hope.  So even does Heidegger.  Neither could be accused of optimism, certainly.  What is more, the hope that each offers is one that can only rise, Phoenix-like, from hopelessness.  Both suggest, nonetheless, that there may be a way of pulling out of the traffic.

Can we?  Can we somehow do that—pull out of the traffic in trauma, and the commodification of cultural products that is inseparable from it?

That is a topic I will leave for another occasion—another series of posts perhaps.

The Traffic in Trauma: Commodifying Cultural Products (2)

This is the second of a series of posts under the same title.

*     *     *     *     *     *

In 1936—only three years after the Nazis were given power, two years before Kristallnacht, and four years before he himself committed a life-affirming act of suicide to rob the Nazis of the chance to exterminate him—Walter Benjamin, of German-Jewish provenance, wrote his well-known essay on “The Work of Art in the Age of Mechanical Reproduction” (in Illuminations, translated by Harry Zohn, New York:  Schocken Books, 1968).  Only a few months earlier, in November of 1935, Martin Heidegger, another German, Catholic born and eventually Catholic buried, who joined the Nazi party in 1933 and continued to pay his party dues as long as there remained a party to pay them to, first delivered his probably even better-known lecture on “The Origin of the Work of Art.”  The comparison of those two cultural products, Heidegger’s lecture and Benjamin’s essay—both of which cultural products are themselves about those cultural products par excellence called works of art—is revealing on a number of counts, only one of which will concern me here in this post.  That is how each of the two addresses the “shock” that, according to both, pertains essentially to the work of art.

Since Heidegger’s lecture came first, I will start with that.  Heidegger addresses how the work of art as such always comes as a “shock” to those upon whom it works art’s work.  The German term Heidegger uses is Stoß, which can also be variously translated as “push, poke, punch, kick (as, say, moonshine liquor has, when one swallows it), nudge, butt (as a goat might, with its horns), stab (as with a knife), thrust, stroke, or (with less punch or kick) impact.”  The Stoß of the work of art is how it strikes a blow to those who receive it, bringing them up short, knocking the wind out of them, as the sudden revelation of beauty in the face of another can strike us so forcefully that it renders us, as we say, “breathless.”  The work of art always comes as such a shock, if it truly comes at all.  That such a thing as the work can even be, says Heidegger, that is the “shock” of the work.

An example from my own experience that I have used before (namely, in my first published book, The Stream of Thought*) happened to me when I was a teen-ager, on a foundation-sponsored trip one winter to Washington, DC, that included a visit to the National Gallery of Art, where surrealist Salvador Dali’s painting of The Last Supper was on loan for display.  When I entered the room where Dali’s painting hung, I was indeed “shocked,” in Heidegger’s sense.   All I could do was stand transfixed before the painting, gaping at it.  I remember clearly that what transfixed me were the colors on Dali’s canvas, which presented themselves to me as impossible—that being the very word that came to my adolescent mind at the time.  No doubt not altogether inappropriately, given the term and notion of “sur-realism,” there was nothing at all “real” or “natural” about those colors, as they gave themselves to my perception then.   Yet there they were, totally redefining the whole domain of “color” for me, shattering my old, familiar, taken-for-granted understanding of just what that word ‘color’ even meant.

In my very experience, those colors, precisely as “impossible” and altogether outside the domains of anything that might occur in “nature,” also riveted my awareness to the sheer createdness of the painting.  Heidegger points to this by saying that, in the work of art, the very having been created of the work is, as it were, co-created into the work itself.  In The Stream of Thought I spoke of that as the “self-presenting” character of the art-work, and contrasted it with what might be called the “self-effacing” character of, for example, a good snapshot in a family photo album.  A snapshot as such (as contrasted, say, with one of Ansel Adams’ photographs, which is itself a work of art) is just a tool, an instrument, there to be useful and used, no different in that regard than a hammer or a computer; and the utility of a tool is inversely proportional to the demands it places upon users to attend to it, rather than staying focused on what they are trying to do with it.  A tool or instrument is not supposed to call attention to itself, but instead to facilitate the accomplishment of the task for which it is employed.  In contrast, the work of art does call attention to itself, and in so doing it delivers us a blow, bowls us over—shocks us out of our complacent everyday going about our usual business.

As with any shock, the shock delivered by the work of art exceeds the capacity of those to whom it is delivered to “process” it.  That is to say it is always traumatic.  And as Freud has taught us, its impact—the very delivery of the shock with which it shocks us—is marked by a certain “belatedness,” as I prefer to translate Freud’s German term Nachträglichkeit, which in the Standard Edition of Freud’s works in English is rendered by “deferred action.”  The shock of the work of art is really felt and fully at work, as it were, only in its after-shocks, which keep on coming after the first, definitive shock has struck, allowing the shock itself to “register.”  That’s precisely the job of what Freud identifies as the “repetition compulsion,” the compulsion to repeat the original, shocking experience, until the numbness, the “going into shock” as we say, that is the other side of the two-sided effect of traumatic shock (a redundant expression:  “traumatic shock”), finally breaks down, creating the possibility that it may at last be broken through.

If such a break-through finally does occur, then what it breaks through to—the “other side” to which musical artist Jim Morrison, for example, long ago urged listeners to “break on through”—is nothing other than letting oneself at last be shocked.  It is ceasing, so far as one can (which means moment by moment), to numb oneself against the shock, and instead opening oneself to it and (again, moment by moment) holding oneself open within it.  In short, to adopt and adapt a formula I’m fond of from Heidegger, it is a break-through into maintaining oneself in the truth opened up in the shock itself, “preserving” that very truth by continuing to stand firm within it, with-standing it, as we might well say.

The origin of the work of art, says Heidegger, is truth’s setting itself into work in the work.  Truth sets itself into work in the work in an at least double sense.  First, it sets itself up there, fixes itself fast there, takes form there.  That’s what art needs artists for:  to create works of art as places where truth takes form, fixes itself fast, sets up.  Second, it goes to work there, in the work, as mechanics go to work in their garages:  Truth is at work there, in the work, “doing” its work there.  That brings us to what the rest of us are for, that “rest” of us who are not ourselves artists—or insofar as we are not the artists who created the given works of art at issue—but to whom those works are “addressed,” their “recipients” (to use Jean Laplanche’s way of speaking).  If what “artists” are for is creating works of art, then what we “recipients” of those works are for is (to go back to a Heideggerian locution) “preserving” those works.

Such “preservation” of works of art has nothing to do with keeping them locked safely away in closets, attics, or vaults—or even in art museums.  Or, rather, it does have something to do with that, since locking the works away somewhere, even if that place is a museum, is only possible if those works are no longer “preserved,” but are instead taken out of their original circulation, the circulation of truth itself around the circuit of artists, art-works, and recipients, and forced into a very different sort of circulation (today, ever more around the circuit of the provision and consumption of pleasures, ultimately to somebody’s profit).  It’s only the remains of dinosaurs that one will find in museums of natural history, not the real, living thunder-lizards themselves.  Likewise, it’s only the remains of dead works of art that can be visited in art-museums.  Insofar as the very works whose carcasses we can see put on display in museums are still somehow at work in our world, it is not in museums that we will find those works at their work, but in our daily lives together.  They will be at work there only if and insofar as we continue to hold ourselves open to and within the blows that they deliver to us, letting them shock us out of the usual rush of busy-ness with which we strive to avoid all such blows.

What Heidegger calls “preserving” the work of art is a matter of persevering in exposure to the shock it delivers.  Only in such perseverance does the truth that has set itself into work in the work still keep on working.

So much for Heidegger!  Now on to Benjamin!

*     *     *     *     *     *

When Walter Benjamin talks about the “shock” delivered by the work of art, in his own way he says the same thing as Heidegger, but then he also adds something of major significance.  That important addition derives from Benjamin’s concentration on the fate of the work of art “in the age of mechanical reproduction,” as he puts it in the well-chosen title to his piece.

In the process of articulating his thoughts on art today, Benjamin develops a vocabulary of his own that diverges from the one Heidegger is simultaneously developing to articulate his own thoughts on the same topic.  Both vocabularies, however, have a common provenance, as readers should be able to see for themselves in what follows.

In the second of the fifteen sections (plus a brief introduction and epilogue) of his essay, “The Work of Art in the Age of Mechanical Reproduction,” Benjamin writes:  “The situations into which the product of mechanical reproduction can be brought may not touch the actual work of art, yet the quality of its presence is always depreciated.”  Once again, I can use my own teen-aged experience with Dali’s The Last Supper to exemplify the point I take him to be making.**  Before taking the student trip that brought me before the actual, original painting itself, I had often seen reproductions of Dali’s paintings, including that one, The Last Supper.  In fact, of all the reproductions of his work that I had seen by that time, his painting of Jesus’ last meal with his disciples had always interested me the very least.  Looking back now, I would say that it was precisely my dis-interest in that particular painting, as it was delivered to me in all the reproductions I had seen of it, that set me up—like a bowling pin, as it were—to be knocked flat when I suddenly found myself in the actual presence of the painting itself.  It is precisely that “quality” of the “presence” of the work that, as Benjamin writes, is “depreciated” in any “mechanical reproduction” of it.

My own experience, not only of Dali’s painting but also of other cases, tells me that Benjamin is speaking very cautiously when he uses the word ‘depreciated.’  I would say ‘lost’ or ‘buried’ is better.  In all my experience, the “presence” of the art-work as such is just what, in and of the work, simply cannot be reproduced, at least in any “mechanical” reproduction:  any striking of copies off of some original—or off some “first” copy of the original, as in an initial photograph of a painting—used as a template.***  Benjamin himself a few lines later refers to this “quality” of the work’s “presence” as “the eliminated element” in the work, and proposes calling it the work’s “aura.”  At any rate, whether it is only depreciated or totally eliminated, it is this “aura” of the work, Benjamin says, that “withers in the age of mechanical reproduction.”

Significantly, Benjamin does not confine the notion of “aura” solely to works of art, or even to what he calls “historical objects”—what I’m following Jean Laplanche in calling “cultural products”—in general.  Rather, he extends it to cover “natural objects” as well.  “If,” he writes in section III of his essay, “while resting on a summer afternoon, you follow with your eyes a mountain range on the horizon or a branch which casts its shadow over you, you experience the aura of those mountains, of that branch.”  He defines aura, in effect, as that “quality” of the very “presence” of each and every thing in its uniqueness, its very irreproducibility.

At this point, we can combine Heidegger with Benjamin to observe that it is the very way the work of art has of bringing us up short, literally arresting us before its presence, that also—through and in the work, that “historical” or “cultural” product—breaks through our ordinary numbness in the face of the presence, the aura, of what is “natural” as well.  So, to stay with the same example from my own younger life, when my attention was first riveted by the “impossible” colors of Dali’s painting of The Last Supper during my adolescence, what also riveted my attention was what might well be called the aura of color itself.  Even at the time, as I’ve already noted, the thought came to me that until that moment I had never really seen color at all.  I never saw color in its full presence or aura until then.

To “preserve” the work of art, to revert for a moment to Heidegger’s way of speaking, is to keep oneself open to aura as such—to the presence of what is present.  That is what it means to stand within the truth of the work, to hold open the truth, namely, that very truth first opened up in and as the work itself.  It means to persist, to persevere, in holding oneself open to and in the aura of things, the aura itself first opened up to one in the work.  It is to bring all one’s saying and doing, thinking and speaking, into that opening of the aura of things, and to maintain it there.

That, in turn, is what’s called “living.”

To lapse back into what today has become an ordinary yet—as befits the day—distorting way of speaking, the “job” of art, what art’s “for,” by both Heidegger’s and Benjamin’s accounts, is to open the way to living, which, like all things human, always comes belatedly, as a sort of after-birth to birth itself.  In that sense, we are all still-born, all born dead, and only subsequently shocked into life.  If we are lucky!

Art brings us luck.  That’s what art’s for.

*     *     *     *     *     *

In this post, the second in my series under the title “The Traffic in Trauma:  Commodifying Cultural Products,” I have focused on the nature of cultural products, as paradigmatically exemplified in works of art.  In my next post, the final one of the series, I will focus on what happens to art, and to cultural production as such, when it gets shanghaied by the market—which is to say commodified. 


* A couple of copies of which I still have available.  So let me hasten to commodify my own cultural product by repeating an offer I made already in my second-before-last post:  you may purchase an author-autographed copy of The Stream of Thought from me in person for the bargain-basement price of $14.95 (for a book that originally cost a whopping $27.50!), plus shipping and handling expenses of $5.17, for a total of $20.12.  To make purchasing arrangements, contact me via email right away at frank.seeburger@me.com.

** It wasn’t until a few years after my experience with Dali’s painting in the National Gallery of Art in Washington, D.C., that I read Heidegger’s essay on the origin of the art-work, and a number of years passed after that before I read Benjamin’s on the art-work in our age of mass reproduction, but both readings brought my experience with Dali’s painting back to my mind.  My experience helped me to understand the two essays, and they both in turn cast light back upon that experience for me.

*** Exploring the difference between the multiplication of mechanical re-productions of such works of art as paintings, on the one hand, and multiple productions of such works of art as plays, symphonies, or comedy sketches, on the other, would certainly be well worthwhile.  Even more worthwhile, perhaps, would be to go on from there to an exploration of what further shift occurs with the move from mechanical reproduction to digital proliferation, where once again, as with multiple performances of the same work of music, there is, taken strictly, no “copying” of any “original,” but rather multiple iterations of one and the same work.  I will, perhaps, take up such matters in eventual later posts.
