Faith in Trauma: Breaking the Spell

Trauma-Faith: Breaking the Spell (continued and concluded)

The decision whereby one comes truly alive is itself never without risk. If it were, there would be nothing decisive about it. To take that risk is to risk oneself, not just such stuff as one’s money, one’s comfort, or one’s security; and to run such a risk—where the stakes are one’s very being as a “self” in the first place—requires real faith, not just comforting self-bewitchment.

Yet, as Kathleen Norris notes, that faith is nothing out of the ordinary, reserved for only the few. That is the “fascinating trait” of every real choice for life over death—every choice, as Alain Badiou puts it at one point in his recent book on the “metaphysics” of happiness (p. 37), to surmount “the tissue of mediocre satisfactions” held out to us all by our rampantly consumerist society as its vision of what constitutes a happy life. It is a choice to risk real life, and the real happiness that goes with such life, and only with it.

Norris and Badiou are at one in insisting that the opportunity, the opening, to make such a choice is nothing that comes only in rare or unusual moments, and only to a select few. It is, rather, an opportunity, an opening, that can suddenly present itself, as Badiou writes, “in every episode of life, no matter how trivial or minor it may be.” Even the most everyday of occurrences can suddenly break the spell that binds us, calling upon us to display real faith by choosing to begin really living our lives, rather than just passively undergoing them, just going on outliving ourselves day after day to the grave.

Once we are truly given a real choice, everything depends on us, and whether we have the faith to go ahead and choose.

What is more, such simple faith, the faith that permits choosing actually to live one’s own life rather than just trying to survive it, can never be claimed as some sort of permanent acquisition. It is not some piece of privately owned property that, once acquired, can be disposed of as one sees fit. The decision to live, however everyday it may be, is a decision whereby one accepts martyrdom for one’s faith—from the Greek martys, “witness”—which need have nothing flashy or Hollywood-heroic about it. As Norris helps us see, such genuine martyrdom can be as quiet and unpretentious as the small daily sacrifices, fully embraced, that parents continually make for their children.

Nor, short of death itself, is such witnessing ever over and done with. It is always there in front of us, needing to be demonstrated ever again anew. It demands constant, ongoing reaffirmation—exactly what Kierkegaard called “repetition.” Exchanging truly understood and meant wedding vows in some formal setting, to use one of Kierkegaard’s own best examples, does not let spouses off the hook of then having to honor those vows, to keep them and the love they sacramentally express alive in their daily life together—forever repeating their vows and the love the bestowing of those vows effectively signifies, “till death do us part.”

Nor is that anything peculiar to getting married. It is the same with every decision, once really taken.

The faith witnessed by any real decision to run the risk of coming truly alive is just such a faith that must be kept. The specific “content,” as it were, of the decision and faith at issue may vary greatly, of course, from person to person and even from one day to the next.

In the same way, each day for each person, temptation to “break faith” (a tellingly accurate expression) with one’s own decision can take a new form. Whatever form the temptation to break the faith with one’s own life may take, however, each and every day one is faced again with the decision either to keep on truly living, or just to fall back into letting one’s days dribble on endlessly, one after another, till one can finally check out of the whole game altogether and just expire—like Nietzsche’s ever-contented “last man.”

Only a faith that accepts the risk of living is one that finally turns and faces trauma, rather than running from it, and then tests and proves itself by faithfully facing trauma again anew, each and every day, day after day thereafter.

That is true faith in trauma, a faith that always keeps the wound open.

Faith in Trauma: Breaking the Spell

Trauma-Faith: Breaking the Spell

To enchant is to cast a spell. In turn, to disenchant is to break the spell of an earlier enchantment. In the first decades of the 20th century, Max Weber made popular the idea that modernization—with its ever more exclusive prioritization of science, technology, and instrumental rationality over faith, tradition, and belief—centrally involved a process of the “disenchantment” (Entzauberung) of nature. Ever since Weber, however, it can be debated, and has been, whether modernization really broke a spell, or whether it cast one.

So, for example, in one of his writings on the rise of modern technology in volume 76 of the Gesamtausgabe (“Complete Edition”) of his works, Martin Heidegger makes explicit reference to the Weberian idea of disenchantment, only to argue against that thesis. Rather than a dis-enchantment (Entzauberung), says Heidegger (pages 296-297), what is truly involved in the rise of modern technology itself is instead an en-chantment (Verzauberung), a bewitching, hexing, or casting of a spell. That enchantment, according to him, is one whereby the very power at play in modern technology can make good on its own exclusive claim to power, as it were—just as, in the fairy story, the wicked witch, to secure her own claim to the power of beauty, casts a spell over Sleeping Beauty, the legitimate claimant.

According to Heidegger, that enchantment—the casting of the spell whereby what is at work in modern technology (as well as at work in all of the modern science and instrumental rationality that goes with that technology) seizes and secures its own power—goes hand in hand with the de-worlding (Entweltung) of the world, the de-earthing (Enterdung) of the earth, the de-humanizing (Entmenschung) of humanity, and the de-divinizing (Entgötterung) of divinity. “Technology,” writes Heidegger, “as the unleashing and empowering of energies [. . .] first creates ‘new needs’,” and then produces the resources to satisfy them: technology “first discloses the world to which it then fits its products.”

Badiou said essentially the same thing just last year in À la recherche du réel perdu (“In Search of the Lost Real”), his critique of our contemporary “entertainment world,” as he calls it at one point, using the English expression—a world-less pseudo-world actually, one ever more frenziedly devoted to the pursuit of Pascalian diversion from reality. In such a desolate pseudo-world, what falsely but inescapably presents itself as “reality” is in truth so utterly crushing that it permits no genuine, full living at all any longer, but only survival. Nor does such a divertingly fake world any longer have any room for any true faith. It only makes room for superstitions—precisely the sort of dangerously superstitious nonsense, for example, that United States Supreme Court Justice Antonin Scalia spouted in a high school commencement speech shortly before his recent demise, when he attributed the global success of the United States to the frequent invocation of God’s name by our Presidents and other public officials (see my citation of his remarks to that effect at the beginning of my earlier post, “An Anxious Peace: ‘God’ After Auschwitz”).

In a world already deeply asleep, under the bewitching spell cast by what Badiou lucidly calls “triumphant capitalism,” what we need is precisely dis-enchantment, the breaking of the spell. The spell that holds the world in thrall today is broken whenever, anywhere in the world, reality suddenly and unexpectedly breaks through to dispel (good word for it: “de-spell”) any illusion that happiness consists of endlessly buying what the global market endlessly offers for sale.

In Métaphysique du bonheur réel (“Metaphysics of real happiness”)—a short book he also published earlier last year and in which he was already “in search of the lost real”—Badiou describes the illusion that the shock of reality shatters. It is the illusion wherein one takes the height of happiness to consist of the conjunction of the following factors, as he puts it in his introduction (p. 6): “a tranquil life, abundance of everyday satisfactions, an interesting job, a good salary, sound health, a flourishing relationship, vacations one doesn’t soon forget, a bunch of sympathetic friends, a well-equipped home, a roomy car, a loyal and cuddly domestic pet, [and] charming children with no problems who succeed in school.” In short, it is the illusion that one could be happy while living a life of crushing consumerist boredom, where nothing disruptive ever happens—life as no more than survival: outliving oneself from birth, in effect.

As opposed to any such pseudo-happiness of mere security and consumerist comfort in sheer survival, real happiness comes only as a by-product of true living. In turn, real life in itself begins only in the deliberate choice, the decision, to engage fully with reality, whenever it does break through our numbing consumerist spell to strike us. When it does, it reawakens us to the realization that, as Badiou puts it later (p. 38), “existence is capable of more than self-perpetuation.” When the consumerist spell that has held us in thrall is finally broken, we reawaken to the awareness that real happiness is nothing but the feeling that accompanies true life—a life fully lived “even unto death,” as the Christian biblical formula has it, rather than just survived.

The Traumatic Word (1)

In the strict sense, the word is not a sign at all. For to say it is a sign is to liken it to something in the field of vision. Signum was used for the standard which Roman soldiers carried to identify their military units. It means primarily something seen. The word is not visible. The word is not in the strict sense even a symbol either, for symbolon was a visible sign, a ticket, sometimes a broken coin or other object the matching parts of which were held separately by each of two contracting parties. The word cannot be seen, cannot be handed about, cannot be “broken” and reassembled.

Neither can it be completely defined.

— Walter J. Ong, S. J.

We would like language to be no more than a system of signs, a means for conveying information. At least since Aristotle, and down past C. S. Peirce to the present day, that view of language has been all but universally taken for granted, just assumed as true. It isn’t, as Walter J. Ong realized.

Ong was a United States professor of English who focused upon linguistic and cultural history—especially the cleft between oral and literary cultures, which was the topic of his most influential work, Orality and Literacy: The Technologizing of the Word, originally published in 1982. The lines above are taken from an earlier work, however. They are from the next-to-last page of The Presence of the Word: Some Prolegomena for Cultural and Religious History, first published in 1967 but consisting of lectures Ong gave by invitation at Yale in 1964, as the Dwight Harrington Terry Foundation Lectures On Religion in the Light of Science and Philosophy for that year.

Besides being a professor of English, with a Ph.D. in that field from Harvard, Ong had done graduate work in both philosophy and theology, and was also a priest of the Society of Jesus, that is, the Jesuit order, as the “S. J.” after his name indicates. That religious provenance is manifest in his work. In The Presence of the Word, it is especially evident in Ong’s focus not just on any old word, so to speak, but on “the” word in a particular sense. His concern in his Terry Lectures is not just with “words in general,” as the ordinary way of taking his title would suggest. So understood, “the word” in Ong’s title would function the same way “the whale” functions in the sentence, “The whale is a mammal,” which is equivalent to “All whales are mammals,” thus picking out a feature that is common to whales in general, applying indifferently to each and every whale whatever. Ong’s underlying focus in his Terry Lectures, however, is not upon words in general but rather upon the word in the distinctive sense in which one might say, for example, that Mount Everest is not just a mountain but rather the mountain, the very embodiment of mountain as such.

Befitting the intent of the grant establishing the Terry Lectures, Ong’s underlying focus in The Presence of the Word, furthermore, is not upon some word that might come out of just anyone’s mouth. It is, rather, upon one uniquely singular word that comes out of one uniquely singular mouth—namely, “the Word of God.” At issue is the Word of which John says in the very opening verse of his version of the Christian Gospel (John 1:1): “In the beginning was the Word, and the Word was with God, and the Word was God.”

Thus, to put it in terms that became traditional within Christianity only long after John but based upon his Gospel, Ong’s underlying focus in The Presence of the Word is on Christ, the Second Person of the Trinity.

*     *     *     *     *     *

Alain Badiou’s seven-session seminar in 1986 was devoted to Malebranche (published in French by Fayard in 2013 as Malebranche: L’être 2—Figure théologique). In his session of April 29, 1986, Badiou argued that Malebranche, being the committed Christian thinker that he was, found it necessary to think of God’s being (être) in terms of the cleavage (clivage) of God into Father and Son—which, we should note, though Badiou himself calls no special attention to it at this point, is a self-cleavage definitive of the Christian God in God’s very being, such that God is God only in so self-cleaving.

However, to think of God’s being by thinking it back into his self-cleavage into Father and Son is to empty the thought of God of any substantial content beyond self-cleaving action itself: “In the retroaction of his cleavage,” as Badiou puts it (page 149), “God is empty: he is nothing but his process, his action.” God, so thought, is nothing but the very action set in action by the act of God’s self-cleaving. God voids God-self of any substantively separate self in such self-cleavage, and is only in such vanishing.

*     *     *     *     *     *

It is no accident—and it is deeply resonant with the opening of John’s Gospel, it bears noting—that Walter Ong, long after Malebranche but more than twenty years before Badiou’s seminar on the latter, says the very same thing of the word. According to Ong (page 9 of The Presence of the Word), the emergence of electronic media in the 20th century “gives us a unique opportunity to become aware at a new depth of the significance of the word.” Not many pages later (on page 18) he expands on that point, writing: “Our new sensitivity to the media has brought with it a growing sense of the word as word, which is to say of the word as sound.” That growing sense of the word as word calls upon us to pay “particular attention to the fact that the word is originally, and in the last analysis irretrievably, a sound phenomenon,” that is, the fact that originally and always the word sounds. The word as word—which is to say the word as saying something—is the word as sound. The word only speaks by sounding.

Not every sound is a word, of course. However, every word is a sound. Or, to put that more resoundingly—that is, to make the sound louder (using the re- of resound not in its sense of “again,” but rather in its intensifying sense, as when we speak of a “resounding success”)—the word as word is nothing but sound, or rather sound-ing. As Malebranche’s God is nothing but his own process or action, so is the word nothing but “how it sounds,” if you will.

The word as sound, Ong insists repeatedly, is pure event. “A word [as spoken sound] is a real happening, indeed a happening par excellence” (page 111). In that sense, we might say that the word never is, but rather forever vanishes. The word as word is a “vocalization, a happening,” as Ong puts it at one point (page 33), adding a bit later (on pages 41-42):

Speech itself as sound is irrevocably committed to time. It leaves no discernable direct effect in space[. . .]. Words come into being through time and exist only so long as they are going out of existence. It is impossible [. . .] to have all of an utterance present to us at once, or even all of a word. When I pronounce “reflect,” by the time I get to the “-flect” the “re-” is gone.* A moving object in a visual field can be arrested. It is, however, impossible to arrest sound and have it still present. If I halt a sound it no longer makes any noise [that is, no longer “sounds” at all].

The word’s sounding is its event-ing, its coming forth in its very vanishing: as sounding, it “does not result in any fixity, in a ‘product,’” but instead “vanishes immediately” (page 95). The word as such is a vanishing that, in so vanishing, speaks, or says something. It speaks or says, as Ong observes (page 73), in the sense “caught in one of the accounts of creation in Genesis (1:3): ‘God said, Let there be light. And there was light.’ ” Such saying is creation itself, as the very letting be of what is bespoken.

In thus vanishing before what it calls forth, just what does the word—not just any old word, but the word as word—say?

It says the world.

*     *     *     *     *     *

More than once in his lecturing and writing, Heidegger addressed a poem by Stefan George entitled “Das Wort” (“The Word”), the closing line of which is: “Kein ding sei wo das wort gebricht.” In German, gebrechen means “to be missing or lacking”; and sei is the subjunctive form of the verb sein, “to be”—as, for example, in the line “If this be love, then . . .”   If we take sei that way in George’s poem, then his closing line says something such as: “no thing may be, where the word is lacking.” It would then express the relatively commonplace idea that, if we don’t have a name for something, as a sort of label to attach to it, then that thing doesn’t really take on full, separate status for us, such that we can retain it clearly in our thought, memory, and discourse with one another. That’s the idea that a thing really and fully “is” for us, separate and distinct from other things, only when we come up with such a name by which to label it—as, for example, an old bit of what passes for popular wisdom has it that we, who do not have a whole bunch of different names for different qualities of snow, such as the Eskimos are said to have, are not really able to see those differences, at least not with the clarity and ease with which the Eskimos are purported to be able to see them.

At the same time, however, sei is also the imperative form of the same verb, sein, “to be”—the form, for instance, a teacher might use to admonish a classroom full of unruly children, “Sei ruhig!” (“Be still!”). Taken that way, George’s closing line would have to be rendered as the imperative, “Let no thing be, where the word is lacking.”

What’s more, gebrechen, “to be missing or lacking,” derives from brechen, “to break,” which is not heard any longer at all in “missing” or “lacking.” At the same time, used as a noun, ein Gebrechen means a more or less lasting debilitation of some sort, such as a chronic limp from an old broken leg, or a mangled hand from an industrial accident (and it is interesting, as a side-note, that “to lack” in German is mangeln). If we were to try to carry over some part of what thus sounds in the German gebrechen, then we might translate the word no longer as “to be missing or lacking,” but instead by something such as “to break” (as the waves break against the shore), or “to break off” (as a softly sounded tone might suddenly be broken off in a piece of music, perhaps to be suddenly replaced or overridden by another, more loudly sounded one—or by a demanding call coming in on a cell-phone with a ringer set on high volume), or “to break up” (as the voices of those stricken by grief might break up when speaking of their losses).

Hearing gebricht along such lines, the closing verse of George’s poem “The Word” would say something to the effect that where the word breaks, or breaks off, or breaks up, there is no thing.

The way I just worded the end of the preceding sentence—“there is no thing”—is intentionally ambiguous, designed to retain some of the rich ambiguity of George’s own line, most especially a part of its ambiguity which is important to what Heidegger would have us hear in that line. To say that where the word breaks, or breaks off, or breaks up, “there is no thing” can be taken two different ways. First, it can be taken to say that no thing “exists.” That way of taking it would fit with the presumably common way of taking George’s line articulated above, whereby that line says that things fully “are” or “exist” for us as distinct and separate things only when we have names for them in their distinctness. However, the same phrase, “there is no thing,” can also be taken in a second way, one in which the first word is emphasized: “there”—that is, at such and such a specific place. At what place, exactly, would no thing be? By George’s line, no thing would be exactly there, where the word breaks up, breaks off, just breaks: There, where the word breaks, don’t look for any thing. There, where the word breaks, you will have to look for something else altogether, something that really is no “thing” at all.

Yet if we are not to look for any thing there, where the word breaks, just what are we to look for? What are we to expect to take place there, where the word breaks? Heidegger’s response to that question is that there, where the word breaks, no thing, but rather the “is” itself takes place—the very letting be of whatever may be, as it were, takes place there.

“Thar she blows!” old whalers would call, at least by our stereotypes of them, when a whale broke the water’s surface again after diving when harpooned. “There she be!” they could as well have said, though less colorfully. Well, where the word breaks, there be world.

Just how would the word break—in the sense that the waves break against the beach or Moby Dick breaks the ocean’s surface—if it were not as sound, breaking against silence? Sounding in the silence, the very silence that it breaks, the word is word: It speaks.

As I said before, what the word says—what it says there, where it breaks out, and up, and off as sound—is world.

*     *     *     *     *     *

At this point, I will break off my reflections on “The Traumatic Word,” to resume them, given the breaks to do so, in my next post.

* That is worth repeating. So Ong repeats it almost twenty years later, in Orality and Literacy, just varying his example: instead of using “reflect,” he uses “existence,” and says that by the time I get to the “-tence,” the “exist-” no longer exists. That example especially well suits the word itself, which as word—that is to say, as sound sounding—“exists only at the instant when it is going out of existence,” to use Ong’s way of putting it at one point in The Presence of the Word (page 101).

Pulling Out of the Traffic: The Future of Culture (4)

This is the fourth in a series of posts under the same general title.

*     *     *     *     *     *

All sorts of things transpire—but nothing any longer happens—that is, no more decisions fall . . .

— Martin Heidegger, Überlegungen IV (in GA 94), ¶219


. . . it’s neither here, nor elsewhere . . .

— Alain Badiou, Images du temps présent (January 14, 2014)


I had one opportunity. I had to cut out all ties with the flattening, thoroughly corrupt world of culture where everyone, every single little upstart, was for sale, cut all my ties with the vacuous TV and newspaper world, sit down in a room and read in earnest, not contemporary literature but literature of the highest quality, and then write as if my life depended on it. For twenty years if need be.

But I couldn’t grasp the opportunity. I had a family . . . And I had a weakness in my character . . . that was so afraid of hurting others, which was so afraid of conflict and which was so afraid of not being liked that it could forgo all principles, all dreams, all opportunities, everything that smacked of truth, to prevent this happening.

I was a whore. This was the only suitable term.

— Karl Ove Knausgaard, My Struggle. Book Two: A Man in Love


Points of decision are crisis points. “Critical condition” in the medical sense is the condition of a patient who is at the decision point between survival and demise, where the body—with, it is to be hoped, the assistance of the medical staff—must marshal all its resources to sustain life, in the minimal, zoological sense. In the passage cited above, Knausgaard describes how he came to stand at a critical point of decision for or against life in the full, no longer merely biological sense of the term—the truly live-ly sense, we might say, in contrast to the rather deadening sense of bare survival.

Actually, that way of putting it, “a critical point of decision for or against life,” won’t quite work. Rather, Knausgaard describes coming to a point where he was faced with the need and opportunity at last actually and fully to make a decision in the first place and, by and in making it, to become truly alive at last. At that point he was faced with either “choosing to choose,” as Heidegger puts it in Being and Time, or else just going on going on, literally just surviving (“living-through” or “-over”) his own life, having already outlived himself, as it were, by letting his moment of opportunity slip by, in failing or refusing to decide at all.

The way that Alain Badiou puts it in his seminar on “images of the present times” (in the session of November 27, 2003) is that what he calls simply a “point” is “the moment where you make the world [as such and as a whole] manifest in the yes or the no of a decision. . . . It is the manifestation of the world in the figure of the decision.” He adds right away that “[o]ne is not always in the process of dealing with points, thank God!” Badiou, a self-proclaimed atheist proud of his atheistic family heritage, adds that ejaculation of thanks because, as he goes on to say: “It is terribly astringent, this imperative necessity that suddenly the totality of your life, your world, comes to be the eye of a needle of yes or no. Do I accept or do I refuse? That is a point.”

*    *     *     *     *     *

Early in the second of the six volumes of the long story of his “struggle”—Kampf in German, it is worth remembering, as in Hitler’s Mein Kampf—Knausgaard himself has already noted how challenging it is actually to have to decide to live one’s life, rather than just to keep on living through it. Toward the very beginning of that second volume—toward the very end of which comes the passage already cited—he writes: “Everyday life, with its duties and routines, was something I endured, not a thing I enjoyed, nor something that was meaningful or that made me happy.” The everyday life at issue for him during the time he is addressing was one of an at-home husband of an employed wife, and a father taking care of his young children while his wife was at work. Thus, it was a life filled with such things as washing floors and changing diapers. However, Knausgaard immediately tells us that his mere endurance rather than enjoyment of such a life “had nothing to do with a lack of desire to wash floors or change diapers.” It was not that he disdained such activities, or regarded them as beneath him, or anything else along such lines. It had nothing to do with all that, “but rather,” he continues, “with something more fundamental: the life around me was not meaningful. I always longed to be away from it. So the life I led was not my own.”

Knausgaard immediately goes on to tell us that his failure to make his everyday life his own was not for lack of effort on his part to do just that. In the process of telling us of his efforts, he also offers at least one good explanation for giving his massive, six-volume, autobiographical novel the title it bears. “I tried to make it mine,” he writes, “this was my struggle, because of course I wanted it . . .”

He loved his wife and his children, and he wanted to share his life with them all—a sharing, it is to be noted, that requires that one first have one’s life as one’s own to share. Thus, “I tried to make it mine,” he writes, “ . . . but I failed.” That failure was not for lack of effort but because: “The longing for something else undermined all my efforts.”

Conjoining the two passages, one from near the start of the book and one from near its very end, suggests that Knausgaard’s long struggle has been of the same sort as that of St. Augustine, as the latter depicted it in his Confessions. That is, the “struggle” at issue derives from the ongoing condition of not yet having made a real decision, one way or another. In such struggles, the struggle itself comes to an end only in and with one’s finally making up one’s mind, finally coming to a resolution, finally deciding oneself.

In the passage at the start of today’s post, coming more than 400 pages of “struggle” after the one just cited, Knausgaard gives the fact that he “had a family” as the first reason he “couldn’t grasp” the “one opportunity” that he says he had. Nevertheless, what is really at issue cannot be grasped in terms of choosing between two equally possible but conflicting options, either living the life of a family man or living the life of an artist. Rather, what is at issue is something only Knausgaard’s subsequent remarks really bring to focus: what kept him from seizing his sole opportunity was nothing but himself. It was not the love of his family that hindered him. It was the love of his own comfort—or at least the desire not to disturb his own comfort by disturbing the comfort of others nearby.

I can identify! It was really not my love of my daughter that tripped me up when her childhood pet, Fluffy the guinea pig, died one day, causing me to tempt my own daughter to betray her love for her pet by rushing out to buy a replacement, as I recounted in my preceding post. I did love my daughter, to be sure, as I still do. But, as I already revealed when first discussing the episode, what tripped me up was really not my love for her. Rather, it was my discomfort with my own discomfort over her discomfort over Fluffy’s death. I betrayed myself out of love of my own comfort, not out of love for her. So my betrayal as such was not done out of any genuine love at all; it was done just out of fear—the fear of dis-comfort. That is how clinging to one’s precious comfort always manifests itself, in fact: in Knausgaard’s case no less than my own.

Now, there may truly be cases in which points of decision manifest as what we might call “Gauguin moments.” That is, there may really be cases in which, in order to make one’s life one’s own, one must indeed leave behind one’s family and one’s home and go off into some other, far country, as Gauguin did in the 19th century for the sake of his art (or as Abraham does in the Bible, though not, of course, for the sake of art).

What truly marks points as points of decision, however, is not a matter of the difference in content between two equally possible life-options (let alone the romantic grandiosity of the choices suggested by Gauguin’s, or Abraham’s, model). What defines them (including in such dramatic examples) is just that they are points at which one is confronted with the necessity at last truly to decide, that is, to resolve oneself—to say yes or no to one’s world, and one’s life in it, as a whole, as Badiou puts it.

*     *     *     *     *     *

The German for “moment” is Augenblick—literally, “the blink of an eye.” Heidegger likes to note that etymologically Blick, an ordinary German word for look, glance, view, or sight, is the same as Blitz, the German for lightning-flash, lightning-bolt. Points of decision, in the sense that I am using that expression, are moments that proffer what Heidegger calls an “Einblick in das, was ist,” an in-sight or illuminating in-flash into that which is. Points of decision are moments of illumination of what is there and has been there all along, though we are only now, in a flash, given the opportunity to see it. They are those points in our lives that offer us the chance to make our lives our own: to come fully alive ourselves—at last and for the first time.

In common with Blitzen in the everyday sense of lightning-bolts, moments or points of decisive in-sight/in-flash sometimes come accompanied by loud thunderclaps, or the equivalent. God may come down and talk to us as God did to Moses from the burning bush, or come in a whirlwind, or with bells and whistles. At least as often, however, moments or points of decision come whispering to us in a still, small voice, one easily and almost always drowned out by all the noise of the everyday traffic with which we everywhere surround ourselves (even if only in the space between our ears), for very fear of hearing that voice . . . and being discomfited by it.

Points of decision may break the surface of our everyday lives—those lives that, like Knausgaard, we endure without enjoying—as suddenly and dramatically as the white whale breaks the surface at the end of Melville’s Moby Dick. Or they may come upon us slowly, and catch us all unawares, such that we waken one morning and realize that for a long while now, we have not been in, say, Kansas any longer, but have no idea of just where and when we might have crossed the border into whatever very different place we are now.

All such differences make no difference, however. What counts is only that we come to a moment, a point of clarity, where we are struck, as though by a bolt of lightning, with the realization that we do indeed have a choice, but only one choice. We have a choice, not in the sense that we can pick between two different options, as we might pick between brands of cereal to buy for our breakfast. Rather, we have a choice in the sense that, like Knausgaard, we realize that we do indeed have one and only one opportunity, which we can either take, or fail to take. We are faced with the choice, as the Heidegger of Being and Time put it, of choosing to choose, choosing to have a choice to exercise, rather than continuing just to let ourselves live through our own lives, without ever having to live them. The choice is either to live, or just to go on living.

An acquaintance of mine once came to such a point of decision in his own life, and he did indeed decide to make his life his own at that point. When asked about it, he says that up until that point it had always been as though his life was running on alongside him, while he was just sort of standing there observing it. What his moment of decision offered him, he says, was precisely the opportunity to “take part in” his own life, rather than just continue to let it run itself next to him. In a certain sense, he may have “had” a life up to that point, but only at that point did he come to live it himself.

*     *     *     *     *     *

In The Politics of Things (La politique des choses, first published in France in 2005 by Navarin, then in a slightly revised, updated edition in 2011 by Verdier) contemporary French philosopher Jean-Claude Milner traces the global processes driving inexorably, in what passes for a world in what passes for today, toward eliminating the very possibility of there being any genuine politics at all. That goal is being achieved above all through the development of ever more new techniques of “evaluation,” and the ubiquitous spread of processes of such evaluation into ever more new dimensions of individual and collective life. (In the United States, we might add, the deafening demand for incessant development and promulgation of ever more new ways and means of evaluating everything and everyone is typically coupled with equally incessant palaver about the supposed need for “accountability.”)

What Milner calls “the politics of things” aims at what he calls “government by things.” At issue is the longstanding global drive to substitute what is presented as the very voice of “things” themselves—that is, what is passed off for “reality,” and its supposed demands—for any such messy, uncertain politics or government as that which requires actual decisions by human beings.

Thus, for example, “market mechanisms” are supposed to dictate austerity according to one set of “experts,” or deficit spending according to another set. Whichever set of experts and whichever direction their winds may blow doesn’t really make any difference, however. What counts, as Milner says, is just that it be one set or another, and one direction or another.

That’s because, he observes in his fourth and final chapter, “Obedience or Liberties” (in French, “Obéissance ou libertés”), the real aim of the whole business is simply the former: sheer obedience—what is indeed captured in the English word “obeisance,” derived from the French term. He writes (page 59) that, “contrary to appearances, the government of things does not place prosperity at the summit of its preoccupations; that is only a means to its fundamental goal: the inert tranquility of bodies and souls.”

To achieve that goal, the government of things plays upon human fears—two above all: the fear of crime, and the fear of illness. Under the guise of “preventing” crime and/or illness, the government of things reduces us all to un-protesting subservience. We prove always willing to do just as we’re told, as unpleasant as we may find it, because we have let ourselves be convinced that it is all for the sake of preventing crime or illness.

I will offer two examples of my own. The first is how we line up docilely in long queues in airports, take our shoes (and, if requested, even our clothes) off, subject ourselves to pat-downs and scan-ups, delays and even strip-searches—all because we are assured that otherwise we run the risk, however slight, of opening ourselves to dreaded terrorist attacks. My second example is how we readily subject ourselves to blood-tests, digital rectal examinations, breast X-rays, hormone treatments, and what not, all the tests, checks, and re-checks that our medical experts tell us are necessary to prevent such horrors as prostate or breast or colon or skin cancer, or whatever. We readily subject ourselves to all these intrusive procedures, only to be told sooner or later by the very same experts that new evidence has changed their collective expert thinking, and that we must now stop subjecting ourselves to the same evaluation procedures, in order to prevent equally undesirable outcomes. In either case, we do just as we’re told, without complaint.

We do as we’re told, whatever that may be at the moment, to prevent crime and/or illness because, as Milner writes (page 61): “Under the two figures of crime and illness, in effect one and the same fear achieves itself, that one which, according to Lucretius, gives birth to all superstition: the fear of death.” In fact, we are all so afraid of death and so subject to manipulation through that fear that we fall easy prey to the “charlatans,” as Milner appropriately calls them (on page 62), through whom the government of things seeks to universalize what amounts (page 64) to the belief in Santa Claus (Père Noël in France, and in Milner’s text)—a belief, finally, that “consists of supposing that in the last instance, whether in this world or in the next, the good are rewarded and the evil are punished.”

The government of things strives to make everyone believe in such a Santa Claus “with the same effect” that it fosters the development and universalization of techniques and procedures of evaluation: the effect of “planetary infantilization.” Furthermore:

One knows that no Santa Claus is complete without his whip. Indefectible solidarity of gentle evaluation and severe control [our American Santa making up his lists of who’s naughty and nice, then rewarding the latter with goodies and punishing the former with lumps of coal, for instance]! The child who does not act like a child [by being all innocent and obedient, sleeping all nice and snug in her bed, with visions of sugar-plums dancing away in her head] is punished; that is the rule [and we must all abide by the rules, mustn’t we?]. All discourse not conducive to infantilization will be punished by the evaluators, that is the constant. Among its effects, control also carries this one: the promise of infantilization and the initiation of transformation into a thing.

After all, the desideratum is a government not only of things, but also by things and for things (pace Lincoln—at least if we grant him the charity of thinking that’s not what he really meant all along).

In the closing paragraphs of his little book (pages 66-67), Milner issues a call for resistance and rebellion against all such pseudo-politics and pseudo-government of things, and in affirmation of a genuine politics. It is a call, quite simply, for there to be again decision.

“If the name of politics has any meaning,” Milner writes, “it resolutely opposes itself to the government of things.” In rejecting the pretense of a politics of things, real politics “supposes that the regime of generalized subordination can be put in suspense.” A politics worthy of the name can emerge only if at last an end is put to all the endless chatter about how we all need to show “respect for the law,” “respect for authority,” and the like, all of which is just code for doing what we’re told.

Such suspension of generalized subordination and end of infantilizing chatter may not last long: “Maybe only for an instant . . .” But that instant, that moment, that blink of an eye, “that’s already enough, if that instant is one of decision. What’s needed is that there again be decision.”

That’s all that’s needed, but that’s everything. As Milner writes, “politics doesn’t merit the name unless it combats the spirit of subordination. One doesn’t demand that everyone be generous, or fight for the liberties of everyone; it is quite enough if each fights for her own freedom.” The return of a genuine politics requires that we stop relinquishing our own choices to “the order of things.” It requires, instead, “[t]hat at times we decide for ourselves . . .”

There is no future of politics otherwise. Nor, without decision, is there any future of culture in any form, be it political, artistic, philosophical, or whatever. But that just means that, without decision, there really is no future at all.

*     *     *     *     *     *

I intend my next post to be the last in this current series on “Pulling Out of the Traffic: The Future of Culture.”

Pulling Out of the Traffic: The Future of Culture (2)

This is the second in a series of posts under the same general title.

*     *     *     *     *     *

In the New York Times for Thursday, June 26 of this year—which was also the day I put up the post to which this one is the sequel—there was a news-piece by Mark Mazzetti under the headline “Use of Drones for Killings Risks a War Without End, Panel Concludes in Report.” The report at issue was one set to be released later that same morning by the Stimson Center, “a nonpartisan Washington think tank.” According to Mr. Mazzetti’s opening line the gist of the report was that “[t]he Obama administration’s embrace of targeted killings using armed drones risks putting the United States on a ‘slippery slope’ into perpetual war and sets a dangerous precedent for lethal operations that other countries might adopt in the future.” Later in the article, Mr. Mazzetti writes that the bipartisan panel producing the report “reserves the bulk of its criticism for how two successive American presidents have conducted a ‘long-term killing program based on secret rationales,’ and on how too little thought has been given to what consequences might be spawned by this new way of waging war.”     For example, the panel asked, suppose that Russia were to unleash armed drones in the Ukraine to kill those they claimed to have identified as “anti-Russian terrorists” on the basis of intelligence they refused to disclose for what they asserted to be issues of national security. “In such circumstances,” the panel asks in the citation with which Mr. Mazzetti ends his piece, “how could the United States credibly condemn Russian targeted killings?”

Neither Mr. Mazzetti nor—by his account at least—the panel responsible for the Stimson Center report bothers to ask why, “in such circumstances,” the United States would want to “condemn” Russia for such “targeted killings” on such “secret rationales.” It is just taken for granted that the United States would indeed want to condemn any such action on the Russians’ part.

That is because, after all, the Russians are among the enemies the United States must defend itself against today to maintain what, under the first President Bush, used to be called “the New World Order”—the order that descended by American grace over the whole globe after the “Cold War,” which itself characterized the post-war period following the end of World War II. Today is still just another day in the current “post post-war” period that set in after the end of the Cold War—as Alain Badiou nicely put it in 2002-2003, during the second year of his three-year monthly seminar on Images of the Present Times, just recently published in France as Le Séminaire: Images du temps présent: 2001-2004 (Librairie Arthème Fayard, 2014).

It is really far too late on such a post post-war day as today to begin worrying, as the Stimson panel penning the report at issue appears to have begun worrying, about entering upon the “slippery slope” that panel espies, the one that slides so easily into “perpetual war.” For one thing, what’s called the Cold War was itself, after all, still war, as the name says. It was still war, just “in another form,” to twist a famous line from Clausewitz a bit. Cold as that war may have been, it was still but a slice of the same slope down which the whole world had been sliding in the heat of World War II, which was itself just a continuation of the slide into which the world had first swiftly slipped at the beginning of World War I.

Let us even go so far as to assume that the great, long, European “peace” that ran from the end of the Franco-Prussian War in 1870 all the way down to 1914, one hundred years ago this summer, when it was suddenly interrupted by a shot from a Serbian “terrorist” in Sarajevo, was peace of a genuine sort, and not just the calm of the proverbial gathering storm. Even under that assumption, peace has never really been restored to the world again since the guns began firing in August of that same year, 1914, if the truth is to be told. Instead, the most that has happened is that, since then, from time to time and in one place or another there has occurred a temporary, local absence of “hot” war, in the sense of a clash of armed forces or the like. The guns have just stopped firing for a while sometimes in some places—in some times and places for a longer while than in others.

So, for example, even today, a quarter of a century after the end of the post-war period and the beginning of the post post-war one, the western and west-central European nations have remained places where “peace,” in the minimal, minimizing sense of the mere absence of “active hostilities,” has prevailed. Of course, elsewhere, even elsewhere in Europe—for example, in that part of Europe that during part of the time-span at issue was Yugoslavia—plenty of active hostilities have broken out. In many such cases (including the case of what once had been Yugoslavia) those episodes have often and popularly been called “wars,” of course.

Then, too, there have been, as there still are, such varied, apparently interminable enterprises as what Lyndon Johnson labeled America’s “war on poverty,” or what Richard Nixon labeled the American “war on drugs.” In cases of that sort, it would seem to be clear that we must take talk of “war” to be no more than metaphorical, in contrast to cases such as that of, say, America’s still ongoing “war in Afghanistan,” where the word would still seem to carry its supposedly literal meaning.

Another of the wars of the latter, “literal” sort is the one that began with the American invasion of Iraq on March 20, 2003. As it turned out, that particular war broke out right in the middle of the second year of Badiou’s seminar on “images of the present times.”  In fact, the hostilities in Iraq started right in the middle of some sessions of his seminar in which Badiou happened to be addressing the whole issue of “war” today, during our “post post-war” period—as though tailor-made for his purposes.

In his session of February 26, 2003, less than a month before the start of hostilities in Iraq, Badiou had begun discussing what war has become today, in these present times. He resumed his discussion at the session of March 26—following a special session on March 12, 2003, that consisted of a public conversation between Badiou and the French theatre director, Lacanian psychoanalyst, and philosopher François Regnault. President George W. Bush had meanwhile unleashed the American invasion of Iraq.

In his session of February 26, 2003, Badiou had maintained that in the times before these present times—that is, in the post-war period, the period of the Cold War—the very distinction between war and peace had become completely blurred. Up until the end of World War II, he writes, the term war was used to mark an “exceptional” experience. War was an “exception” in three interconnected dimensions at once: “a spatial exception, a temporal exception and also a new form of community, a singular sharing, which is the sharing of the present,” that present defined as that of “the war” itself.

We might capture what Badiou is pointing to by saying that, up till the end of World War II and the start of the Cold War, war was truly a punctuating experience. That is, it was indeed an experience in relation to which it did make clear and immediate sense to all those who had in any way shared in that experience to talk of “before” and “after.” It also made sense to distinguish between “the front” and “back home.” Some things happened “at the front,” and some “back home”; some things happened “before the war,” and some only “after the war.” And war itself, whether at the front or back home, and despite the vast difference between the two, was a shared experience that brought those who shared it together in a new way.

During the Cold War, however, all that changed, and the very boundaries of war—where it was, when it was, and who shared in it—became blurred. Badiou himself uses the example of the “war on terror” (as George W. Bush, who declared that war, was wont to call it, soon accustoming us all to doing so) that is still ongoing, with no end in sight. The war on terror is no one, single war at all, Badiou points out. Instead, the term is used as a cover-all for a variety of military “interventions” of one sort or another on the part of America and—when it can muster some support from others—its allies of the occasion. Indeed, the term can be and often is easily stretched to cover not only the invasions of Afghanistan and Iraq under the second President Bush but also the Gulf War unleashed against the same Iraq under the first President Bush, even before the war on terror was officially declared—and so on, up to and including the ever-growing use of armed drones to kill America’s enemies wherever they may be lurking (even if they are Americans themselves, though so far—at least so far as we, “the people,” know—only if those targeted Americans could be caught outside the homeland).

So in our post post-war times there is an erasure of the boundary between war and peace, a sort of becoming temporally, spatially, and communally all-encompassing—we might well say a “going global”—of the general condition of war. Coupled with that globalization of the state of war there also occurs, as it were, the multiplication of wars, in the plural: a sort of dissemination of war into ever new locations involving ever new aspects of communal life. Wars just keep on popping up in more and more places, both geographically and socially: the war in Afghanistan, the war in Iraq (just recently brought back again—assuming it went away for a while—by popular demand, thanks to ISIS), the war in Syria, the wars in Sudan, Nigeria, Myanmar, Kosovo, the Ukraine, or wherever, as well as the wars against poverty, drugs, cancer, “undocumented”/“illegal” immigration, illiteracy, intolerance, or whatever.

At the same time, this globalization of war and proliferation of wars is also inseparable from what we might call war’s confinement, or even its quarantine. By that I mean the drive to insure that wars, wherever and against whatever or whomever they may be waged, not be allowed to disrupt, damage, or affect in any significant negative way, the ongoing pursuit of business as usual among those who do the war-waging. (The most egregious example is probably President George W. Bush in effect declaring it unpatriotic for American consumers not to keep on consuming liberally—including taking their vacations and driving all over lickety-split—in order to keep the American economy humming along properly while American military might was shocking and awing the world in Baghdad and the rest of Iraq.)

Thus—as Badiou puts it in his session of March 26, 2003—in league with the expansion of war into global presence and the random proliferation of wars goes a movement whereby simultaneously, among the wagers of war, “[e]verything is subordinated to a sort of essential introversion.” That is a reference above all, of course, to America, the only superpower that remained once one could no longer go back to the USSR. On the one hand, as both Badiou and the Stimson report with which I began this post indicate, the American government does not hesitate to claim the right to “intervene” anywhere in the world that it perceives its “national interests” to be at stake, no matter where that may be. It claims for itself the right to make such interventions whenever, against whomever, and by whatever means it judges to be best, and irrespective of other nations’ claims to sovereignty—even, if need be, against the wishes of the entire “international community” as a whole (assuming there really is any such thing). Yet at the same time such interventionism is coupled essentially with a growing American tendency toward “isolationism.”

This counter-intuitive but very real American conjunction of interventionism and isolationism is closely connected, as Badiou also points out, to the ongoing American attempt to come as close as possible to the ultimate goal of “zero mortality” on the American side, whenever, wherever, against whomever, and however it does conduct military interventions under the umbrella of the claimed defense of its national interests, as it perceives them, on whatever evidence it judges adequate. That is best represented, no doubt, by the aforementioned increasing American reliance on using unmanned, armed drones to strike at its enemies, a reliance that began under the Bush administration and has grown exponentially under the Obama administration.

Furthermore, the drive toward zero war-wager mortality is coupled, in turn, with another phenomenon Badiou addresses—namely, what we might call the steady escalation of sensitivity to offense. The more American power approaches what Badiou nicely calls “incommensurability,” and the nearer it comes to achieving the zero American mortality that goes with it, the less it is able to tolerate even the slightest slight, as it were. Rather, in such an affair—as he says in the session of March 26, shortly after the American attack on Iraq under the second President Bush—“where what is at stake is the representation of an unlimited power, the slightest obstacle creates a problem.” Any American deaths at all, or any remaining resistance, even “the most feeble, the worst armed, . . . the most disorganized,” is “in position to inflict damage to the imperious power that it faces.” As there is to be zero American mortality, so is there to be zero resistance (of whatever origin, including on the part of Americans themselves).

*     *     *     *     *     *

All these interlocked features belong to what we have come to call “war” today. Or rather, the situation today is really one in which the very notion of war has come to be entirely flattened out, as I would put it. War itself has ceased to be any distinctive event—anything “momentous,” properly speaking: marking out a clear division between a “before” and an “after,” such that we might even speak of the “pre-war” world and the “post-war” one. That is what Badiou means by saying that we live today in the “post post-war” period. It is a strange “period” indeed, since there is, in truth, no “point” at all to it—either in the sense of any clearly defined limit, or in the sense of any clearly defined goal, I might add—which is what I had in mind in my earlier remark that war today has ceased to be any truly “punctuating” experience.

In one of my posts quite a while ago, I wrote that, in line with contemporary Italian philosopher Giorgio Agamben’s thought about sovereignty and subjectivity, an insightful hyperbole might be to say that it had been necessary to defeat the Nazis in World War II in order that the camp-system the Nazis perfected not be confined to Nazi-occupied territory, but could go global—so the whole world could become a camp, in effect, and everyone everywhere made a camp inmate subject to being blown away by the winds of sovereignty gusting wherever they list.

Well, in the same way it might be suggested that the whole of the long period of preparation for, and then eventual outbreak and fighting of, the (“two”) World War(s), as well as the whole post-war period of Cold War that followed, was just the long ramp-up necessary for the true going global of war in our post post-war period.  That is, the whole of the unbelievably bloody 20th century, ushered in by the whole of the 19th, back at least to the French Revolution of the end of the 18th, can be seen as nothing but the dawning of the new, ever-recurring day of our present post post-war, unpunctuated period.

Indeed, war today has become so enveloping spatially, temporally, and communally, all three, that it is no longer even perceivable as such, except and unless it breaks out in some ripple of resistance somewhere, by some inexplicable means. Whenever and wherever and from whomever, if anywhere any-when by anyone, the power into whose hands the waging of war has been delivered suffers such an offense against it, no matter how slight the slight, then the only conceivably appropriate response is, as the old, post-war saying had it, to “nuke ‘em.”

Furthermore, since offenses are in the feelings of the offended, none of us, “the people,” has any assurance at any time that we will not, even altogether without our knowingly having had any such intent, be found to have done something, God knows what, to offend. If we do, then we may also come to be among those getting nuked (or at least deserving to be)—probably by an armed drone (maybe one pretending to be delivering us our latest order).

*     *     *     *     *     *

By now, even the most patient among my readers may be wondering what this whole post, devoted as it is to discussion of the meaning of “war” today, has to do with “the future of culture,” which is supposed to be the unifying topic in the entire current series of posts of which this one is supposed to be the second. That will only become evident as I proceed with the series—though perhaps it will not become fully evident until the whole series draws together at its close. At any rate, I will be continuing the series in my next post.

Killing to Heal: Robert J. Lifton on the Nazi Doctors, #4


This is the fourth in my series of posts of philosophical journal entries I wrote last fall concerning Robert J. Lifton’s The Nazi Doctors.  As was true for the journal entry in my immediately previous post, the first entry below begins with a remark about Alain Badiou, before shifting to Lifton.  The two entries below were written at the Benedictine Monastery of Christ in the Desert, near Abiquiu, New Mexico, where I have been making personal retreats for years.


Tuesday, October 28, 2008–at Christ in the Desert

During Vespers here yesterday, it struck me that the crucifixion and resurrection of Christ could be taken in the sense I’ve been exploring a bit in recent entries on the “reality” of what is experienced–or, better, on “reality,” period.  That is, the resurrection could be taken to be the revelation to the apostles and then to generations of the faithful that suffering, destitution, and pain are not “ultimate reality,” any more than, for Badiou [see my immediately preceding post], “the sad passions” such as “death and depression” are “loyal feelings,” or “licit passion” (so they are il-licit!).  The resurrection–which, on Badiou’s own account, is the sole truth [which Badiou, however, insists did not “really” happen] that makes of the human animal Saul, the subject Paul, with claim to universality–would then be the event of just that truth, at the very heart of the crucifixion itself, dispelling the latter as “a dream one wakes from,” to borrow [again] from the Psalms.


Lifton, The Nazi Doctors, on Dr. Ernst B., the Auschwitz doctor who was able to help and rescue many, to become, in the words of one survivor, used as the title for this chapter in Lifton’s book, “a human being in an SS uniform”–p. 333: 

An important part of B.’s post-Auschwitz self and worldview is his unfinished business with Auschwitz.  His conflicting needs are both to continue to explore his Auschwitz experience and to avoid coming to grips with its moral significance.  His insistence that Auschwitz was not understandable serves the psychological function of rejecting any coherent explanation or narrative for the events in which he was involved.  He thus remains stuck in an odd post-traumatic pattern:  unable either to absorb (by finding narrative and meaning) or to free himself from Auschwitz images.

But isn’t that, indeed, how it is with all trauma, finally?  One cannot get past it!  One cannot “free” oneself from its “images” (and note how “finding narrative and meaning” for any trauma is just a way to “free” oneself from it–or, more accurately, to bury and avoid it).  (Lifton himself knows this, as his comments on p. 13, which I cite in an [earlier] entry, show, to give one good example.)  Isn’t that what [Eric] Santner [in his Psychotheology of Everyday Life], for example, distilled from his reading of Freud with Rosenzweig?  And doesn’t Santner’s analysis point to a “recovery” from trauma which respects it, so to speak, by neither explaining nor otherwise avoiding it, in its very inexplicability and one’s own “stuckness” on it?

Related:  Lifton’s book came out before [Claude] Lanzmann’s [film] Shoah, which appeared a few years later, with its argument that any attempt to make Auschwitz “understandable” is a blasphemy, tantamount to compounding the brutality of the camps and the “Final Solution.”  That would complicate Lifton’s picture here, and I’m curious what he thought of Lanzmann’s film and assertion.

There may be some advantage in distinguishing two different places from and in which one can get traumatically “stuck.”  One such place would be that of the perpetrators, to which in some sense Ernst B. continues to belong despite his attempts at (relative) “humanity” in his role there (as Lifton correctly insists).  From that place, as Lifton suggests in the quote I began with, there is a definite self-serving (by way of self-exculpating) dimension of “payoff” that comes from denying the explicability of Auschwitz.  But precisely for that reason, the specific nature of the stuckness at/from this locus is basically an exploitation of the very inexplicability at issue. 

In contrast, there is the place of the victim, where no such  exploitation occurs in the acknowledgement–here, genuine; when exploitative, disingenuous–of the inexplicability.  And it is here, in this place, if anywhere, that any “resurrection” must occur. (As, perhaps, it does in D. M. Thomas’s The White Hotel?  I’m not sure:  Need to look at that novel again, maybe.)


Wednesday, October 29, 2008–at Christ in the Desert

Yesterday, apropos of Lifton, I forgot to note this thought that came to me when reading the passage I cited then:

It is as if Auschwitz mirrors an event of truth, most especially in its “excessiveness,” its irreducibility to any explanation.  Because it (Auschwitz–and other [pseudo-?]events like it) mimics truth in that way, the illusion of it–specifically, its being “how things really are”–can only be dispelled by the event of a genuine truth, one that dismisses the illusion as a phantom.

There is also, perhaps, a sense in which such a point of mocking mimicry of a truth-event opens, despite its mimicking intentions, a site for the striking of truth.

Killing to Heal: Robert J. Lifton on the Nazi Doctors, #3


This is the third in my series of posts with journal entries I wrote last fall, on the dates indicated, concerning Robert J. Lifton’s The Nazi Doctors.  Today’s entry begins with some reflections on a work by Alain Badiou, which I soon connect up with my continued reflections on Lifton’s study of “medicalized killing and the psychology of genocide,” the subtitle of his book.


Sunday, October 26, 2008

Badiou, Petit panthéon portatif [Pocket Pantheon] (Paris:  La Fabrique éditions, 2008), “Ouverture,” pp. 7-8 [my translation]:

If philosophy serves for something, it is to remove the chalice of sad passions [in the preceding sentence he has said that he holds “that death should not interest us, nor depression”], to teach us that pity is not a loyal feeling, nor complaint a reason to have reason, nor the victim that from which we should start to think.  On the one hand, as the Platonic gesture establishes once and for all, it is from truth, declined as necessary as beauty or the good, that every licit passion originates and every creation of universal aim.  On the other hand, as Rousseau knew, the human animal is essentially good, and when he is not, it is by some exterior cause that constrains him, a cause that must be detected, combatted, and destroyed as far as possible, without the least hesitation.

It seems to me that Badiou could be used here as a commentary on the following, from Lifton’s The Nazi Doctors, p. 238, concerning the “prisoner doctors” at Auschwitz: 

As Henri Q. explained, “We suffered and [acted] within the limits of the possible. . . . Doctors did provide some comfort, I believe.  There was the comfort for the patient, the fact that he was not alone, that someone understood and was trying to help to do something for him–and that was already a lot. . . . We were a group, not just the [individual] doctors of our block.”  He could then conclude . . . that he and his friends “remained doctors . . . in spite of everything.”

Helping children could greatly contribute to the prisoner doctors’ struggle to maintain a healing identity.  Dr. Henri Q., for instance, told of the impact of a nine-year-old boy from a Jewish ghetto in Poland, who [was helped to survive the war and Auschwitz]. . . . He spoke even more intensely of a still younger, Russian child (“a rare thing in the camp”) whom he once took to the infirmary:  “I walked in front of all the blocks, and you could feel all the men, ten thousand men, who were looking at this child.  I was very proud to walk with him. . . . as if I were walking with the president of the Republic.  There is only one president and there was only one child.”

Viewed through the lens of Badiou’s comment, such prisoner doctors at Auschwitz proved themselves to be philosophers.  And the philosophical reality was revealed to them–in and as their own form,  described by Dr. Henri Q., of “resistance.”  Philosophically, that reality was the presidency of that simple child.

Zygmunt Bauman, Modernity, and the Holocaust


Today and in my next post, I will be sharing entries from my philosophical journal that pertain to sociologist Zygmunt Bauman’s Modernity and the Holocaust.  The entries were first written on the dates indicated below.


Thursday, October 16, 2008

Zygmunt Bauman, Modernity and the Holocaust (Ithaca, New York:  Cornell University Press, 1992), “Appendix:  Social Manipulation of Morality, Moralizing Actors, and Adiaphorizing Action” (originally a lecture given in 1990), [gives a good, succinct statement concerning Emmanuel Levinas’s moral/ethical philosophy on] p. 214:

Moral behavior, as . . . Levinas tells us, is triggered off by the mere presence of the Other as a face, that is, as an authority without force.  The Other demands without threatening to punish or promising reward; his demand is without sanction.  The Other cannot do anything; it is precisely his weakness that exposes my strength, my ability to act, as responsibility.  Moral action is whatever follows that responsibility.  Unlike the action triggered off by fear of sanctions or promise of rewards, it  does not bring success or help survival.  As, purposeless, it  escapes all possibility of heteronomous legislation or rational arguments, it . . . elides the judgment of “rational interest” and advice of calculated self-preservation, those twin bridges of the world of . . . dependence and heteronomy.


Friday, October 17, 2008

Bauman, back to main text, “Preface,” p. xii:

The overall effect [of standard scholarly accounts of the Holocaust in terms of its causes, and making it either a uniquely “Jewish” or a uniquely “German” affair] is, paradoxically, pulling the sting out of the Holocaust memory.  The message which the Holocaust contains about how we live today–about the quality of the institutions on which we rely for  our safety, about the validity of the criteria with which we measure the propriety of our own conduct and the patterns of interaction we accept and consider normal–is silenced, not listened to, and remains undelivered.  If unravelled by the specialists and discussed inside the conference circuit, it is  hardly ever heard elsewhere and remains a mystery for all outsiders.  It has not entered as yet (at any rate in a serious way) contemporary consciousness.  Worse still, it has not as yet affected contemporary practice.

Like 9/11, the Holocaust never happened–at least not yet.  But, also like 9/11, it will happen?

Has the Christian Crucifixion happened yet?  The Resurrection?

Aren’t the same issues involved in all these cases?

Might the example of ongoing Christian conversion and liturgical time/community provide a clue here?


Bauman, pp. 6ff, articulates the idea that the Holocaust is neither an aberration of modernity, nor its “truth,” but is, rather, a definitive aspect or potentiality within modernity, one that can be actualized–and will be actualized–under certain circumstances (just as, under pressure of calamity, both the worst and the best in individuals can be actualized–the one in one, the other in the other–and where it cannot be predicted which will be which in advance).


Pp. 17-18, Bauman: 

This is not to suggest that the incidence of the Holocaust was determined by modern bureaucracy or the culture of instrumental rationality it epitomizes; much less still, that modern bureaucracy must result in Holocaust-style phenomena.  I do suggest, however, that the rules of instrumental rationality are singularly incapable of preventing such phenomena; that there is nothing in those rules which disqualifies the Holocaust-style methods of “social engineering” as improper or, indeed, the actions they served as irrational.  I suggest, further, that the bureaucratic culture which prompts us to view society as an object of administration, as a collection of so many problems to be solved, as “nature” to be “controlled,” “mastered” and “improved” or “remade,” as a legitimate target for “social engineering”, and in general a garden to be designed and kept in the planned shape by force (the gardening posture divides vegetation into “cultured plants” to be taken care of, and weeds to be exterminated), was the very atmosphere in which the idea of the Holocaust could be conceived, slowly yet consistently developed, and brought to its conclusion.  And I also suggest that it was the spirit of instrumental rationality, and its modern, bureaucratic form of institutionalization, which had made the Holocaust-style solutions not only possible, but eminently “reasonable”–and increased the probability of their choice.


In Le Monde on-line this morning, there was a piece by [Alain] Badiou on the current global finance/credit crisis.  Basically, his argument was that the “real” is nothing of the market, but is, rather, the misery of the excluded masses, excluded by the “barbarity” that is capitalism, as Marx already saw 160 years ago.  The only real solution/response to the present crisis of “capitalism-parliamentarism” (and “democracy”) is,  in effect, the parallel, autonomous coming together of the excluded themselves.  Therein lies an altogether new politics–or, rather, the point of breakthrough for the genuinely political as such, no longer reducible to, or peripheral to, the economy.

[Naomi] Klein’s closing pages [in The Shock Doctrine–see my recent posts on her work] about disaster victims taking recovery in their own hands, despite and independent of government, points to the same conclusion.

So do my own thoughts on AA and the reply to addiction [it represents].

It is all  a matter of the dispelling of the illusion of the reality of the capitalist world–a waking from the dream, to dismiss the vanishing phantoms.

The irrelevance of the economy, and everything tied to it (e.g., “electoral politics”).


Saturday, October 18, 2008

Bauman, p. 64: 

Heterophobia [which he wants sharply to distinguish from racism] seems to be a focused manifestation of a still wider phenomenon of anxiety aroused by the feeling that one has no control over the situation, and that thus one can neither influence its development, nor foresee the consequences of one’s action.  Heterophobia . . . is a fairly common phenomenon at all times and more common still in an age of modernity, when occasions for the “no control” experience become more frequent, and their interpretation in terms of the obtrusive interference by alien human groups becomes more plausible.

Borrowing the distinction [Gilles] Deleuze makes focal in his reading of Nietzsche, I’d say “heterophobia” is a reactive formation, which fits with Bauman’s characterization, in this part of his book, of ressentiment.  In contrast, I’d say addiction [for one] is an active formation in response to the same ” ‘no control’ experience.”

Trauma Come Home to Us on 9/11–#2 of 4


This is the second of four posts with entries from my philosophical journal concerning some of the pieces in Trauma at Home:  After 9/11, edited by Judith Greenberg (University of Nebraska Press, 2003).  What I wrote in my last post also applies to this one, and to the remaining two posts in the series:  “the following entry, under the date I originally wrote it, consists of little more than my selection of passages from some of the essays in that collection.  However, even without much explicit commentary of my own, the fact that I singled out just these passages says something of importance about how I am trying to approach the notion of trauma.”

Before returning to my remarks on Trauma at Home, however, I want to reproduce four brief and closely related paragraphs from my journal, where they occur between the entry from my preceding post and the entry to follow below.  I originally wrote the first paragraph as my only entry on  July 28, 2008.  I then wrote the remaining three the next time I picked up my journal, on July 29.

[Michel] Henry, L’essence de la manifestation [Paris:  PUF, 1963], p. 561:  The sort of “opposition” of what is absolutely different is not dialectical unity, under a common essence, but “indifference.”  (So do the visible and the invisible stand indifferent to one another, in his  account–each remaining in itself and “ignoring” and not able to “know” the other.)

It is in terms of such “indifference” of two things “opposed” to one another by an opposition of absolute difference that (p. 563) Henry reads Christ’s remark about rendering Caesar’s to Caesar, God’s to God, and also the oppositions that structure the Sermon on the Mount.

[Alain] Badiou’s use [in his Saint Paul:  The Foundations of Universalism, translated by Ray Brassier, Stanford University Press, 2003] of “indifference” to characterize the message of St. Paul [concerning such differences as Greek and Jew] is strikingly similar, though Badiou makes no reference [as I recall] to Henry on that point, and may not have been influenced by him.

It is, at any rate, a valuable insight into “indifference” and into freedom.

It is also, I would argue, by just such “indifference” to differences–as, for example, in Paul, who says that “in Christ” there is no Jew or Greek, slave or free, male or female–that what I have been calling “remnant communities” stand in relation to the differences between the members of those communities, differences that are often all too important in the dominant, “non-remnant” societies wherein the given remnant communities occur.  As I have already explored in some earlier posts (see the table of contents for this blog), genuine communities arising in the process of recovery from trauma are all prime instances of what I call “remnant” communities.  Accordingly, this important notion of “indifference,” as articulated by Henry and then later again by Badiou, is anything but indifferent for the study of trauma and, especially, recovery from it.

Now, to return to Trauma at Home, what follows is the remainder of my journal entry for the date at issue.


Tuesday, July 29, 2008

In Trauma at Home–Ann Cvetkovich, “Trauma Ongoing,” p. 65:

It is important that the archive of September 11 becomes something more than the reification of the traumatic moment, something more than an endless videoloop or repeated image of the planes hitting the buildings.  Oral history can help break out of that potentially obsessive focus because it documents the process of people making meaning out of a rift in their lives [!].  It is too soon to tell what exhibition strategies will work best, but I hope  to see oral histories combined with other media to facilitate public forums about how September 11 continues to affect the present.

Feminists have focused on how trauma is linked to everyday life, emphasizing that it takes root [i.e., is traumatic!] because it is connected to ongoing violence and systemic structures of oppression [cf. historical/structural trauma]. . . . I resist the idea that after September 11 everything has changed and nothing will be the same again.  The need to connect cataclysmic moments to our everyday life persists; I’m interested not just in what happened one day in September but also in how shock is absorbed into the textures of our ongoing lives.


E. Ann Kaplan (professor of English and comparative literature at SUNY, [who was a] child in England during WW II), “A Camera and a Catastrophe:  Reflections on Trauma and the Twin Towers,” pp. 98-99:

The media [right after 9/11] aided the attempt to present a united front.  But this was a fiction–a construction of a consensus in a Eurocentric and largely masculine form.  On the streets, I experienced the multiple spontaneous activities from multiple perspectives, genders, races, and religions, or nonreligions.  Things were not shaped for a specific effect or apparently controlled by one entity.  By contrast with what I witnessed locally, the male leaders on TV presented a stiff, rigid, controlling, and increasingly revengeful response–a response I only gradually understood as about [American] humiliation [at 9/11].  While a “disciplining” of response was at work through the media, on the streets something fluid, personal, and varied was taking place.

P. 100:   Contra Žižek,

why must confrontational, thorough, and critical political discourse be opposed to a discourse of empathy for those who  suffer, for those who have lost loved ones, for pain, trauma, hurt?  Is it really impossible to have solid, left-leaning political analysis, highly critical of the United States’ actions, in the past and today, and yet welcome public discourse about trauma, posttraumatic stress disorder, vicarious traumatization, and ways to help  those suffering those disorders?

She has prepared for this by a telling personal anecdote on the preceding page (99): 

It reminds me of a colleague who, when I arrived at work at the university on September 11 about three hours after the attacks, said:  “What about Hiroshima?  Didn’t we do that?”  Yes, indeed, but to evoke Hiroshima at this moment indicated an intellectualizing of present, highly emotional happenings, a distancing and displacement characteristic of many political scholars.  As leftists and political people, can’t we also live in the present and relate to emotions?


Susannah Radstone (teaches cultural theory and film studies at U. of East London), “The War of the Fathers:  Trauma, Fantasy, and September 11,” seems to think “trauma theory” conflicts with emphasizing the role of fantasy in such events–or the processing of  them–as 9/11.  So, for example, p. 120, she says her view is that

trauma and fantasy need not be sharply counterpoised.  An event may prove traumatic, indeed, not because of its inherently shocking nature but due to the unbearable or forbidden fantasies that it prompts.  Or, conversely, an event’s traumatic impact may be linked to its puncturing of a fantasy that had previously sustained a sense of identity–national as well as individual.


In the same way–which strikes me as [slightly] off-key–she writes at the start of her essay (p. 115) that according to “current understandings of trauma,” “experiences that elude sense making and the assignment of meaning” are traumatic, and she sees this for some reason as contesting (her word) “earlier (Freudian) psychoanalytic understandings” that emphasize “the part played by the conflict that arises from unconscious fantasy–perhaps, but not necessarily, prompted by an event–in the emergence of symptoms.”

Remnant Communities and the Trauma of Sovereignty


Today’s brief entry from my philosophical journal, an entry which I first wrote last May, concerns once again the thought  of Jenny Edkins, whose field is international relations, and about whom I have written before in this blog.  What is at issue in the entry below, as it is at issue in the essay by Edkins I address, is a question raised by the work of contemporary Italian philosopher Giorgio Agamben on the concept of sovereignty.  Agamben argues that modern sovereignty grows from a seed planted long before modernity by the ancient Greeks when they distinguished–in Aristotle’s thought especially–between bios, or life in the fully human sense, such as can be captured in “biographies,” which literally means “life-writings,” on the one hand, and zoe, or life in the purely “zoological” sense, on the other.  

Following Carl Schmitt, the right-wing political theorist who eventually used his thought to provide the Nazi state with legal justification, Agamben defines modern sovereignty as the power that draws the line between supposedly fully human life and what Agamben calls “bare life.”  Agamben goes on to argue that the emergence under the Nazis of the concentration camp system, above all the death camps where the Nazis carried out the “extermination” of the Jews, was the culmination and flowering of sovereignty, so defined.  What is more, he argues that insofar as everyone today is subject to such sovereignty, everyone today is at least potentially–by virtue of the decisions of whoever holds the position of sovereign over one, the position of being “the decider,” as George W. Bush notoriously identified himself in his role as President–an inmate in “the camps.”

The question that Professor Edkins raises in the article I am considering in the entry below is, in short, that of the form that resistance to such sovereignty can take.  If resistance to sovereignty, as Agamben analyzes sovereignty, is still possible at all, then just how might we make our resistance effective?  Or are we in fact doomed henceforth to trying merely to continue surviving, eking out as best we can one day at a time the “bare life,” as Agamben names it,  to which sovereignty reduces life in “Auschwitz,” “the camps”?

My own very brief remarks interspersed below within and between citations from Edkins point toward what I have come to call “remnant communities” as places where such effective resistance may occur.  My selection of that name is indebted to Agamben, Franz Rosenzweig, and German and Jewish studies scholar Eric L. Santner, each of whom makes use of the term remnant in a way that has become important for me in my own thought.

One of the key books in which Agamben himself works out his thought of sovereignty and “bare life” is tellingly called Remnants of Auschwitz.  Not only were the inmates of Auschwitz remnants–cast-off by-products, as it were–of the Nazi state, but all we have for testimony from those inmates themselves are remnants of what would constitute full testimony to the horror in which so many perished, a testimony that could only be made by those who so perished themselves, but who in being exterminated were denied any possibility of bearing their own witness.

Then in The Psychotheology of Everyday Life:  Reflections on Freud and Rosenzweig (University of Chicago Press, 2001) Santner makes central use of the idea of the “remnant,” the “useless,” “good for nothing” cast-off remainder of the processes wherein we establish our “identity.”   It is only as such remnants, or at that level of ourselves where each of us is just such a good-for-nothing, ready-to-be-discarded remnant, that we can be encountered in our pure singularity, our “ipseity” as Santner calls it, to distinguish it from our “identity,” which is always a matter of social construction and what he calls “symbolic investiture” (for example, such investitures as establish my own  identity as a philosophy professor, father, husband, etc.). 

Santner’s use of the idea of the remnant is itself based in part on Agamben’s just-mentioned text.  Even more crucially, however, Santner’s thought and terminology are grounded in Franz Rosenzweig’s The Star of Redemption.  In that work Rosenzweig traces what he argues is an essential connection between Judaism and the idea of “the remnant.”  For him, the Jewish diaspora community is just such a “remnant community” as I have in mind:  a community alongside and within the dominant–we can say the “sovereign”–society, one which does not set itself up as any alternative to that society, any competitor for sovereign power, but which instead lives out its own rich life as a community without reference, we might say, to that environing, dominant, sovereign society–outside its laws, in that sense–though the individual members of that remnant community continue to play their various roles in that same sovereign society.

Another model of a “remnant” community is provided by Benedictine monasticism, which is an insistently “cenobitic” form of monasticism–that is, the monastic life lived out in communities of monks, which is to say communities of solitaries, who live “alone together,” to use a formulation I find helpful.  Each Benedictine monastic community lives out its communal life in a certain, definite “withdrawal” from “the world,” yet a withdrawal in which the monastery–in the sense of the monastic community as such–always remains connected to, and interactive with, that same “world” in various complex ways.  The monastery is a community “in the world, but not of the world,” as one common formulation has it.  It is a place where the irrelevancy of what in medieval Christian discourse is called “the world” is made known, simply by the fact of communal  life being lived at such a place “outside” yet “in” that same “world.”

Yet a third example of what I would call  a “remnant community,” providing yet a third model of the formation and continuance of such a community, would be a “Twelve Step fellowship,” such as Alcoholics Anonymous, as I suggest at the end of the entry below.  Interested readers might wish to refer back to some of my earlier posts, in which I offer further remarks, all relevant to the topic of today’s post, about AA and other such fellowships.

 This is a topic that, in one way or another, will occupy me in many of the entries I will be posting here in the future.


Monday, May 19, 2008

Jenny Edkins, “Whatever Politics,” in Matthew Calarco and Steven DeCaroli, editors, Giorgio Agamben:  Sovereignty and Life (Stanford University Press, 2007), pp. 70-91.  Page 73:  “Sovereign distinctions [especially between bios and zoe] do not hold; to refuse them, and to demonstrate being in common, is  not to make a new move but only, yet most importantly, to embrace that insight [namely, the insight that such sovereign distinctions do not hold], and to call sovereignty’s bluff.”  Then, page 76:  “Sovereign power is happy to negotiate the boundaries of the distinctions that it makes; what it could  not tolerate would be the refusal to  make any distinctions of this sort.”

Compare [Alain] Badiou’s summation of the truth that comes to pass/takes place as the Spartacus uprising [in ancient Rome–which Badiou discusses in Logiques des mondes]:  [the simple but incontrovertible truth–incontrovertible even by the eventual rout of Spartacus’s troops by the Roman legions sent against them, and the crucifixion of Spartacus and his followers–that, as Spartacus was just the first among the slaves of Rome to realize,] “We can go home.”

Compare, also, Yossarian in [Joseph Heller’s novel] Catch-22 [who finally just does “go home,” which in his case means to check out of the insanity of the World War II Allied war enterprise by deserting to a neutral country].