Pulling Out of the Traffic: The Future of Culture (3)

This is the third in a series of posts under the same general title.

*     *     *     *     *     *

Getting things to run smoothly, working to achieve a lack of resistance, this is the antithesis of art’s essence, it is the antithesis of wisdom, which is based on restricting or being restricted. So the question is: what do you choose? Movement, which is close to life, or the area beyond movement, which is where art is located, but also, in a certain sense death?

                                    –Karl Ove Knausgaard, My Struggle. Book Two: A Man in Love*


Just where is art “located”?

That interrogative sentence may be grammatically well formed, but the question it tries to pose may not be. One thing (one of many, really) on which Alain Badiou and Martin Heidegger are in agreement is that it is more nearly art that does the locating, rather than itself being located. The work of art is not, properly regarded, at some place, according to them both. Rather, the work of art is itself a place–and a place-ment—in the strongest sense.

Plato somewhere mentions the common case of the child to whom some adult holds out two closed hands, in each of which is a desirable gift, and asks the child to choose. Any self-respecting child in such a situation will, of course, want both. Plato uses that as a metaphor for the philosopher. The philosopher, he says, is the child who, made such an offer of two good things and told to choose between them, always begs for both.

As deficient a philosopher as I may be in other regards, I am still a good enough one to meet at least that particular Platonic standard—which I would like to call the standard of the essential childishness** of philosophy. Just so, in the present case I want to have both my Knausgaard and my Badiou (and my Heidegger!) too.

In the passage I quoted above, Knausgaard speaks of art itself being located somewhere. He locates it in a certain “area.” That is the area—or to show, as usual, my own Heideggerian underwear (“foundation garments”), what might better be called the region—“beyond movement.” That same area/region is also where one is to find, Knausgaard says, “death,” at least “in a certain sense.” That last phrase—at any rate, in the English translation—can be read, I want childishly to suggest, to apply both to a certain sense of death and to a certain sense of location. The death in the vicinity of which art is located is not just any old sort of death, but only a certain sort of death. At the same time, art and death themselves can be located in one another’s vicinity not in just any old sort of location (or any old sort of vicinity, for that matter), but only in a certain sort of location.

The certain sort of place or location in which a certain sort of death or end of life lies near to art is like no place at all in the entire world (which itself is only in a certain sense world) of our day (which is only in a certain sense day). In our globally collective present times—which are both present and times only in a certain sense—neither art nor death can be located at all. In our present times, there is neither art nor death.

*     *     *     *     *     *

Nor is the area, region, or realm in which art and death come into one another’s vicinity any place we can reach from our own certain sort of day’s certain sort of area, region, or realm, even though the latter is all-inclusive, both geographically and socially speaking—all-inclusive, that is, with regard both to such places as the states of Afghanistan, Iraq, Syria, the Sudan, the Ukraine, etc., and with regard to such places as the states of poverty, intolerance, illegal/undocumented-immigrant-hood, etc. (to allude to some remarks I made in my preceding post).

The place where art and death draw near to one another?

You can’t get there from here.

The only places you can get to from here, that is, from where we are today, in these present times, are such places as points on the globe. Or, we could also add, points “off-globe,” in interstellar space.

Most of us, of course, will never be able to get to any extra-global places from here, since most of us are nowhere near rich enough to pay for a seat on one of the commercially driven spaceships now being readied for a very few of us to go to some such places. But that doesn’t matter. It doesn’t affect the fact that such places can still be reached from here by some of us, even if not by 99.9% of us. Nor does it affect the status of all of us actual or only logically possible potential travelers universally, insofar as we all without exception count as citizens of democracies, actual or even only logically possible, where everyone is equal.

That’s because however rich or however poor any of us may be, the only places any of us at all can get to from where we are now are, anyway, places such that it really doesn’t make any difference whether we are there or somewhere else. They are all alike places the place of which doesn’t matter. After all, if you’ve seen one McDonald’s, you’ve seen them all.

That indifference of the difference in go-to-able places stems from the underlying basic fact that the only sense of place for which our world today makes any room—the only sort of place that has any place in such a place—is that of what can be placed at some point in the grid of spatial coordinates that applies indifferently to any and every place alike in the one and only, all-inclusive cosmic space of physics and the other sciences (which are never guilty of childishness, by the way).

Thus, in the world of our present times today, what in my preceding post I called the “flattening” that transpires with the concept of war also transpires with the concept of space. Indeed, that same flattening also transpires with regard to the concept of time, as it does yet again with the concept of a person and even, finally, with the concept of an event.

Badiou is good on that.  So is Heidegger. Let us choose both.

*     *     *     *     *     *

In the third session—held on December 4, 2002—of his three-year seminar on “images of the present times,” Badiou begins by addressing how the movement of reactionary endeavor is always toward “the installation of the idea that the world is not transformable, that the world is as it is, and that it’s fitting to accommodate oneself to it.” That can take the form either of presenting the world as never changing, or of presenting it as ever changing—that is, as changing constantly. In the former case all effort to change the world is futile. In the latter case, for fear of falling behind one cannot ever dare even to pause long enough to take stock of what things even can be changed, let alone should.  So either, like Zeno’s arrow, one can never take flight at all, no matter how fast one flies. Or, like a certain Rabbit, one must always just keep on running, running, running . . . to go nowhere.

All that perfectly fits what Badiou goes on to call the “general tendency of the present times,” which is “manifestly the dissolution of the present in a general regime that is that of communication, [in the sense, standard today,] of circulation”—just as money and the merchandise it is used to buy must be kept constantly in circulation to keep things running smoothly everywhere today.

Thus, the “general tendency” at issue is toward the reduction of time as such to a never-present present. At issue is the reduction of the “present” (itself taken to define time, as Aristotle said so long ago) to what is, in effect, no particular time, but just any old time. In such times as ours, any given time is interchangeable with any other–just like the money that, as the old cliché rightly has it, time today is. Time today is reduced to what, in effect, has no particular time—“has no particular time,” both in the sense that no moment of today’s time differs essentially from any other, and in the sense that time today grants or gives no time, no time to pause and draw aside, no time one can “bide.”

Conjoined with that reduction of time to what has no particular time, goes the reduction of place—Badiou goes on to observe a little later in the same session—to what has no particular place. He makes that observation in the context, specifically, of a discussion of Rimbaud and the colonial enterprise of Rimbaud’s day, but what he says applies no less to every day since Rimbaud’s day, even if the nature and status of the imperial enterprise itself has undergone considerable cosmetic do-over in the meantime.

“The imperial abstraction,” Badiou remarks, “is to transform the here [ici] into an it doesn’t matter where [en un n’importe où].” He gives an explanatory example clear to everyone (it doesn’t matter who): “That’s a feeling one experiences in the most anguishing manner when one is in an airport: you are sure you’re in an airport, but you could just as well be in Rio de Janeiro as in Paris or in Singapore. The airport is the absolute doesn’t matter where.” Just a bit later he adds: “The contemporary savagery, the contemporary barbarism, is a barbarism that treats place [lieu] as if it is not a place. That treats place as if that place was nothing but a point in space.”

In contrast, for Badiou, the work of art is itself a truly singular place, not just any old place at all. Indeed, art as such is one of his standard four ways in which truth itself takes place. The other three ways, to repeat what I’ve said in earlier posts, are science, love, and politics. All four are, as it were, place-makers for truth. They are truth’s own em-place-ments, literally speaking.

In more than one place of his own, Heidegger says the same thing, at least about art, place, and space. It’s become a sort of Heideggerian commonplace about place, in fact. Nevertheless, I will briefly cite two places he says such things. The first is his lecture “On the Origin of the Work of Art,” first delivered in 1935. In that lecture he says emphatically that works of art as such—which means insofar as they are still “at work as art-works,” we might say—are not things that are located at certain places (such as in museums where paintings are hung, like corpses on nooses, or in the cities where the ruins of dead works of architecture can be visited still today, like the bones of dead ancestors in reliquaries). Rather, works of art are themselves places—places where whatever does take place, from people to rivers and gods to crickets, is allowed to take place. Thus, to use just one of Heidegger’s own examples from that lecture, the battle in ancient Greece between the old Minoan gods and the new Attic ones of the northern invaders, who came to define the very concept of “the Greeks” for us, is itself something that takes place in Sophocles’ Antigone, rather than being something that once took place somewhere else, then just got “represented” in Sophocles’ tragic drama. The Antigone itself is the battlefield, and the fighting of the battle takes place on that very battlefield.

My second Heideggerian reference is to something he wrote more than thirty years later, a short piece from his later works called “Art and Space” (“Die Kunst und der Raum”), which was originally published only in 1969, just seven years before Heidegger’s death. In it, Heidegger explicitly draws a strong, sharp contrast between the cosmic, place-less space of the physicists, on the one hand, and, on the other, the place-scaped space, we might well call it, of the artwork—specifically, in this essay, the work of sculpture, which is itself a matter of spacing as the literal em-bodi-ment, the making into a body, of truth. As one can easily see, at that point in making his point about spatial points, Heidegger may as well be Badiou. They both occupy the same space—which tells you the space they share is no longer Greek, by the way, or at least no longer Aristotelian.

*     *     *     *     *     *

In the same session of December 4, 2002, already mentioned, Badiou remarks that Rimbaud, in poems written during his time as an enlistee in the Dutch colonial army, referred to himself as a “conscript of good will,” which is to say one who conducted himself as befits a willing conscript. Badiou says that Rimbaud’s usage of the expression good will is “exact,” in the strictly Kantian sense of good will, which Badiou also labels “the good democratic will.” That is, Rimbaud is a “conscript of good will” insofar as he is a willing “soldier of the rights of man, of civilization,” as Badiou puts it, and willing to help carry those rights and that civilization to those who do not yet share in its blessings. (Just the kind of conscript of good will George W. Bush still needed well over a century later!)

As Badiou notes, Rimbaud also coupled being such a good democratic conscript with serving what Rimbaud himself called a “ferocious philosophy.” According to Badiou, that means “a philosophy of aggression and of the in-differentiation of place,” that is, of the washing out of all differentiation between one place and another.

One should surely add: between one person and another, too!  After all, everyone (no matter who), everywhere (no matter where), at every time (no matter when) is entitled to the “universal rights of man” (please forgive the sexist language of the standard Enlightenment phrase). Furthermore, those rights boil down, essentially, to being allowed to vote (no matter for whom) in free and open elections, and being free to live out one’s life however one chooses (no matter how, so long as it doesn’t hurt anyone else).

Who cares if the elections we vote in and the lives we live out are all equally meaningless? All that finally matters is that all our votes get counted equally, and all our lives lived equally out.

*     *     *     *     *     *

Once again, Heidegger also points to such a flattening out of the notion of the human, to go with the flattening out of the notions of time and space. And once again he does so in more than one place.  This time, I will cite just one brief passage. It is one I read just recently, alongside Badiou’s seminar. The passage in question is from “Zu Ereignis III,” one of the six manuscripts about the “thinking” (Denken) of “the event” (das Ereignis) recently published together as Zum Ereignis Denken (volumes 73.1 & 73.2 of Heidegger’s Gesamtausgabe). This third of the six manuscripts is from the same Nazi decade as “On the Origin of the Work of Art,” cited above.

In ¶58 of “Zu Ereignis III” (GA 73.1, page 375) Heidegger discusses “the singularity [Einzigkeit] of Dasein,” which is to say the singularity of that being each of us human beings is given and called to be—however many of us may fail at that task, however often. Such singularity, he writes, is “precisely not individuality [Einzelnheit]—but also not the empty generality of what’s common.”

The terminology—which I have rendered as “singularity” and “individuality”—is not the crucial thing. What matters is the distinction itself, the one being marked by that terminology. That is the same distinction Badiou calls to our attention in his discussion of Rimbaud: the difference between what we might call two different sorts of “one of a kind.” On the one hand, there is what is “one of a kind” in the usual sense of that expression, where it means something that has no like, something truly unique, something altogether irreplaceable by anything else. That is the sense in which, for example, Muhammad Ali can rightfully be said to be “one of a kind.” On the other hand, there is what we might call “one of a kind” in a minimizing, even pejorative sense. In that sense, “one of a kind” would mean: just one of any number of possible instances of some given “kind,” that is, some common or general class of things of which any one member of that class could serve just as well as any other as an example, since they are all equal, all interchangeable with one another, as instances of the kind or class at issue.

Take Fluffy as an example.

Fluffy was my daughter’s pet guinea pig when she (my daughter—Fluffy was a “he”) was a child. One day Fluffy went belly-up in his cage. My daughter was, of course, troubled by Fluffy’s passing. She cried. That, in turn, troubled me, her father. Utterly lacking in the pertinent skillful means myself, at least at that particular time in that particular situation, I attempted to console my daughter by telling her it was all right, we could just go to the pet store and get her another guinea pig to have as a pet. Her voice and expression full of the disgust and contempt such a wholly clueless attempt to “fix” everything warranted, she replied indignantly that she did not want any “other” guinea pig—she wanted Fluffy.

For me, Fluffy was just in a certain sense one of a kind, the sense of being no more than one instance of the general kind, guinea pig. For my daughter, Fluffy was—well, Fluffy, who was one of a kind.

Now, there is absolutely nothing wrong with guinea pigs, or with liking them as such. And if all there is to it is that you happen to like guinea pigs just because they’re guinea pigs, then it’s no big deal if your guinea pig of the moment dies on you, so long as you have access to others. All you need do is go out and get another guinea pig; any other guinea pig will do, since being guinea pigs is what you like about them all equally.

However, if you make the mistake of coming to love whichever guinea pig fate may have sent your way at some given time, and your beloved guinea pig dies on you, then things are not so easy. Indeed, should such a thing happen, should your beloved guinea pig pull a Fluffy on you and go belly up—as, of course, it eventually will, unless your beloved guinea pig just happens to outlive you, just as the last coat a given tailor cuts might well outlive the tailor who cuts it, to borrow another example from Plato—then you will find yourself, in fact, at a point of decision.

At that point, you may decide to remain true to your love, with all the pain that entails under the circumstances—since it does indeed hurt to lose someone you love, as my daughter could testify it hurt to lose Fluffy. Or you may decide to betray your love and seek your own comfort by rushing out to find some replacement for the irreplaceable—as I shamefully encouraged my daughter to do, in my rush to escape my own discomfort over her pain at the loss of that same Fluffy. You can choose, that is, to numb your love, and thereby deny it. Or you can choose to feel it in all its pain, and thereby affirm it.

At such points, the decision is up to you. That’s what defines them.

*     *     *     *     *     *

My next post, continuing this series, will start at the same point, with points of decision.


* Translated by Don Bartlett (New York: Farrar, Straus and Giroux, 2013), page 506 of the e-book edition.

** The right term! Presuming to display charity, some might try to substitute “child-like” for “child-ish.” But—as is true for so much charity—the caritas in such charity, however well intentioned it may be, is utterly lacking in skillful means. Endeavoring to help, it actually harms. From the point of view of what passes for a world in what passes for today, philosophy can only manifest as an enterprise that it is utterly childish, not just childlike, to pursue; and the dignity of philosophy demands that its true rank in relationship to our “present times,” as Badiou puts it, be acknowledged and granted. To pursue philosophy today, a day of such times, is utterly childish: Philosophy is really useless, something no serious adult can afford to waste any time on.

Published in: on July 18, 2014 at 8:51 pm  Comments (1)  

Pulling Out of the Traffic: The Future of Culture (2)

This is the second in a series of posts under the same general title.

*     *     *     *     *     *

In the New York Times for Thursday, June 26 of this year—which was also the day I put up the post to which this one is the sequel—there was a news-piece by Mark Mazzetti under the headline “Use of Drones for Killings Risks a War Without End, Panel Concludes in Report.” The report at issue was one set to be released later that same morning by the Stimson Center, “a nonpartisan Washington think tank.” According to Mr. Mazzetti’s opening line the gist of the report was that “[t]he Obama administration’s embrace of targeted killings using armed drones risks putting the United States on a ‘slippery slope’ into perpetual war and sets a dangerous precedent for lethal operations that other countries might adopt in the future.” Later in the article, Mr. Mazzetti writes that the bipartisan panel producing the report “reserves the bulk of its criticism for how two successive American presidents have conducted a ‘long-term killing program based on secret rationales,’ and on how too little thought has been given to what consequences might be spawned by this new way of waging war.”

For example, the panel asked, suppose that Russia were to unleash armed drones in the Ukraine to kill those they claimed to have identified as “anti-Russian terrorists” on the basis of intelligence they refused to disclose for what they asserted to be issues of national security. “In such circumstances,” the panel asks in the citation with which Mr. Mazzetti ends his piece, “how could the United States credibly condemn Russian targeted killings?”

Neither Mr. Mazzetti nor—by his account at least—the panel responsible for the Stimson Center report bothers to ask why, “in such circumstances,” the United States would want to “condemn” Russia for such “targeted killings” on such “secret rationales.” It is just taken for granted that the United States would indeed want to condemn any such action on the Russians’ part.

That is because, after all, the Russians are among the enemies the United States must defend itself against today to maintain what, under the first President Bush, used to be called “the New World Order”—the order that descended by American grace over the whole globe after the “Cold War,” which itself characterized the post-war period following the end of World War II. Today is still just another day in the current “post post-war” period that set in after the end of the Cold War—as Alain Badiou nicely put it in 2002-2003, during the second year of his three-year monthly seminar on Images of the Present Times, just recently published in France as Le Séminaire: Images du temps présent: 2001-2004 (Librairie Arthème Fayard, 2014).

It is really far too late on such a post post-war day as today to begin worrying, as the Stimson panel penning the report at issue appears to have begun worrying, about entering upon the “slippery slope” that panel espies, the one that slides so easily into “perpetual war.” For one thing, what’s called the Cold War was itself, after all, still war, as the name says. It was still war, just “in another form,” to twist a bit a famous line from Clausewitz. Cold as that war may have been, it was still but a slice of the same slope down which the whole world had been sliding in the heat of World War II, which was itself just a continuation of the slide into which the world had first swiftly slipped at the beginning of World War I.

Let us even go so far as to assume that the great, long, European “peace” that ran from the end of the Franco-Prussian War in 1871 all the way down to 1914, one hundred years ago this summer, when it was suddenly interrupted by a shot from a Serbian “terrorist” in Sarajevo, was peace of a genuine sort, and not just the calm of the proverbial gathering storm. Even under that assumption, peace has never really been restored to the world again since the guns began firing in August of that same year, 1914, if the truth is to be told. Instead, the most that has happened is that, since then, from time to time and in one place or another there has occurred a temporary, local absence of “hot” war, in the sense of a clash of armed forces or the like. The guns have just stopped firing for a while sometimes in some places—in some times and places for a longer while than in others.

So, for example, even today, a quarter of a century after the end of the post-war period and the beginning of the post post-war one, the western and west-central European nations have remained places where “peace,” in the minimal, minimizing sense of the mere absence of “active hostilities,” has prevailed. Of course, elsewhere, even elsewhere in Europe—for example, in that part of Europe that during part of the time-span at issue was Yugoslavia—plenty of active hostilities have broken out. In many such cases (including the case of what once had been Yugoslavia) those episodes have often and popularly been called “wars,” of course.

Then, too, there have been, as there still are, such varied, apparently interminable enterprises as what Lyndon Johnson labeled America’s “war on poverty,” or what Richard Nixon labeled the American “war on drugs.” In cases of that sort, it would seem to be clear that we must take talk of “war” to be no more than metaphorical, in contrast to cases such as that of, say, America’s still ongoing “war in Afghanistan,” where the word would still seem to carry its supposedly literal meaning.

Another of the wars of the latter, “literal” sort is the one that began with the American invasion of Iraq on March 20, 2003. As it turned out, that particular war broke out right in the middle of the second year of Badiou’s seminar on “images of the present times.”  In fact, the hostilities in Iraq started right in the middle of some sessions of his seminar in which Badiou happened to be addressing the whole issue of “war” today, during our “post post-war” period—as though tailor-made for his purposes.

In his session of February 26, 2003, less than a month before the start of hostilities in Iraq, Badiou had begun discussing what war has become today, in these present times. He resumed his discussion at the session of March 26—following a special session on March 12, 2003, that consisted of a public conversation between Badiou and the French theatre director, Lacanian psychoanalyst, and philosopher François Regnault. President George W. Bush had meanwhile unleashed the American invasion of Iraq.

In his session of February 26, 2003, Badiou had maintained that in the times before these present times—that is, in the post-war period, the period of the Cold War—the very distinction between war and peace had become completely blurred. Up until the end of World War II, he writes, the term war was used to mark an “exceptional” experience. War was an “exception” in three interconnected dimensions at once: “a spatial exception, a temporal exception and also a new form of community, a singular sharing, which is the sharing of the present,” that present defined as that of “the war” itself.

We might capture what Badiou is pointing to by saying that, up till the end of World War II and the start of the Cold War, war was truly a punctuating experience. That is, it was indeed an experience in relation to which it did make clear and immediate sense to all those who had in any way shared in that experience to talk of “before” and “after.” It also made sense to distinguish between “the front” and “back home.” Some things happened “at the front,” and some “back home”; some things happened “before the war,” and some only “after the war.” And war itself, whether at the front or back home, and despite the vast difference between the two, was a shared experience that brought those who shared it together in a new way.

During the Cold War, however, all that changed, and the very boundaries of war—where it was, when it was, and who shared in it—became blurred. Badiou himself uses the example of the “war on terror” (as George W. Bush, who declared that war, was wont to call it, soon accustoming us all to doing so) that is still ongoing, with no end in sight. The war on terror is no one, single war at all, Badiou points out. Instead, the term is used as a cover-all for a variety of military “interventions” of one sort or another on the part of America and—when it can muster some support from others—its allies of the occasion. Indeed, the term can be and often is easily stretched to cover not only the invasions of Afghanistan and Iraq under the second President Bush but also the Gulf War unleashed against the same Iraq under the first President Bush, even before the war on terror was officially declared—and so on, up to and including the ever-growing use of armed drones to kill America’s enemies wherever they may be lurking (even if they are Americans themselves, though so far—at least so far as we, “the people,” know—only if those targeted Americans could be caught outside the homeland).

So in our post post-war times there is an erasure of the boundary between war and peace, a sort of becoming temporally, spatially, and communally all-encompassing—we might well say a “going global”—of the general condition of war. Coupled with that globalization of the state of war there also occurs, as it were, the multiplication of wars, in the plural: a sort of dissemination of war into ever new locations involving ever new aspects of communal life. Wars just keep on popping up in more and more places, both geographically and socially: the war in Afghanistan, the war in Iraq (just recently brought back again—assuming it went away for a while—by popular demand, thanks to ISIS), the war in Syria, the wars in Sudan, Nigeria, Myanmar, Kosovo, the Ukraine, or wherever, as well as the wars against poverty, drugs, cancer, “undocumented”/“illegal” immigration, illiteracy, intolerance, or whatever.

At the same time, this globalization of war and proliferation of wars is also inseparable from what we might call war’s confinement, or even its quarantine. By that I mean the drive to insure that wars, wherever and against whatever or whomever they may be waged, not be allowed to disrupt, damage, or affect, in any significant negative way, the ongoing pursuit of business as usual among those who do the war-waging. (The most egregious example is probably President George W. Bush in effect declaring it unpatriotic for American consumers not to keep on consuming liberally—including taking their vacations and driving all over lickety-split—in order to keep the American economy humming along properly while American military might was shocking and awing the world in Baghdad and the rest of Iraq.)

Thus—as Badiou puts it in his session of March 26, 2003—in league with the expansion of war into global presence and the random proliferation of wars goes a movement whereby simultaneously, among the wagers of war, “[e]verything is subordinated to a sort of essential introversion.” That is a reference above all, of course, to America, the only superpower that remained once one could no longer go back to the USSR. On the one hand, as both Badiou and the Stimson report with which I began this post indicate, the American government does not hesitate to claim the right to “intervene” anywhere in the world that it perceives its “national interests” to be at stake, no matter where that may be. It claims for itself the right to make such interventions whenever, against whomever, and by whatever means it judges to be best, and irrespective of other nations’ claims to sovereignty—even, if need be, against the wishes of the entire “international community” as a whole (assuming there really is any such thing). Yet at the same time such interventionism is coupled essentially with a growing American tendency toward “isolationism.”

This counter-intuitive but very real American conjunction of interventionism and isolationism is closely connected, as Badiou also points out, to the ongoing American attempt to come as close as possible to the ultimate goal of “zero mortality” on the American side, whenever, wherever, against whomever, and however it does conduct military interventions under the umbrella of the claimed defense of its national interests, as it perceives them, on whatever evidence it judges adequate. That is best represented, no doubt, by the aforementioned increasing American reliance on using unmanned, armed drones to strike at its enemies, a reliance that began under the Bush administration and has grown exponentially under the Obama administration.

Furthermore, the drive toward zero war-wager mortality is coupled, in turn, with another phenomenon Badiou addresses—namely, what we might call the steady escalation of sensitivity to offense. The more American power approaches what Badiou nicely calls “incommensurability,” and the nearer it comes to achieving the zero American mortality that goes with it, the less it is able to tolerate even the slightest slight, as it were. Rather, in such an affair—as he says in the session of March 26, shortly after the American attack on Iraq under the second President Bush—“where what is at stake is the representation of an unlimited power, the slightest obstacle creates a problem.” Any American deaths at all, or any remaining resistance, even “the most feeble, the worst armed, . . . the most disorganized,” is “in position to inflict damage to the imperious power that it faces.” As there is to be zero American mortality, so is there to be zero resistance (of whatever origin, including on the part of Americans themselves).

*     *     *     *     *     *

All these interlocked features belong to what we have come to call “war” today. Or rather, the situation today is really one in which the very notion of war has come to be entirely flattened out, as I would put it. War itself has ceased to be any distinctive event—anything “momentous,” properly speaking: marking out a clear division between a “before” and an “after,” such that we might even speak of the “pre-war” world and the “post-war” one. That is what Badiou means by saying that we live today in the “post post-war” period. It is a strange “period” indeed, since there is, in truth, no “point” at all to it—either in the sense of any clearly defined limit, or in the sense of any clearly defined goal, I might add—which is what I had in mind in my earlier remark that war today has ceased to be any truly “punctuating” experience.

In one of my posts quite a while ago, I wrote that, in line with contemporary Italian philosopher Giorgio Agamben’s thought about sovereignty and subjectivity, an insightful hyperbole might be to say that it had been necessary to defeat the Nazis in World War II in order that the camp-system the Nazis perfected not be confined to Nazi-occupied territory, but could go global—so the whole world could become a camp, in effect, and everyone everywhere made a camp inmate subject to being blown away by the winds of sovereignty gusting wherever they list.

Well, in the same way it might be suggested that the whole of the long period of preparation for, and then eventual outbreak and fighting of, the (“two”) World War(s), as well as the whole post-war period of Cold War that followed, was just the long ramp-up necessary for the true going global of war in our post post-war period.  That is, the whole of the unbelievably bloody 20th century, ushered in by the whole of the 19th, back at least to the French Revolution of the end of the 18th, can be seen as nothing but the dawning of the new, ever-recurring day of our present post post-war, unpunctuated period.

Indeed, war today has become so enveloping spatially, temporally, and communally, all three, that it is no longer even perceivable as such, except and unless it breaks out in some ripple of resistance somewhere, by some inexplicable means. Whenever and wherever and from whomever, if anywhere any-when by anyone, the power into whose hands the waging of war has been delivered suffers such an offense against it, no matter how slight the slight, then the only conceivably appropriate response is, as the old, post-war saying had it, to “nuke ‘em.”

Furthermore, since offenses are in the feelings of the offended, none of us, “the people,” has any assurance at any time that we will not, even altogether without our knowingly having had any such intent, be found to have done something, God knows what, to offend. If we do, then we may also come to be among those getting nuked (or at least deserving to be)—probably by an armed drone (maybe one pretending to be delivering us our latest Amazon.com order).

*     *     *     *     *     *

By now, even the most patient among my readers may be wondering what this whole post, devoted as it is to discussion of the meaning of “war” today, has to do with “the future of culture,” which is supposed to be the unifying topic in the entire current series of posts of which this one is supposed to be the second. That will only become evident as I proceed with the series—though perhaps it will not become fully evident until the whole series draws together at its close. At any rate, I will be continuing the series in my next post.

Pulling Out of the Traffic: The Future of Culture (1)

Is there any future for culture? That is the question with which I ended my previous post, more than three months ago now. It is where I want to resume now, after that long break.

To get right to the point, the answer to that question is no, there is no future for culture. The only future that what presents itself today as our global reality permits us is the endless continuation of the circulation of commodities, a pseudo-future that precludes all cultural production. We can only expect more of the same, that is, yet ever more new commodities, newly circulating. Culture today is impossible.

Accordingly, the creation of a future for culture—of a future itself—can today be only an impossible possibility. Since cultural production is no longer possible today, any cultural product that comes upon us must come to us on some other day than this one, this endless day of ceaseless commodity production and circulation.

Culture is no commodity, and no commodity is a cultural product.

*     *     *     *     *     *

Martin Heidegger’s so-called “Schwarze Hefte,” the “Black Notebooks” he kept from the period of his Nazi involvement early in the 1930s all the way down to the beginning of the 1970s, near the end of his life, have begun to appear in German in the Gesamtausgabe (GA), or Complete Edition, of his works. So far, three volumes containing fifteen notebooks labeled Überlegungen (Reflections) have been issued (GA 94-96).

In a note early in “Überlegungen IV,” written in the 1930s after Heidegger’s controversial year as Rector under the Nazis at the University of Freiburg from 1933-1934 had ended, Heidegger writes (GA 94, page 210): “The ‘world’ is out of joint; there is no world any more, more truly said: there never was yet world. We are standing only at its preparation.” He then begins the immediately following note with the italicized remark that “[w]ith the gods, we have also lost the world.”

Where there is no world, there is no culture; and where no culture, no world. Nor is there anything of gods or the divine in such an indifferent, placeless place.

(What all that may have to do with Nazism, and with Heidegger’s relationship to it, I will leave for subsequent reflections of my own sometime somewhere.)

*     *     *     *     *     *

Norwegian author Karl Ove Knausgaard has already come to count as something of a sensation of 21st century literature—if there is any such thing as literature any longer, which is a question with which Knausgaard is himself concerned—with the publication of his multi-volume autobiographical novel entitled My Struggle. Particularly in the original Norwegian, Min Kamp, that title was immediately controversial because of its obvious allusion to Hitler’s notorious Mein Kampf. Despite the expectations such a title might inspire, there certainly seems to be nothing of Nazism, anti-Semitism, Fascism, or the like in Knausgaard’s text. At least no critics I know of have suggested that there is, nor can I personally detect anything of the sort in what I’ve read of it so far—which admittedly is not that much, relatively speaking, since I am still only midway through the second of the six volumes of the work.

At one point well along in the first volume of My Struggle Knausgaard remarks on the common contemporary feeling that (as he puts it on page 221) “the future does not exist.” He explains that he means the feeling that what lies ahead for us today is “only more of the same,” never anything really new or surprising any more, vibrant with possibility. What that feeling indicates, he says, also “means that all utopias are meaningless.” However, he continues: “Literature has always been related to utopia, so when the utopia loses meaning, so does literature.” He suggests that the literary enterprise, or at least his own literary enterprise, has always been an endeavor “to combat fiction with fiction.” That is, by conjuring up a “no-place”—which is the literal meaning of the word utopia—literature aims to put the lie to what presents itself as being present, but is really no more than a sort of convenient lie or confabulation—something the proverbial powers that be, whoever or whatever those powers themselves may really be at any given time, would have us all take to be “reality” itself, rather than see the very different real reality behind such mere appearances. Telling tales that tell the tale on the tales we are told (often even telling them to ourselves): that is the work of literature, as I take Knausgaard to be articulating it.

What that which passes for “reality” today kept telling Knausgaard himself he “ought to do,” he goes on to say in the passage at issue, “was to affirm what existed, affirm the state of things as they are, in other words, revel in the world outside instead of searching for a way out, for in that way I would undoubtedly have a better life.” Surely that is indeed what he “ought” to do, instead of pursuing all this literary nonsense that leads straight to nowhere; “but,” he says, “I couldn’t do it, I couldn’t, something had congealed inside me, and although it was essentialist, that is, outmoded and, furthermore, romantic, I could not get past it, for the simple reason that it had not only been thought but also experienced, in the sudden states of clear-sightedness that everyone must know, where for a few seconds you catch sight of another world from the one you were in only a moment earlier, where the world seems to step forward and show itself for a brief glimpse before reverting and leaving everything as before.”

*     *     *     *     *     *

Perhaps the most shocking thing about our present age is that today we can no longer be shocked by anything. Such moments as Knausgaard describes, when we are suddenly shocked out of the somnambulism of our daily conduct of business as usual, where there is only ever more of the same old same old—moments when we are brought alive in the world again—are perhaps no longer possible for us. At any rate, if even a glimmer of such an impossible possibility dares show itself to us, then the dark that wants to be taken for the real rushes in to close back over it again immediately.

That is just what it does for Sally Elliott, a character in another novel I have recently been reading.

Only a few weeks ago American novelist Robert Coover’s The Brunist Day of Wrath was published, and I immediately downloaded a Kindle copy and read it. It is the long-awaited—and very long—sequel to his first novel, The Origin of the Brunists, which first appeared long ago, way back in 1966, when it won that year’s Faulkner Prize for best first novel.

Briefly, Coover’s fictional Brunists are a typically American, whacko fundamentalist Christian extremist sect. In the first of the two novels about the Brunists, Coover traces the sect’s emergence. The Brunists then return to the scene of their cultish birth five years later, in Coover’s eventual follow-up. That story of their return to the scene culminates in a typically American, eruptive and violent bloodbath, a sort of anti-apocalyptic apocalypse that, once it has happened, ultimately just lets everything keep right on going pretty much the same as before, really.

Sally Elliott appears as one of the many characters that people both novels. She is anything but a Brunist herself, being not only atheistic but also even anti-theistic—or, more properly put, anti-religious, since she does not confine her critique to theism as such. For the most part Sally stands aside from the main action of the story of the Brunists, to serve Coover as a sensitive observer registering the events that unfold around her. Still just a child during the action in the story of the Brunists’ origin, she becomes the very anchor of moral sanity in the narrative of their eventual day of wrath.

Relatively early on in the later novel, Sally pays a spy’s visit to the Brunist camp. There she encounters some young Brunists with hopes of converting her. When Sally grows faint, they become concerned and lead her into the communal tent to rest, where they give her a cream soda to refresh herself. Coover pauses with her there to write (starting at location 3,844 of a total of 15,901 in the Kindle version I read): “Sometimes, it seems to her [despite or at least apart from all her anti-religious sanity] that she grasps or is embraced by a great cosmic mystery, and for a moment she enjoys a certain rapt serenity. But usually the mystery eludes her or it evolves into some familiar banality, like the cream soda she burped then, and it never comes close to happening when she’s bummed out with the blahs.”

The very point of what presents itself as present today is to bum us all out with the blahs, so that nothing of the future may ever come—and so that whatever does come will fizzle out again right away, like bubbles from some cheap carbonated soda. What presents itself as present today lacks all presence. It cannot hold. It has no grounding.

Nor can it, accordingly, offer any ground for anything else to grow in it. Nothing can be cultivated in such soil. No culture can take root there.

*     *     *     *     *     *

Nietzsche remarks somewhere that his ambition is to say more in a single aphorism than others say in an entire book. Then he immediately corrects himself and says, no, his goal is to say more in a single aphorism than others do not say in a whole book.

Indeed, Nietzsche aims to say the whole world in a single aphorism. At least one aphorism in which he succeeds in doing just that occurs in a passage about the very nature of “world” itself, a passage from The Twilight of the Idols entitled “The History of an Error: How the ‘True’ World Finally Became a Fable.” At the end of his telling of that history, Nietzsche asks just what’s left of the world, once the belief in some “true” world has finally shown itself up as no longer worthy of any belief. When the “true” world finally vanishes, just what world remains? The “apparent” one, perhaps? But no, Nietzsche answers. Along with the “true” world, he says, the “apparent” one also vanishes.

Half a century and two World Wars (at least by one count) later, Maurice Merleau-Ponty in his Phenomenology of Perception glosses Nietzsche’s remark by saying that, with the collapse of the very grounds for any distinction between a supposedly merely apparent or “false” world, on the one hand, and a supposedly “true” one, on the other, the world itself at last comes forth clearly for itself, as the very place where sense and non-sense, meaning and the lack of it, themselves emerge. This world itself is neither “true” nor “false.” The world is just that, the world—of which, as Merleau-Ponty nicely says, “the true and the false are but provinces.”

Unfortunately, however, there is another possibility, one which neither Nietzsche nor Merleau-Ponty would have welcomed at all, but of which both were all too much aware, as I read them. That is the possibility that, to borrow a way of putting it from Heidegger, who came between those two, the world itself might simply cease to world at all.

Framed in those terms, to continue considering whether culture has any future today confronts us with the no doubt strange-sounding question of whether, in the world of today, there is any longer any world—or, with it, any today—at all. Can anything really present itself at all in what presents itself today as what is present?

That is precisely the question with which contemporary French philosopher Alain Badiou occupies himself in yet another book I’ve been reading just recently, since my last post to this blog more than three months ago now. I will start with Badiou in my next post (which I do not think will take me another three months to put up).

*    *     *     *     *     *

The Traffic in Trauma: Commodifying Cultural Products (3)

This is the third and final post of a series under the same title.

*     *     *     *     *     *

In the gravelled parking space before the station several cars were drawn up. Their shining bodies glittered in the hot sunlight like great beetles of machinery, and in the look of these great beetles, powerful and luxurious as most of them were, there was a stamped-out quality, a kind of metallic and inhuman repetition that filled his spirit, he could not say why, with a vague sense of weariness and desolation. The feeling returned to him–the feeling that had come to him so often in recent years with a troubling and haunting insistence–that “something” had come into life, “something new” which he could not define, but something that was disturbing and sinister, and which was somehow represented by the powerful, weary, and inhuman precision of these great, glittering, stamped-out beetles of machinery.  And consonant to this feeling was another concerning people themselves:  it seemed to him that they, too, had changed, that “something new” had come into their faces, and although he could not define it, he felt with a powerful and unmistakable intuition that it was there, that “something” had come into life that had changed the lives and faces of the people, too.  And the reason this discovery was so disturbing—almost terrifying, in fact—was first of all because it was at once evident and yet indefinable; and then because he knew it had happened all around him while he lived and breathed and worked among these very people to whom it had happened, and that he had not observed it at the “instant” when it came.  For, with an intensely literal, an almost fanatically concrete quality of imagination, it seemed to him that there must have been an “instant”—a moment of crisis, a literal fragment of recorded time in which the transition of this change came.
And it was just for this reason that he now felt a nameless and disturbing sense of desolation—almost of terror; it seemed to him that this change in people’s lives and faces had occurred right under his nose, while he was looking on, and that he had not seen it when it came, and that now it was here, the accumulation of his knowledge had burst suddenly in this moment of perception—he saw plainly that people had worn this look for several years, and that he did not know the manner of its coming.

They were, in short, the faces of people who had been hurled ten thousand times through the roaring darkness of a subway tunnel, who had breathed foul air, and been assailed by smashing roar and grinding vibrance, until their ears were deafened, their tongues rasped and their voices made metallic, their skins and nerve-ends thickened, calloused, mercifully deprived of aching life, moulded to a stunned consonance with the crashing uproar of the world in which they lived. These were the dead, the dull, lack-lustre eyes of men who had been hurled too far, too often, in the smashing projectiles of great trains, who, in their shining beetles of machinery, had hurtled down the harsh and brutal ribbons of their concrete roads at such a savage speed that now the earth was lost for ever, and they never saw the earth again:  whose weary, desperate ever-seeking eyes had sought so often, seeking man, amid the blind horror and proliferation, the everlasting shock and flock and flooding of the million-footed crowd, that all the life and luster and fire of youth had gone from them; and seeking so for ever in the man-swarm for man’s face, now saw the blind blank wall of faces, and so would never see man’s living, loving, radiant, and merciful face again.

Thomas Wolfe, Of Time and the River (1935)


Not long after Thomas Wolfe published the novel from which I’ve taken that lengthy citation, Walter Benjamin, in his essay on “The Work of Art in the Age of Mechanical Reproduction” (section XIV) wrote:  “One of the foremost tasks of art has always been the creation of a demand which could be fully satisfied only later.”  To that remark, Benjamin appends a note, which itself begins with a quotation from the definitive “Surrealist,” André Breton:  “The work of art is valuable only in so far as it is vibrated by the reflexes of the future.”  In turn, both Breton’s and, even more clearly, Benjamin’s remarks resonate strongly with the one from Jean Laplanche, which I already cited in my first post of this three-post series on the commodification of cultural products, his remark that “in the cultural domain” it is “a constant” that “the offer . . . creates the demand.”

What demand is the work of art today creating?  What future vibrates in it?   How and when could the demand it draws forth ever be fully satisfied?

Benjamin contrasts painting—and poetry—with film.  By his account, which is also the account of many others both before and after him, a painting evokes contemplation.  As Salvador Dali’s The Last Supper did years ago to me, as I recounted in my preceding post, the painting arrests us before itself, bringing us to a stop, interrupting our daily rush of business, calling upon us to look, behold, and ponder.  “The painting,” writes Benjamin, “invites the spectator to contemplation; before it the spectator can abandon himself to his speculations.”  Similarly, a poem makes its reader or other “recipient,” to use Laplanche’s term, pause and reflect over language itself and its power to say.  The poetic work also brings us to a stop, interrupting the flow of the daily chatter wherein we subordinate language and its saying to its mere utility as a means for conveying information.

The history of art, however, is for one thing the history of the emergence of new art forms called up the better to satisfy demands eventually created by developments in older forms.  Slightly earlier than his line about art’s tasks including the creation of new demands vibrant with the future, Benjamin writes:  “The history of every art form shows critical epochs in which a certain art form aspires to effects which could be fully obtained only with a changed technical standard, that is to say, in a new art form.”  He sees one such “critical epoch” emerging for both painting and poetry in the late nineteenth and early twentieth centuries, with the emergence of Dadaism, in which, as Benjamin puts it, “poems are [reduced to] ‘word salad’ containing obscenities and every imaginable waste product of language,” just as in their paintings the Dadaists “mounted buttons and tickets” and the like.  What was in play in such developments, by Benjamin’s analysis, was “a relentless destruction of the aura of their [own] creations”—and, indeed, of the “aura” of paintings and poems and works of art in general.

What new art form was preparing its own way in advance in Dadaism and the entire epoch of art it represents?  Benjamin’s answer is that “Dadaism attempted to create by pictorial—and literary—means the effects which today the public seeks in the film.”  By the “studied degradation of their materials,” the reduction of their works to the status of trash and waste, what they aimed to achieve was the “uselessness” of their works “for contemplative immersion.”  Dadaist works systematically eschewed the contemplation to which art before them had called its recipients, and instead sought distraction.  To attain that end, “One requirement was foremost:  to outrage the public.”  The Dadaist work thereby “became an instrument of ballistics.  It hit the spectator like a bullet, it happened to him, thus acquiring a tactile quality” whereby it “promoted a demand for the film, the distracting element of which is also primarily tactile, being based on changes of place and focus which periodically assail the spectator.”  Comparing the traditional painting to the film, Benjamin writes:

The painting invites the spectator to contemplation; before it the spectator can abandon himself to his associations.  Before the movie frame he cannot do so.  No sooner has his eye grasped a scene than it is already changed.  It cannot be arrested.  [Georges] Duhamel, who detests the film and knows nothing of its significance, though something of its structure, notes this circumstance as follows [in Scènes de la vie future, published in Paris in 1930 after a trip to the United States, and translated one year later as America the Menace:  Scenes from the Life of the Future]:  ‘I can no longer think what I want to think.  My thoughts have been replaced by moving images.’  The spectator’s process of association in view of these images is indeed interrupted by the constant, sudden change.

It is at just this point that Benjamin comes to speak—as Heidegger had done a bit earlier and differently, as I discussed in my preceding post—of “shock” in relation to the work of art.  He writes that this catching, controlling, and manipulation of the spectator’s attention by the devices of film—cuts, camera angles, etc.—“constitutes the shock effect of the film.”  Whereas Dadaism insisted on outraging the public, and in that very insistence remained within the bounds of the moral—“outrage” as such ultimately being a matter of moral offense—“[b]y means of its technical structure, the film has taken the physical shock effect out of the wrappers in which Dadaism had, as it were, kept it inside the moral shock effect.”

Cinema’s unwrapping of shock from its moral wrapping—unmooring shock from its moral anchoring, loosening and abstracting it from its moral setting—is, in fact, more than a merely moral matter, in any ordinary understanding of morality.  It is, rather, a literal de-contextualizing of shock that sets shock altogether free of any context that might give it any “sense” or “meaning” that might enclose it, buffer it, cushion shock’s shock.  To put the same point differently, by riveting attention to itself, forcing and manipulating that attention, stripping it of all autonomy and making it conform to wants not its own, distracting it persistently and insistently from itself, the cinematic manipulation of images uproots shock from the temporality that has always heretofore defined it, the very temporality that gives shock itself time to “register.”  That is, it unhinges shock from the very “belatedness,” Freud’s “Nachträglichkeit,” that permits shock to be felt and registered in its after-shocks.  In the same way, for the repetition with which shock continues to hold on to its recipients, the techniques set to work in film substitute the incessant multiplication of shocks.  No sooner is one shock delivered than another, new shock is on the way, one shock following right upon the preceding one, coming one after another without let-up, like fists raining down upon someone undergoing a lengthy, brutal beating the end of which comes only with death or coma.  Instead of the Nachträglichkeit of traumatic time one has the endless Nacheinander of the ticks of clock-time, the “after-one-another” of the seconds as they click by without cease.
The compulsive repetition whereby shock arrests those it strikes, demanding that they finally stop and accept the invitation to contemplation—and to “abandon [themselves] to [their] associations,” as Benjamin nicely puts it, just as one might when encouraged to share one’s “free associations” during psychoanalysis—gives way to the cascade of distractions whereby modern life assaults us all.

After all, that’s where all the profit lies waiting to be made!

In La Cité perverse, his discussion (which I’ve cited before in this three-post series) of the perversity that founds and grounds the contemporary global “city”—from civitas, the Latin translation of the Greek polis:  the public place, the commons, the dis-enclosed enclosure of community we build together every day in our communications with one another—Dany-Robert Dufour makes use of the by now old idea of the “monkey trap” that uses the monkey’s own appetites to catch it fast.  The trap is very simple.  It consists of a small but solidly tethered contraption inside of which an appropriately monkey-directed enticement has been placed, so that the monkey has to reach inside the trap to retrieve the treat.  The aperture to the trap, however, is just large enough for the monkey to insert its reaching, fingers-extended paw, to grasp the monkey-goody inside, but too small to permit the monkey to withdraw the same paw once it has closed into a fist around its trophy.  All that the monkey would have to do to escape the trap would be to open its paw and retract it.  To do that, however, it would have to let go of the treat it first reached inside the trap to grasp.  The monkey’s appetite—its “greed,” if you will—just will not let it let go, that it might itself be let go from the trap.  So the monkey just stays there, trapped by its own wants, until the trapper at his leisure comes to collect his catch.

I have repeatedly cited Laplanche’s remark that in “cultural” matters—which is to say in matters of Dufour’s “city,” the place of “civilization”—it is always the offer that first creates the demand.  However, when demand gets perverted into the need for commodities, then citizens are transformed into consumers, and we all become caught in a trap from which our own efforts to extricate ourselves can only entrap us more tightly.  When the exchange of commodities replaces the exchange of cultural communications (another redundant expression, when heard as I’d like it to be heard here), we are all made into monkeys caught in a monkey-trap by our own demand.

At that point, demand has become the death of desire, in just the sense of that latter term in which Jacques Lacan, for instance, admonishes us all not to let go of our desire.  Once our desire itself, with no will or intention on our part, gets associated altogether un-freely with a manipulatively produced demand for commodities that have been expressly designed to entice us to confuse them with our desire itself and to grasp for them, we find ourselves caught in a self-made bondage.  It is a situation in which what is really no true choice at all is forced upon us as the only “choice” available.

On the one hand, we can “choose” to put our hands in the trap.  We can reach out to grasp the goods and goodies held out to us as the key to our happiness, only to find ourselves frustrated, depressed, and despairing when the commodities we have been made to long for finally come our way, and we find to our chagrin that they do not satisfy our desire after all.  Far from it!  “Is that all there is?” we ask—as we pick ourselves up and dust ourselves off and start all over again, reaching for the next commodity presented to us as the royal road to happiness, only to be led again to the same frustration, depression, and despair, and so on time after time after time, one time after another till all our time runs out.

On the other hand, we always have the “option” simply—contrary to Lacan’s wise injunction against doing any such thing—to give up our desire itself.  Since desire has now become inextricably confused with the market-produced demand for those very market-produced commodities the securing of which leaves us empty and looking for more each time it occurs, to let go of our grip on those commodities in order to free ourselves from the monkey-trap, opting out of such pursuit of commodities unavoidably presents itself to us as just such a relinquishing of our definitive desires themselves.  But to let go of our very desire itself is, as Lacan saw, to consign ourselves once again to frustration, depression, and despair.

Only if something happens to bring us up short, to make us pause and reflect, inviting us, in contemplation, to abandon ourselves to our own free associations, does the opportunity present itself for the trap in which we are caught suddenly to spring open, letting us loose at last.  To repeat what I’ve said before:  that’s what art’s for.  However, how are we to find hope in art any longer, when art itself long ago ceased to invite and invoke contemplation, and itself became a device of sheer distraction?  Diverted into distraction, art becomes subservient to commerce, and no less a caught-monkey than each of us, art’s recipients.  To that extent, at least, art no longer offers any interruption of the flow of goods around the globe, but has instead simply become part of that traffic.  Art, voiding itself of all “usefulness for contemplative immersion,” which is to say voiding itself of all of what Marx called its “use value,” retains only whatever “exchange value” the market may give it.  That exchange value is often considerable, even astronomical, to be counted in the hundreds of millions of dollars for a single painting, but in the process of becoming such a valuable commodity for exchange art altogether loses its dignity, and any worth it may once have had for itself.  Nor does the art-work itself, as offer, any longer create the demand that answers to it.  Instead, it is the demand for art, the “buzz”-built clamor in the art-market for a given commodity, that produces the supply—that is, makes whatever the buzz builds the clamor for count as “art” in the first place.  “Art” thus becomes no more than that which gets taken as art, in effect, in the art market.
Art becomes whatever so “counts” as art, whether paintings by Van Gogh or literal pieces of shit—such as those produced by the machine created for that purpose in 2000 by Belgian artist Wim Delvoye as the first of eight versions of a work he entitled Cloaca, and selling for roughly $1,000 per shitty piece, to borrow an example from Dufour.

None of this even shocks us at all any longer, of course.  We have long ago grown quite numbed to it, just as nurses and doctors in the emergency room of a big-city hospital become inured to all the pain and suffering that perpetually surrounds them.  Writing of the situation in the industrialized nations of 1936, Benjamin observes in one of the notes he appends to the passage from which I have been drawing citations that “film corresponds to profound changes in the apperceptive apparatus—changes that are experienced on an individual scale by the man in the street in big-city traffic, on a historical scale by every present-day citizen.”  As he discusses both in his article on “the work of art in the age of mechanical reproduction” and elsewhere, everyday modern urban life is a life in which the individual is subjected every moment of the day to one shock after another, and made thoroughly numb in the process.  Such numbing is always the result of being made the recipient of persistent, uninterrupted pummeling, one shock after another with no time any longer even left for any after-shocks wherein the shocks might be registered by those who undergo them.  We monkeys are thereby kept always with our hands in the monkey-trap, being the good little monkeys our trappers would have us be.

A dismal picture indeed!  For one thing, it is a picture of art in its death-throes.  The commodification of cultural products which is at work in the globalization of the market economy puts out the light of the truth that used to put itself into work in art-works.

Much has happened, of course, in the arts themselves during the century and more since the Dadaism that Benjamin discusses first came along.  In painting we have traversed multiple newer developments, fads, and fashions, from Cubism to Surrealism, Abstract Expressionism, Op Art, Pop Art, Conceptual Art, Hyper-Realism, and beyond.  Poetry and literature have gone through modernism to post-modernism to post-post-modernism and whatever lies beyond that.  Then there is the proliferation of brand new art forms, from the Happenings of the 1960s to Body Art to the many permutations of Performance Art today.  And all that’s not even to mention the progression in film itself, let alone the movement from mechanical to digital reproduction that Benjamin never dreamt of, with all its possibilities for the production, reproduction, and dissemination of new works of art, and what amounts to the radical democratization of art and artistic creation that is taking place as the digital explosion continues to expand, like the universe since the Big Bang.

None of that, however, is any proof against art’s death.  Death takes time, and the greater the life that comes to its end, the longer the dying.  Concerning art, it is as Heidegger writes in his “Afterword” to “The Origin of the Work of Art”:  “The dying proceeds so slowly, that it takes a few centuries.”  And even after that, it may take far longer yet for the news of the death to get around—just as Nietzsche said it would no doubt take a couple of millennia before the news of God’s death was heard everywhere.

As for what, if anything, may be still to come, after the death of art, that is really just a form of the question of whether there is any longer any “culture” at all possible after that.  Is there any future for culture?  Or has the future itself closed down on us, consigning us all forever to an endless, trapped-monkey existence as good consumers, spending freely for the good of the economy, as President Bush urged us all to do during our wars in Afghanistan and Iraq, and especially after the first forward surge of the Great Recession of 2008 that those wars did so much to help unleash?  In our benumbed and distracted consumer-condition, can there ever again be a new demand that gets through to us, if not from art then from elsewhere?

Benjamin himself offers some hope.  So even does Heidegger.  Neither could be accused of optimism, certainly.  What is more, the hope that each offers is one that can only rise, Phoenix-like, from hopelessness.  Both suggest, nonetheless, that there may be a way of pulling out of the traffic.

Can we?  Can we somehow do that—pull out of the traffic in trauma, and the commodification of cultural products that is inseparable from it?

That is a topic I will leave for another occasion—another series of posts perhaps.

The Traffic in Trauma: Commodifying Cultural Products (2)

This is the second of a series of posts under the same title.

*     *     *     *     *     *

In 1936—only three years after the Nazis were given power, two years before Kristallnacht, and four years before he himself committed a life-affirming act of suicide to rob the Nazis of the chance to exterminate him—Walter Benjamin, of German-Jewish provenance, wrote his well-known essay on “The Work of Art in the Age of Mechanical Reproduction” (in Illuminations, translated by Harry Zohn, New York:  Schocken Books, 1968).  Only a few months earlier, in November of 1935, Martin Heidegger, another German, Catholic born and eventually Catholic buried, who joined the Nazi party in 1933 and continued to pay his party dues as long as there remained a party to pay them to, first delivered his probably even better-known lecture on “The Origin of the Work of Art.”  The comparison of those two cultural products, Heidegger’s lecture and Benjamin’s essay—both of which are themselves about those cultural products par excellence called works of art—is revealing on a number of counts, only one of which will concern me here in this post.  That is how each of the two addresses the “shock” that, according to both, pertains essentially to the work of art.

Since Heidegger’s lecture came first, I will start with that.  Heidegger addresses how the work of art as such always comes as a “shock” to those upon whom it works art’s work.  The German term Heidegger uses is Stoß, which can also be variously translated as “push, poke, punch, kick (as, say, moonshine liquor has, when one swallows it), nudge, butt (as a goat might, with its horns), stab (as with a knife), thrust, stroke, or (with less punch or kick) impact.”  The Stoß of the work of art is how it strikes a blow to those who receive it, bringing them up short, knocking the wind out of them, as the sudden revelation of beauty in the face of another can strike us so forcefully that it renders us, as we say, “breathless.”  The work of art always comes as such a shock, if it truly comes at all.  That such a thing as the work can even be, says Heidegger, that is the “shock” of the work.

An example from my own experience that I have used before (namely, in my first published book, The Stream of Thought*) happened to me when I was a teen-ager, on a foundation-sponsored trip one winter to Washington, DC, that included a visit to the National Gallery of Art, where surrealist Salvador Dali’s painting of The Last Supper was on loan for display.  When I entered the room where Dali’s painting hung, I was indeed “shocked,” in Heidegger’s sense.   All I could do was stand transfixed before the painting, gaping at it.  I remember clearly that what transfixed me were the colors on Dali’s canvas, which presented themselves to me as impossible—that being the very word that came to my adolescent mind at the time.  No doubt not altogether inappropriately, given the term and notion of “sur-realism,” there was nothing at all “real” or “natural” about those colors, as they gave themselves to my perception then.   Yet there they were, totally redefining the whole domain of “color” for me, shattering my old, familiar, taken-for-granted understanding of just what that word ‘color’ even meant.

In my very experience, those colors, precisely as “impossible” and altogether outside the domains of anything that might occur in “nature,” also riveted my awareness to the sheer createdness of the painting.  Heidegger points to this by saying that, in the work of art, the very having been created of the work is, as it were, co-created into the work itself.  In The Stream of Thought I spoke of that as the “self-presenting” character of the art-work, and contrasted it with what might be called the “self-effacing” character of, for example, a good snapshot in a family photo album.  A snapshot as such (as contrasted, say, with one of Ansel Adams’ photographs, which is itself a work of art) is just a tool, an instrument, there to be useful and used, no different in that regard than a hammer or a computer; and the utility of a tool is inversely proportional to the demands it places upon users to attend to it, rather than staying focused on what they are trying to do with it.  A tool or instrument is not supposed to call attention to itself, but instead to facilitate the accomplishment of the task for which it is employed.  In contrast, the work of art does call attention to itself, and in so doing it delivers us a blow, bowls us over—shocks us out of our complacent everyday going about our usual business.

As with any shock, the shock delivered by the work of art exceeds the capacity of those to whom it is delivered to “process” it.  That is to say, it is always traumatic.  And as Freud has taught us, its impact—the very delivery of the shock with which it shocks us—is marked by a certain “belatedness,” as I prefer to translate Freud’s German term Nachträglichkeit, which in the Standard Edition of Freud’s works in English is rendered by “deferred action.”  The shock of the work of art is really felt and fully at work, as it were, only in its after-shocks, which keep on coming after the first, definitive shock has struck, allowing the shock itself to “register.”  That’s precisely the job of what Freud identifies as the “repetition compulsion,” the compulsion to repeat the original, shocking experience, until the numbness, the “going into shock” as we say, that is the other side of the two-sided effect of traumatic shock (a redundant expression:  “traumatic shock”), finally breaks down, creating the possibility that it may at last be broken through.

If such a break-through finally does occur, then what it breaks through to—the “other side” to which musical artist Jim Morrison, for example, long ago urged listeners to “break on through”—is nothing other than letting oneself at last be shocked.  It is ceasing, so far as one can (which means moment by moment), to numb oneself against the shock, and instead opening oneself to it and (again, moment by moment) holding oneself open within it.  In short, to adopt and adapt a formula I’m fond of from Heidegger, it is a break-through into maintaining oneself in the truth opened up in the shock itself, “preserving” that very truth by continuing to stand firm within it, with-standing it, as we might well say.

The origin of the work of art, says Heidegger, is truth’s setting itself into work in the work.  Truth sets itself into work in the work in an at least double sense.  First, it sets itself up there, fixes itself fast there, takes form there.  That’s what art needs artists for:  to create works of art as places where truth takes form, fixes itself fast, sets up.   Second, it goes to work there, in the work, as mechanics go to work in their garages:  Truth is at work there, in the work, “doing” its work there.  That brings us to what the rest of us are for, that “rest” of us who are not ourselves artists—or insofar as we are not the artists who created the given works of art at issue—but to whom those works are “addressed,” their “recipients” (to use Jean Laplanche’s way of speaking).  If what “artists” are for is creating works of art, then what we “recipients” of those works are for is (to go back to a Heideggerian locution) “preserving” those works.

Such “preservation” of works of art has nothing to do with keeping them locked safely away in closets, attics, or vaults–or even in art museums.  Or, rather, it does have something to do with that, since locking the works away somewhere, even if that place is a museum, is only possible if those works are no longer “preserved,” but are instead taken out of their original circulation, the circulation of truth itself around the circuit of artists, art-works, and recipients, and forced into a very different sort of circulation (today, ever more around the circuit of the provision and consumption of pleasures, ultimately to somebody’s profit).  It’s only the remains of dinosaurs that one will find in museums of natural history, not the real, living thunder-lizards themselves.  Likewise, it’s only the remains of dead works of art that can be visited in art-museums.  Insofar as the very works whose carcasses we can see put on display in museums are still somehow at work in our world, it is not in museums that we will find those works at their work, but in our daily lives together.  They will be at work there only if and insofar as we continue to hold ourselves open to and within the blows that they deliver to us, letting them shock us out of the usual rush of busy-ness with which we strive to avoid all such blows.

What Heidegger calls “preserving” the work of art is a matter of persevering in exposure to the shock it delivers.  Only in such perseverance does the truth that has set itself into work in the work still keep on working.

So much for Heidegger!  Now on to Benjamin!

*     *     *     *     *     *

When Walter Benjamin talks about the “shock” delivered by the work of art, in his own way he says the same thing as Heidegger, but then he also adds something of major significance.  That important addition derives from Benjamin’s concentration on the fate of the work of art “in the age of mechanical reproduction,” as he puts it in the well-chosen title to his piece.

In the process of articulating his thoughts on art today, Benjamin develops a vocabulary of his own that diverges from the one Heidegger is simultaneously developing to articulate his own thoughts on the same topic.  Both vocabularies, however, have a common provenance, as readers should be able to see for themselves in what follows.

In the second section of a total of fifteen (plus a brief introduction and epilogue) of his essay, “The Work of Art in the Age of Mechanical Reproduction,” Benjamin writes:  “The situations into which the product of mechanical reproduction can be brought may not touch the actual work of art, yet the quality of its presence is always depreciated.”  Once again, I can use my own teen-aged experience with Dali’s The Last Supper to exemplify the point I take him to be making.**  Before taking the student trip that brought me before the actual, original painting itself, I had often seen reproductions of Dali’s paintings, including that one, The Last Supper.  In fact, in all the reproductions of his work that I had seen by that time, his painting of Jesus’ last meal with his disciples had always interested me the very least of them all.  Looking back now, I would say that it was precisely my dis-interest in that particular painting, as it was delivered to me in all the reproductions I had seen of it, that set me up—like a bowling pin, as it were—to be knocked flat when I suddenly found myself in the actual presence of the painting itself.  It is precisely that “quality” of the “presence” of the work that, as Benjamin writes, is “depreciated” in any “mechanical reproduction” of it.

My own experience, not only of Dali’s painting but also of other cases, tells me that Benjamin is speaking very cautiously when he uses the word ‘depreciated.’  I would say ‘lost’ or ‘buried’ is better.  By all my experience, the “presence” of the art-work as such is just what, in and of the work, simply cannot be reproduced, at least in any “mechanical” reproduction:  any striking of copies off of some original—or some “first” copy of the original, as in an initial photograph of a painting—used as a template.***  Benjamin himself a few lines later refers to this “quality” of the work’s “presence” as “the eliminated element” in the work, and proposes calling it the work’s “aura.”  At any rate, whether it is only depreciated or totally eliminated, it is this “aura” of the work, Benjamin says, that “withers in the age of mechanical reproduction.”

Significantly, Benjamin does not confine the notion of “aura” solely to works of art, or even to what he calls “historical objects”—what I’m following Jean Laplanche in calling “cultural products”—in general.  Rather, he extends it to cover “natural objects” as well.  “If,” he writes in section III of his essay, “while resting on a summer afternoon, you follow with your eyes a mountain range on the horizon or a branch which casts its shadow over you, you experience the aura of those mountains, of that branch.”    He defines aura, in effect, as that “quality” of the very “presence” of each and every thing in its uniqueness, its very irreproducibility.

At this point, we can combine Heidegger with Benjamin to observe that it is the very way the work of art has of bringing us up short, literally arresting us before its presence, that also—through and in the work, that “historical” or “cultural” product—breaks through our ordinary numbness in the face of the presence, the aura, of what is “natural” as well.  So, to stay with the same example from my own younger life, when my attention was first riveted by the “impossible” colors of Dali’s painting of The Last Supper during my adolescence, what also riveted my attention was what might well be called the aura of color itself.  Even at the time, as I’ve already noted, the thought came to me that until that moment I had never really seen color at all.  I never saw color in its full presence or aura until then.

To “preserve” the work of art, to revert for a moment to Heidegger’s way of speaking, is to keep oneself open to aura as such—to the presence of what is present.  That is what it means to stand within the truth of the work, to hold open the truth, namely, that very truth first opened up in and as the work itself.  It means to persist, to persevere, in holding oneself open to and in the aura of things, the aura itself first opened up to one in the work.  It is to bring all one’s saying and doing, thinking and speaking, into that opening of the aura of things, and to maintain it there.

That, in turn, is what’s called “living.”

To lapse back into what today has become an ordinary yet—as befits the day—distorting way of speaking, the “job” of art, what art’s “for,” by both Heidegger’s and Benjamin’s accounts, is to open the way to living, which, like all things human, always comes belatedly, as a sort of after-birth to birth itself.  In that sense, we are all still-born, all born dead, and only subsequently shocked into life.  If we are lucky!

Art brings us luck.  That’s what art’s for.

*     *     *     *     *     *

In this post, the second in my series under the title “The Traffic in Trauma:  Commodifying Cultural Products,” I have focused on the nature of cultural products, as paradigmatically exemplified in works of art.  In my next post, the final one of the series, I will focus on what happens to art, and to cultural production as such, when it gets shanghaied by the market—which is to say commodified. 

* A couple copies of which I still have available.  So let me hasten to commodify my own cultural product by repeating an offer I made already in my second-before-last post:  you may purchase an author-autographed copy of The Stream of Thought from me in person for the bargain-basement price of $14.95 (for a book that originally cost a whopping $27.50!), plus shipping and handling expenses of $5.17, for a total of $20.12.  To make purchasing arrangements, contact me via email right away at frank.seeburger@me.com.

** It wasn’t until a few years after my experience with Dali’s painting in the National Gallery of Art in Washington, D. C., that I read Heidegger’s essay on the origin of the art-work, and then a number of years after that before I read Benjamin’s on the art-work in our age of mass reproduction, but both readings brought my experience with Dali’s painting back to my mind.  My experience helped me to understand the two essays, and they both in turn cast light back upon that experience for me.  

*** Exploring the difference between the multiplication of mechanical re-productions of such works of art as paintings, on the one hand, and multiple productions of such works of art as plays, symphonies, or comedy sketches, on the other, would certainly be well worthwhile.  Even more worthwhile, perhaps, would be to go on from there to an exploration of what further shift occurs with the move from mechanical reproduction to digital proliferation, where once again, as with multiple performances of the same work of music, there is, taken strictly, no “copying” of any “original,” but in which, rather, multiple iterations of one and the same work occur.  I will, perhaps, take up such matters in eventual later posts.

The Traffic in Trauma: Commodifying Cultural Products (1)

(This is the first of what will be a series of posts under the same title.)

Culture is traumatic.  It is not that some cultures are traumatic, and others not.  Culture as such is traumatic.  Thus, in the way I want to use it here, the phrase ‘culture of trauma’ is redundant, like ‘caninity of dogs.’  There are not some cultures that are cultures of trauma, and other cultures that are not—even “ideally.”  Rather, there is either culture, which is always as such traumatic, or else there is no culture at all, but rather at most the cultivation of trauma for the sake of someone’s profit, what I call the traffic in trauma.

Twentieth century French psychoanalyst Jean Laplanche, whom I also cited in my recent series of three posts, “Traumatic Selfhood:  Becoming Who We Are,” gives us insight into the traumatic nature of culture itself.  According to Laplanche, not just some but every cultural product gives itself to its recipients as “intrusive, stimulating, and sexual”—which is to say traumatic.

That remark comes at the very end of a passage I already quoted in the same earlier series of posts on selfhood,* a passage Laplanche begins by saying that “in the cultural domain” it is “a constant” that “[i]t is the offer which creates the demand.”  Before continuing to cite the rest of the passage, it is worthwhile for my purposes in this post to call attention to something in Laplanche’s statement of that “cultural constant.”  Notice that he does not say that it is the supply of what he calls “cultural products” that creates the demand for them.  Rather, he says that it is the offer.

Nor does he say, in this particular passage or anywhere else that I am aware of, that to create demand the offer that is the cultural product needs to be advertised.

It is a jaded cliché of our economic system and the global market in which we all live today to talk about “supply” and “demand,” as well as about how important it is for a sound economy to maintain a proper balance between the two, and how advertising can—and, effectively used, does—generate new demands that can then be met with proper supplies, either already extant (such as unsold overstock) or yet to be produced (like the next generation of iPhones).   Even the most cliché-ridden among us knows that supply alone does not create demand.  It may still be that the world will beat a path to my door if I build a better mousetrap (I confess that I have not really kept up on such matters), but even if that is still so, I first have to let the world know that I have built such a mousetrap before the path to my door will get any new traffic.  Merely building the mousetrap does not trap the particular sort of “mice” that I, the builder, am really most interested in trapping—namely, customers to buy my new invention.  I need different sorts of traps for that.

Not so with “cultural products,” says Laplanche.  Here, it is indeed the offer itself, we have already heard him say, that creates the demand in the first place.  The cultural product is not offered to fill an already pre-existing demand, as mousetraps are manufactured to fill the already well-established demand for ways to get rid of mice (or, perhaps, for the sheer sport of it, if “catch-and-release” has by now become de rigueur among mouse-trappers—as I said, I haven’t really kept up on such matters).  Nor does the cultural product need to do any advertising to call attention to itself, in order to attract or manufacture demand for it.  The offering itself, which is to say the cultural production as such, creates the very demand for what is offered.

That is clear enough from the rest of the passage from Laplanche, most of which I already cited in the earlier posts I’ve mentioned.  Having called attention to the just-discussed “constant” of “cultural production,” Laplanche continues:  “The dominance of human needs, undeniable but truly minimal in the domain of biological life, is completely covered over by culture.  The biological individual, the living human, is saturated from head to foot by the invasion of the cultural, which is by definition intrusive, stimulating, and sexual.”**

That applies, for one, to the addressee of the cultural offer, the one to whom the offer is made, eliciting—by the “cultural constant” mentioned above—its own demand.  Laplanche calls that addressee the “recipient” of the cultural product, in pointed opposition to calling that addressee the “consumer” of that product, a point to which I will shortly return.  “It is of the essence of the cultural product,” Laplanche writes, “that it reaches [the recipient] with no pedigree, and that it is received by him without having been addressed to him” (the exclusive language is in the original).  It reaches its recipient as sent by an unknown other.  Even if the creator of the cultural offer is known by name and personally to a given recipient, the latter still receives it as though it were written by someone unknown, since it arrives as something that speaks for itself, and not in the context of any personal connections.

The cultural offer thus comes to the recipient as an “enigma.”  By its nature, that enigma is also there for the one who makes the offer, the creator of the cultural product, though it is there in a different way, reflecting the different positions of the sender and receiver of the offer.  Cultural products—such as Goethe’s Faust, to use an example pertinent to Laplanche’s own essay, itself a discussion of Freud’s “Der Dichter und das Phantasieren” (“The Poet and Fantasying”), which focuses on Goethe—are addressed to recipients who remain “essentially enigmatic” for those who create cultural offerings in the first place.  They are addressed, to borrow Nietzsche’s subtitle to Thus Spoke Zarathustra, to “everyone and no-one.”  Laplanche compares the cultural product to the proverbial “message in a bottle,” a message sent to no one in particular, but to whomever it happens to reach (if anyone), whenever it may arrive (if ever), and even if it doesn’t arrive till well after the sender of the message has died.

Just as the cultural message comes to the recipient as from an unknown sender, even if that sender happens to be known personally and by name to a given recipient, so (as already discussed a bit more fully in my earlier posts referring to Laplanche’s essay on “transference”) the essential anonymity of the recipient of the cultural message is preserved even if the recipient “sometimes takes on individual traits,” and is known by name to the sender, as Vincent Van Gogh famously sent letters to his brother Theo.

As befits the anonymity of the recipient, the cultural message in the bottle is also sent without any particular motive, any expectations of doing anything special to the recipient, once received.  As discussed in my earlier series of posts on becoming ourselves, Laplanche points out that the cultural product is “beyond all pragmatics, beyond any adequation of means to a determinate effect.”    As it is only incidental if the intended recipient of the cultural message has a face and name known to the sender, as Theo was known to Vincent, so is it only incidental if the sender of the message has ulterior intentions toward the recipient, such as impressing, seducing, or enslaving*** that recipient.

Thus, as Laplanche explicitly observes himself, although “[t]he recipient’s relation to the enigma is . . . different from the author’s, [constituting] a partial inversion of it,” nevertheless “the relation is essential”—the relation, namely, to “the enigma” that the cultural product as such is.

As I read it, Laplanche’s notion of the “cultural” is defined by being any sort of “communication” insofar as that communication is not subject to any “pragmatics,” but is instead—to use a way of speaking I already began to use in my preceding series of posts on “Traumatic Selfhood”—a sharing that builds, and a building that shares, world.  By ‘world,’ in turn, I mean, following Heidegger’s usage, the “wherein” of our being ourselves with one another.  We might say that cultural communication communicates, first, last, and above and beyond whatever else it may incidentally “do” of any “pragmatic” sort, such as seduce, reduce, induce, or exploit:  It actively brings together into and as community, in the same way that in Christian liturgy the sharing of the Eucharistic meal makes all those who so share be of one body and blood.

The cultural profits no one.  That’s what makes it culture.

When cultural products are turned to making some profit for someone, they are turned against themselves.  They are perverted, in the strictest sense of that word, whereby what is perverted is turned inside out, made to be its very own opposite.  The commodification of culture, which is to say the turning of cultural production into a means for the production of profit, rather than for the production of our common world, is perverse—and perverts in turn whatever touches it, as in ancient Judaism, where touching the unclean made whoever touched it unclean themselves.

Today, people everywhere live no longer in any true “world” at all, insofar as the human being today has become homo economicus, “economic man,” denizen of the vast “global market.”  Indeed, for economic man,**** the world itself has been perverted into no more than the “globe,” over all of which the vast wasteland of “the market” continues to grow.

What in the bygone days of the 1960s Marshall McLuhan touted as the “global village” long ago morphed into what French philosopher Dany-Robert Dufour aptly dubbed “the perverse city” in a book of that name (La Cité perverse), published in French a few years ago (Éditions Denoël, 2009).  That city is everywhere today, even when its citizens are allowed to stay in their country homes rather than being bodily removed from them and moved into sprawling urban blights of high-rises, as is currently happening in China.  It doesn’t matter in the slightest whether the force is exerted by a state that jokingly continues to call itself “communist” (or to use the now seldom heard phrase ‘communism with market elements’ to describe the thing it has been forcing into being since getting rid of Mao), or solely by “market factors” themselves (that is, going where the “jobs” are being “created” by the rich, to their further enrichment in their own wildly successful but never-to-be-spoken-of program of “income redistribution”), regulated or not.  The result is the same:  the demolition of the world in the erecting of the global city of perversity.

Whenever what should only be done for love must be done instead for money, there is perversion.  Whoever is forced to make a living by doing for money what should only be done for love is made to be a whore.  In what globally passes for the “world” today, we are all being pimped by “the market.”

*     *     *     *     *     *

Just last weekend I came across a wonderfully amusing/disgusting (tastes differ, I suppose) instance of just the perversion I’m trying to point to in this post.  I came across it last Sunday in the paper, which is regularly a good source for finding amusing/disgusting things with which to while away one’s time between johns.  It was an op-ed piece by the conservative hack George F. Will for The Washington Post, with the cutesy headline, “Lessons from the Abbey.”  The Abbey at issue was the fictional “Downton Abbey” of the popular ITV series of that name, and Mr. Will was parading his credentials as a good, egalitarian, freely enterprising American, as opposed to the class-dominated, tradition-bound British folks depicted in the TV series.  The lesson that Mr. Will would have us take from “Downton Abbey” is one that he formulates so seductively himself that I would not dream of trying to improve upon his own words, which themselves also include others’ words, as will be seen.

Mr. Will begins the end of his piece by remarking how strange he thinks it is that “a normally wise and lucid conservative such as Peter Augustine Lawler, professor of government at Berry College,” would “celebrate the ‘astute nostalgia’ of ‘Downton Abbey’” and hold it up as “a welfare state conservatives can revere,” namely, one in which, as Mr. Will quotes Professor Lawler writing, we are shown “[w]hat aristocracy offers at its best.”  That, in turn, is “a proud but measured acceptance of the unchangeable relationship between privileges and responsibilities in the service of those whom we know and love.”

To that Mr. Will—who is surely no less “a normally wise and lucid conservative” than Professor Lawler—replies as follows:

Good grief.  Americans do not call the freedom to figure out one’s place in the world a burden; they call it the pursuit of happiness.  And to be “given” a “secure” place amid “unchangeable” relationships is not dignified, it is servitude.

“Downton Abbey” viewers should remember the following rhapsodic hymn to capitalism’s unceasing social churning:  “Constant revolutionizing of production, uninterrupted disturbance of all social conditions.  …All fixed, fast-frozen relations, with their train of ancient and venerable prejudices and opinions, are swept away, all new-formed ones becoming antiquated before they can ossify.  All that is solid melts into air.”

This (from “The Communist Manifesto”) explains why capitalism liberates.  And why American conservatives should understand that some people smitten by “Downton Abbey” hope to live upstairs during a future reign of gentry progressivism.*****

One thing that amused me in reading that last Sunday was recalling that Dufour cites the very same text, as part of a larger citation, from the very same source.  Dufour, however, does not take the passage out of context, the context in which the authors of “The Communist Manifesto” use the very remarks Mr. Will cites as part of a broad call to the “workers of the world” to “unite,” since by so uniting, according to those authors, those workers “have nothing to lose but their chains.”  In losing those chains, moreover, those same workers can and will liberate not just themselves but everyone, rich and poor, male and female, Jew and Greek, whatever and whatever else, all alike, without exception.

Whoever reads “The Communist Manifesto” as the cultural product it is, can only marvel at how deftly Mr. Will can take even that ringing call to arms to construct a true world for human beings to call home, and turn it into no more than another cheap commodity, available to all alike at no more cost than their souls, to be put to whatever perverse use each pleases.

* The passage at issue is from Jean Laplanche, “Transference:  Its Provocation by the Analyst,” in Essays on Otherness (London:  Routledge, 1999).

** In a way, that last remark is redundant, since the intrusive, the stimulating, and the sexual, though distinct in concept, are identical in the occurrence, just as according to Aristotle form and matter are distinct in the mind but not in the thing itself.  One can’t have one without the other/s.  What intrudes stimulates those upon whom it intrudes, whatever stimulates intrudes upon whomever it stimulates, and that interplay of intrusion and stimulation defines the sexual as Laplanche articulates it.

*** As in pre-Civil War America free African-American citizens such as Solomon Northup, author of Twelve Years a Slave, were forced into slavery after responding to cultural messages that of themselves had nothing to do with such self-serving, immoral, economic purposes (please excuse my own redundancy, insofar as ‘self-serving,’ ‘immoral,’ and ‘economic’ say pretty much the same thing).

**** Here, the exclusionary usage of the masculine term ‘man’ as what Mary Daly labeled a “pseudo-universal,” supposedly being as “gender-free” as dominant segments of American society today would like to have Americans believe America is “color-blind,” is wholly appropriate.

***** By which, of course, Mr. Will would have all his “wise and lucid” readers understand him to mean the sort of thing that will come about if we follow the siren song of such as President Obama, who call for such aristocratic things as increasing the minimum wage or regulating big banks and other “job creators.”


The Traffic in Trauma: Learning Whom to Hate

Jean-Paul Sartre once wrote in recommendation of a book* that it had the great merit of teaching the young whom to hate.  That is a lesson still well worth learning, not only for the young but for all ages.

Just the other day I read a passage in a newly published book by a well-known author that, under the guise of teaching that same lesson, actually teaches anything but.

Out in Colorado where I live, we were just recently treated to the news of the retirement of Grayson Robinson, the Sheriff of Arapahoe County, who not long before retiring presided over the various press conferences concerning the shootings just this last December at Arapahoe High School.  Sheriff Robinson refused throughout all such proceedings to use the name of the shooter, whose final shot took his own life, lest by using his name he grant the shooter a celebrity that, even posthumously, Sheriff Robinson wanted no part in granting.  (Although he restrained himself from using the young man’s name, the Sheriff did not refrain from labeling the shooter “evil”—a point I will not pursue further, though it certainly deserves careful reflection, above all about who is served by such talk, and who is not.)  I will take at least one page from Sheriff Robinson’s own book.  I will not name the work in which I read the passage I want to discuss, the one I just read recently, the one that fails to teach the lesson that Sartre praised Nizan’s novel for teaching.  Nor will I name the author.  I see no good reason, either humanitarian or selfish, for doing so.

At any rate, the passage at issue comes at the end of a discussion—itself to the point and worthwhile, in my judgment—of how offensive, indeed how truly obscene, is the normalization of torture in the relatively recent and, for the most part, positively received film Zero Dark Thirty, which tells the back-story to the long trail of sleuthing that eventually culminated in the American killing of Osama bin Laden.  The author then goes on to mention the linguistic sleight-of-hand whereby the Bush administration, long before that actual killing, replaced the term ‘torture’ with the expression ‘enhanced interrogation techniques,’ to classify and talk about such then (at least) standard American practices as water-boarding those from whom the American government hoped to extract information thought to be of possible use in pursuit of what that government defined to be America’s own self-interest.

So far, so good:  To that point I have no objections.  However, I do object to what the author at issue goes on to do, which is to posit an analogy—in fact, not just an analogy, but also an identity.  He compares the verbal substitution of ‘enhanced interrogation techniques’ for ‘torture,’ on the one hand, with the substitution of ‘physically challenged’ for ‘disabled,’ on the other.  Then he asserts that both substitutions are, in fact, just two different instances of one and the same underlying malady, which he characterizes, following what has become an almost universally dominant current linguistic fashion, as being the malady of “Political Correctness,” to adopt the author’s own device of capitalizing the two words of that expression in his usage of it.

Nothing could be more politically correct today than such usage of the buzz-word “Political Correctness.”

Communication is not coercion.  Communication is co-mund-ication, as I wrote in my preceding post—from the Latin mundus, “world.”  That is, it builds, in sharing, a shared world.  In contrast, coercion calls a halt to sharing.  It imposes limits, barriers, and blockages to communication, stopping it, or at least trying to.  It breaks apart the world.  Words, phrases, or in general expressions have what is deserving of being called “meaning” or “sense” only in the stream of communication, to paraphrase a line from Wittgenstein.  Taken out of that stream and pressed into forced service as implements of coercion, they lose all meaning and cease to make any sense, properly speaking (and by “proper” here, I mean “appropriate to ongoing communication,” since what expressions as such are for is just that).

Long ago now, the term ‘politically correct’ was simply gutted of all meaning.  It was hollowed out completely.  All that was left was the mere verbal shell, which could then be filled with something other than sense or meaning—filled, namely, with coercive force, used to accomplish a no longer communicative but now anti-communicative, purely coercive purpose.  In short, ‘political correctness’ was replaced by  ‘Political Correctness,’ to adopt my passage’s author’s convention.

Before it underwent evisceration of sense, of saying power, and was stamped into ‘Political Correctness,’ a mere tool of coercive power, the term ‘politically correct’ would have meant that which was required to maintain political viability in the concrete circumstances under discussion. Accordingly, just what sort of talk or action might have been politically correct at any given time and setting would have been a function of the political conditions and circumstances of that time and setting.  The term would not have named any one, single style of speech and action, whether of the left, of the right, or of the middle.  In one case—for example, America during the McCarthy era—espousing left-wing political causes might be tantamount to committing political suicide, whereas the same speech and action in another case—perhaps in the Soviet Union during the same era—would have been required to exert any political effectiveness.  What would have been “politically correct” would have varied according to the specifics of the given situations to which the term was applied.

The moment came, however, when the term ‘politically correct’ ceased to have any meaning within the stream of conversation, and instead was shanghaied by the American right-wing for use as a quick and handy label by which to dismiss and ridicule one specific sort of communication.  The sort of communication at issue is any that tries to address instances in which our everyday ways of talking themselves embody extra-communicative—indeed, anti-communicative, which is to say world-destroying, rather than world-building through world-sharing—elements that function coercively, and do so at the greatest price to those who can least afford to pay for it.  That is, the term was co-opted by the American right wing and made to apply exclusively to what my dictionary, as its sole entry for the expression ‘political correctness,’ characterizes this way:  “the avoidance, often considered as taken to extremes, of forms of expression or action that are perceived to exclude, marginalize, or insult groups of people who are socially disadvantaged or discriminated against.”

Thus does even The New Oxford American Dictionary itself succumb to the reigning linguistic coercion, not even bothering to mention the meaning that the same expression would once have had, prior to its capture and torture precisely by those who “consider” the “avoidance” at issue “often” to be “taken to extremes”!  Just how “often,” a thoughtful reader might ask?  Well, for those who abducted the expression and pressed it into slavery to serve their own interests in the first place, the answer is:  Always!  Whenever such avoidance—any such avoidance—manifests itself at all!

As for me, I must admit that in my own judgment it is “often” (please read:  “always and in every instance”) the case that those who use the terms ‘political correctness’ or ‘politically correct’ in the way my dictionary defines them are abusing those terms.  They are, as some readers may already have caught me remarking, torturing those terms.

Of course, as “often” fits the interests of torturers, they would prefer not to call it that.  They would prefer to call it, perhaps, the employment of “enhanced meaning-clarification techniques.”  So it goes.

Such torture of language would perhaps not matter much—unless, perhaps, to someone who is “going to extremes” in order to be Politically Correct—if all it concerned was language itself (pace language lovers, wimps that they may be).  But such language abuse abuses more than language, unfortunately.  It abuses those who, through such linguistic sleights-of-hand, are effectively robbed of the very possibility of voicing objection to being abused, or finding such voice through those who speak on their behalf.  The abuse against them is thereby, as others have often pointed out before me, compounded—indeed, exponentially so, especially when coupled with the further abuse, as it often is, of being blamed for their own being abused.

As is common to abusers, those who abuse language like to blame their abuse on those they abuse and whom they are using their language-abuse to abuse even further.   The substitution of ‘enhanced interrogation techniques’ for ‘torture’ is anything but “exactly the same” as the substitution of ‘physically challenged’ for ‘disabled,’ despite the author of the passage with which I began this post saying so.  In truth, the two operations operate in exactly opposite ways.  The first substitution is one in the service of the torturers, whereas the second is—or is at least intends to be—in the service of the tortured.  The conflation of those two opposed operations of verbal substitution, the washing out of the crucial, defining difference between them, can itself only serve the interests of the torturers, and not of the tortured.

What the author of the passage at issue goes on to say right after first equating those two utterly divergent operations of verbal substitution is that they both also operate the same way yet another imagined substitution would operate.  The two substitutions already considered, according to that author, both operate as would the substitution—patently absurd and offensive, as the author intends readers to hear—of ‘enhanced seduction technique’ for ‘rape.’

But if one asks oneself just who would ever suggest such a substitution as that third one, of ‘enhanced seduction technique’ for ‘rape’—that is, if one asks just whose interests would possibly be served by it—the answer would, I think, be obvious:  Only rapists themselves and their accomplices would be served by such a substitution, hardly the raped.

There are three sets of terms involved in the passage at issue.  The first set is ‘enhanced interrogation technique’ and ‘torture.’  The second is ‘physically challenged’ and ‘disabled.’  The third is ‘enhanced seduction technique’ and ‘rape.’  The author of the passage at issue is apparently so intent on verbally abusing those who would seek to avoid “forms of expression or action that are perceived to exclude, marginalize, or insult groups of people who are socially disadvantaged or discriminated against,” as my dictionary puts it, that he ends up (whether deliberately or not I will leave up to readers to decide) using the obvious analogy between substituting ‘enhanced interrogation technique’ for ‘torture,’ on the one hand, and substituting ‘enhanced seduction technique’ for ‘rape,’ on the other—to hide the dis-analogy between either of those two, on the one hand, and substituting ‘physically challenged’ for ‘disabled,’ on the other.  As I have already argued, that substitution is not at all analogous to the other two.  The attempt to equate all three simply does not hold, since the substitution of ‘physically challenged’ for ‘disabled’ is, at least in its intention, done in the service of the abused, whereas the substitution of ‘enhanced interrogation technique’ for ‘torture,’ like that of ‘enhanced seduction technique’ for ‘rape,’ cannot, regardless of anyone’s intention, serve anyone but the abusers.

In fact, if one is looking for a genuine analogy to the substitution of ‘enhanced interrogation technique’ for ‘torture’ (or of ‘enhanced seduction technique’ for ‘rape’) then here is one:  As the substitution of ‘enhanced interrogation technique’ for ‘torture’ (or ‘enhanced seduction technique’ for ‘rape’) is in the service of the torturers (or the rapists), so is the use of the term ‘political correctness’ to stigmatize, ridicule, and silence anyone who dares to advocate avoidance of “forms of expression or action that are perceived to exclude, marginalize, or insult groups of people who are socially disadvantaged or discriminated against,” as my dictionary puts it, in the service of those who practice exclusion, marginalization, and insulting of the socially disadvantaged or discriminated against.   Both the substitution of ‘enhanced interrogation techniques’ for ‘torture,’ and the dominant contemporary usage of the term ‘political correctness,’ are designed to obfuscate, confuse, and hinder, if not altogether halt, serious, ethically and morally informed, genuine discussion.  They are designed to do the opposite of keeping the conversation going, to borrow a favorite phrase from Richard Rorty.

The replacement of the expression ‘torture’ by the expression ‘enhanced interrogation’ operates in exactly the same way as would the replacement of the expression ‘rape’ by the expression ‘enhanced seduction technique.’  Both in turn operate in exactly the same way as does the regnant usage of the expression ‘political correctness.’  All three cut off communication rather than fostering it.  They block off the stream of life in which alone expressions have meaning, as Wittgenstein said, and deal death instead.

All three are in the service of the traffic in trauma.

*His friend Paul Nizan’s novel Aden Arabie, if I remember correctly.

Reading Trauma, Trauma Reading

“Reading can be traumatic, but then trauma can also teach us how to read.” That thought itself came to me recently as I was reading.

Specifically, I was reading a line by the cantankerous but important and influential eighteenth-century German “counter-Enlightenment” figure Johann Georg Hamann (1730-1788).  My thought of the crossing between reading and trauma was triggered by one line of his that especially caught my attention.  In “Miscellaneous Notes on Word Order in the French Language,” at one point Hamann writes:  “Readers who see not only what one is writing about but also what one intends to be understood can easily and happily continue these notes . . .”*

Long ago, Aristotle is said to have said of himself that he was a friend of Plato, his great mentor, but that he was a better friend of truth.   Well, if Plato wanted the sorts of students who would read him the way Hamann is pointing to in the line above, then in being a better friend of truth Aristotle was being a true friend of Plato.  At any rate, I read Hamann himself as wanting such friends for readers—or such readers for friends: it’s the same thing.

When I read the line from Hamann recently, I was also working on my preceding post, “Traumatic Selfhood:  Becoming Who We Are (3),” in which I cited a passage from Heidegger’s 1924 treatise, The Concept of Time.   No doubt at least in part because of that coinciding, when I read Hamann’s line I also thought of what Heidegger says about friendship in his short preface to that manuscript, which he tells us was occasioned by his own reading of the then-recently published correspondence between Wilhelm Dilthey and Count Paul Yorck von Wartenburg, addressing Dilthey’s great subjects:  the nature of history as such, and the essential historicity of human being.     At one point Heidegger admonishes his own readers—at least as I read him—that the “proper appropriation” of Yorck’s contributions to Dilthey’s work can only take place by understanding Yorck’s letters “as those of a friend, whose sole concern is to help, through living communication, the one with whom he is philosophizing” to arrive where that co-philosophizer is trying to go in that very philosophizing, and thereby also to help himself to arrive at his own goal beside him.

A good friend is precisely someone who can in a certain important sense see where one is going more clearly than one can oneself.  It is someone who, in our communications with her or him, can, as it were, pick up on the pointers we ourselves give as to where we are ourselves most inwardly tending, and help us see it more clearly ourselves.  Such a friend literally gives us ourselves.

To do that, a friend—at least one not lacking in what Buddhists call “skillful means,” which is to say the know-how not to lose track of her own intent as a friend and end up hurting rather than helping the one she has befriended—communicates with her friend through an exchange of what, following psychoanalyst Jean Laplanche (whom I also cited in my preceding, 3-post series on “Traumatic Selfhood”), we might call “cultural products.”  That is, a friend skilled (by gift and/or training) in friendship will receive and respond to whatever communications, written or oral, her friend sends her way, the same way his brother Theo—according to Laplanche in “Transference:  Its Provocation by the Analyst,” one of his Essays on Otherness (London:  Routledge, 1999)—received and responded to Vincent Van Gogh’s letters.  Theo, writes Laplanche (page 224), was “as much an analyst without knowing it as Fliess was for Freud” (and the knowing reader, even one who never heard of Fliess, presumes from that remark that Fliess was quite a good such analyst for his friend Freud).  Laplanche explains that, as the addressee of Vincent’s letters, Theo—although bearing that definite name, well known to his brother Vincent, of course (just as any analyst bears a name known to those who come to that analyst for analysis)—served not as any particular named recipient, but as an essentially anonymous one, behind whom, as Laplanche puts it, “looms the nameless crowd, addressees of the message in the bottle.”

The lesson Laplanche is teaching in such passages—the lesson Theo Van Gogh himself seems somehow to have learned:  the lesson of how to be an analyst of the very best kind, which is to say “an analyst without knowing it”—is the same as that taught by any effective practice of what, in our pop-psych culture, is most often called “reflective listening.”  In such practice one keeps oneself still and silent as one listens to another (or to “oneself as another,” to borrow the title of one of Paul Ricoeur’s fine late books), who says whatever that other says.  But, precisely in order to hear what the other says as clearly as one possibly can hear it, one listens to the other with one’s two ears simultaneously cocked in two different but complementary directions, as it were.  One listens with one ear cocked to what the other is saying, while at the same time keeping the other ear cocked to oneself—that is, to one’s own emotional response to what that other is saying.  One does that precisely to perform a sort of phenomenological reduction on oneself and one’s own responses to the world, as I would put it.  That is, one does it just so that one can put one’s own responses, and all the assumptions that go with them, “out of play,” as Husserl liked to say, “suspending” them, “putting them within brackets.”  One thereby opens and holds open a space, and holds oneself open in that space, open to receive whatever communication the other has to offer, rather than choking it off immediately with one’s own voice.

True listening requires the listener to open such a space, and to inhabit it, waiting upon the other as that other may communicate.  Such a listener waits upon the other to communicate herself as she will, rather than waiting for any particular communication to come from her.  The listening is filled with expectancy, but without any expectation, opening to the other the possibility freely to be whoever she may turn out to be, surprising herself along with the listener.

Most of the time and for the most part, we don’t really listen to one another at all.   Instead, we just wait for the other to shut up, so we can lip off in turn.  In my classes before I retired, I used to ask my students to attend to the difference between two different sorts of “conversation.”  The first, which I called “cocktail-party conversation,” was the sort in which some topic emerges in the course of the conversation, and then the parties to the conversation take turns—whether politely or rudely depending upon how many cocktails each has consumed—expressing their opinions on that topic.  Often, all the expressing of opinion includes a strong component of attacking one another’s opinions (or ways of expressing them).  Such attacks can range from reasoned tweakings of minor points to withering sarcasm that ridicules not only the other’s opinion as egregiously ridiculous but also the other herself for holding such an opinion.  Less often, though still frequently enough, it can also include the conversation partners, in whole or in (itself sometimes divisive) part, giving support to one another’s opinions.  At any rate, what is involved in such conversation is a matter of informing one another of what one already thinks (or at least thinks one thinks) about a given topic, the point of such conversations always being to have one’s own say—that is, to get one’s turn in the turn-taking so that one can inform the others of what one already thinks, or at least thinks one thinks, or would like others to think that one thinks, or think others would like/loathe one to think, or the like.

Much talking but no actual thinking takes place in such cocktail-party conversations.  Thinking is not the point of them.  Almost always all the thinking (insofar as there is any at all, which is often debatable) has been done before the conversation has even begun, which is to say that all the opinions have already been formed.  Even if it turns out that some of those opinions only arise during the very course of the conversation, such opinion-formation is not the point of the conversing.  Rather, the point is to inform one another of those opinions.  The process is one of exchanging information.

In contrast, there is another sort of conversation, one I will leave nameless, for certain reasons that will soon become apparent to many Hamannianly attentive readers.  This second sort of conversation is the sort we might have when, together, we discuss some topic of which none of us already has a formed and cherished opinion, on the expression of which one is fixed.  In such a second sort of conversation, the parties conversing do not already know “what they think”—that is, what “opinion” they have—about the matter under discussion.  Rather, their conversing together about it is itself the process whereby, together, they “think it through.”  Furthermore, the end point of the conversation, that point at which it reaches its goal or purpose, the point at which the thinking through itself is through, is not the point at which the parties to the conversation have at last formed opinions of their own.  The end point of the thinking through of the matter under discussion is no “fixation of belief,” in Charles Sanders Peirce’s sense of that expression:  It is not the formation of any “opinion” at all.  It is more nearly the opposite, namely, the letting go of all opinions and beliefs, of everything one thinks one knows, in order to think together.  The goal is not to bring the thinking to rest, so that it can then cease, but to bring it to an ever more thoughtful ongoing—or “on-thinking,” an on-going thinking-on about whatever it is that has given itself to be thought about in and through the conversation.

Conversation of that second sort is utterly lacking in value.  It is good for nothing.  It accomplishes no purposes, achieves no goals, serves nobody’s interests, scores no points, gains no adherents, produces no profits, wins no friends, and influences no people.   It is without use to anyone—or at least it is an altogether inefficient means for doing anything that serves anyone’s self interest, and it can be made so to serve only in complete disregard of its intrinsic nature.  Of and in itself, it is no more than what Laplanche calls a “cultural product.”

It is conceivable, of course, that someone could try to seduce somebody else, to give one possible example, through engaging in such conversation, either with the same person one was trying to seduce, or with some third party in the presence of the object of one’s sexual interest.  One might not even have anyone in particular in mind, or care if it is even anyone one already knows.  One might be trying to attract someone—anyone (at least anyone who fits one’s own sexual taste), God knows who—to one’s bed, by engaging in such a conversation all alone by oneself, for example, by writing a book (or a blog post) and then publishing it.

That would be like writing a novel and seeing it into print in a hardbound edition in order to provide oneself with a good doorstop.  As I put it myself long ago in my first book, The Stream of Thought** (the second of the three volumes of which is, among other things, a sort of novel, by the way), one would thereby indeed have a novel doorstop!  But that has nothing to do, really, either with being a novel or with being a doorstop.  Similarly, one might write a novel with the intent of thereby getting rich and famous and going on the Oprah Winfrey Show—or whatever has become the equivalent to that by the time I post this (when, many years ago, I first started using that same line in my classes, I said “the Phil Donahue Show”).

At any rate, Laplanche himself suggests that one might indeed try to write a book of such a remarkable quality and wit that it would make one sexually desirable to some reader somewhere, who would then contact the author and provide the latter with an opportunity to score a sexual conquest.  “But,” Laplanche remarks, “what an extraordinary going-beyond it takes”—that is, what unnecessary, uncertain extremes that goes to, in order to get where one is trying, by the assumption, to get!  “Going beyond oneself,” he adds, “but above all going toward another who is no longer determinate, and who will only incidentally [if ever!] be the object of an individual sexual conquest.”  How inefficient!

Laplanche insists on what I already remarked above myself:  all that has nothing to do with the “cultural product” itself at issue in such cases.  It has nothing to do with the novel, essay, or other communication that one writes as such.  No matter how novel my imagined novel doorstop might be, it leaves unaffected how novel the novel itself might be as a novel:  I can just as well use a copy of a schlocky romance pot-boiler as of Anna Karenina to prop open my door, if that’s all I’m looking for, but that says nothing at all about the literary heft of either novel.

As Laplanche points out:  “Modern studies of language have clearly shown that communication [or, if I rightly see what Laplanche intends to be understood here, at least the sort of communication that occurs in what I call a cocktail-party conversation—though far from only there] is a pragmatics:  to communicate is to manipulate, to produce an effect on someone.”  But, he immediately goes on to argue, by addressing itself to a “no longer determinate,” anonymous other (even if that other, at no expense to such anonymity, is known and spoken to by name, as Vincent Van Gogh wrote his letters to his brother Theo), “cultural production,” and therefore the sort of communication which is such a thing, “is situated from the first beyond all pragmatics, beyond any adequation of means to a determinate effect.”  A bit later, he adds that it is “a constant proposition in the cultural domain” that “[i]t is the offer which creates the demand.”  He expands and explains:  “The dominance of human needs, undeniable but truly minimal in the domain of biological life, is completely covered over by culture.  The biological individual, the living human, is saturated from head to foot by the invasion of the cultural,” which breaks into and breaks apart all pragmatics.  As chipmunks are not for anything, save for chipmunking itself, so are communications as “cultural productions” not for anything, save communicating.  They are in no way dependent upon any “pragmatics.”

As “cultural production,” however, communicating, we might say, is co-mund-icating, from the Latin mundus, “world”:  Communicating with one another, we share, and in sharing build, our world, a human place to dwell, which—“dwelling”—is itself, in turn, keeping on communicating.

So understood, communication as such has nothing to do with the transfer of information.  It is solely a matter of speaking and listening to one another, for no other purpose than just to keep on doing ever more of the same.  Or, rather, it is a matter of speaking together with one another and listening together with one another to what is being said in our talk, using that talk to give voice to itself.  In that process, we refuse to reduce ourselves, as parties to the conversation, to anything we may know—or think we do—of who we are.  Eschewing such presumption, we share a friendship that clears an opening for each of us to be whoever we may chance to come to be.

To return to reading (in fact, we have never left it, as those of us who read as Hamann would have us read will already have read):  Reading is a form of listening to another—an always anonymous, unknown other—attentive not only to what that other says but also and above all to what that other intends to be understood by what she says, as Hamann puts it.  It is to leap ahead of the other, and help clear the way for the other to get where that other is going—even if that does not occur “this side of the grave,” by the way, to borrow a phrase from Gregory Bateson.   Reader and writer go hand in hand together to wherever it may turn out they are going together.  That is even and especially so when the going just keeps on going, generation after generation, as it will with reading anything worth reading (even if nobody ever reads it).

As with all listening, the challenge in reading is to become and remain an equally nameless and unknown friend to a nameless, unknown other whose writing one reads, which is to say to whom one listens.  In turn, the challenge in becoming and remaining such a friend lies in steadily refusing to care one whit for whomever it may be who authored whatever one is reading—that “cultural product” of communication written to no end other than that of communicating.

To put the same point personally, the best way to read what I write is not to give a fig about me (to substitute a euphemism for a certain, oft-used, scatological phrase).  If you think you already know me, and that you can somehow help me to see—and to be—the same “me” you think you see when you look at me, then you are not going to read me.  At most, you’ll be involved in some pragmatic enterprise of coercing things to come into agreement with your own preconceptions, cutting everything (yourself included) down to a chosen idolatrous size of your own.

If you really want to do some “going-beyond,” as Laplanche puts it (at least in the English translation), then just read, which is to say listen.  Read/listen, and do nothing else—even and especially when you write/speak in turn yourself.  If you do that, then you will be open to hearing what itself goes beyond anything that may be said, to what is there to be heard in what is said, sounding through it.  You will hear not only the sounded speech but also the silence to which the sounds of speech give voice by breaking it, like a bell ringing out in the night.

If your reading becomes such listening, you will have become a Hamannian kind of reader, which is the very best kind.

* The translation is that of Kenneth Haynes, from the volume he edited of Hamann’s Writings on Philosophy and Language (Cambridge University Press, 2007), page 29.

** To my chagrin, when The Stream of Thought was published in 1984 in a hardbound edition in New York by The Philosophical Library, it did not sell enough copies at its market price of $27.50 to make me rich and famous and put me on television talk-shows, as I had, of course, hoped it would.  Though it long ago went out of print, I still have some authors’ copies of it left, which I’d be happy to sell today for the bargain-basement price of $14.95, plus shipping expenses of $5.17—it’s a big book—for a total of $20.12.  As a special bonus offer, for the first five book-lovers who send me their checks (I think I still have that many left somewhere), I’ll even include my autograph inside the front cover.  Just let me know if you’re interested, by emailing me at frank.seeburger@me.com, so we can arrange payment and you can get your very own copy.  (It makes a great doorstop, I should mention, in case you don’t want to read it.)

Published on January 31, 2014 at 4:32 pm

God, Prayer, Suicide, and Philosophy

A very brief post just to let my readers know that my new book, God, Prayer, Suicide, and Philosophy: Reflections on Some of the Issues of Life is now available.  If you would like a copy, please use the link I’ve provided to the right of my blog site.

Traumatic Selfhood: Becoming Who We Are (3)

This is the last of a series of three posts under the same title.

*     *     *     *     *     *

I want to begin by indicating where I am going.  So, to sum up now what is to follow, here’s what I’ll end up saying:

We become who we are not by coming to be, but by being to come.*

In other words, our being is a being underway.  To become ourselves is not to get to the end of our journey, but to stay always on our way.  Becoming ourselves at last is not finally getting all the becoming done.  Instead, it is giving up, finally, of all expectation of ever being done with becoming, which is to say with always keeping going along our way, always being underway.

*     *     *     *     *     *

At one point in MetaMaus (New York:  Pantheon Books, 2011) graphic author Art Spiegelman addresses his interaction with his father, Vladek, a Holocaust survivor, during the time the former was researching and creating Maus, his now classic graphic novel about the Holocaust and its aftermath.  In the context of that discussion, Spiegelman remarks (on page 36) on how, during the course of that interaction with his father, “Vladek displayed himself to be a much more complex character than I’d, literally, have imagined.”  He then writes:  “In a sense it’s like when people talk about a friend and say, ‘He’s not himself today.’  Well, we’re reduced down for convenience sake to a series of tropes and twitches, but we are none of us ourselves.  And that’s what makes us a self . . .”

He’s right:  What makes us a self is precisely that we are never ourselves.  To be a self is always to be out of sorts.

*     *     *     *     *     *

In Being and Time Heidegger says that the conversion from inauthenticity to authenticity—that is, from not being our own, to being our own (or, to put it just a bit differently, from not owning up to who we are, to owning up to it)—is not a matter of leaving the inauthentic behind, like some discarded garment.  To use some terms and examples of my own, the conversion from inauthenticity to authenticity is not like the metamorphosis of the caterpillar into the butterfly.  Nor is it like a snake shedding its old skin.  Rather, such conversion is a matter, in effect, of the re-contextualizing of the whole—whatever whole it is that is undergoing the conversion.

According to Heidegger the authentic self always arises out of the inauthentic, and always returns to it; the former never leaves the latter behind.  Another way of putting that is to say that one’s self is always one and the same self, both when it’s inauthentic and when it’s authentic.  It’s just that “authenticity” is authentically being the inauthentic self one always is anyway.

That’s always how it is for us—that is, for all of us, whoever we are:  each and every one of us.  And—just “by the way,” as it were—how it is for us is that most of the time we are mostly not ourselves at all, but just one of all those others.  Most of the time, we are mostly nobody in particular, but just anybody, just “someone or other,” a bunch of indifferent referents for the impersonal pronoun one. Heidegger’s good at pointing that out, too.

He points it out at length in Being and Time, first published in 1927.  He does the same thing in a much shorter—and, therefore, potentially much clearer—manner in The Concept of Time, written in 1924, containing an earlier version of much of the same material.  The German edition of that 1924 text was first published in 2004 as the 64th volume of the complete edition of Heidegger’s works, and the paraphrases and translations that follow are my own.

In the passage I have in mind from The Concept of Time (on pages 26 and 27 in the German version), Heidegger begins by observing that, in our everyday lives together with one another, we identify both ourselves and others with what we do—by which we ordinarily mean do “for a living,” as Americans, especially, put it.  That is, to speak in the vernacular of our global market culture of today, we define ourselves and one another by what we do for money, what we get paid to do:  our “jobs” or “occupations.”

What fits such monetary fixation especially well is what Heidegger says next, which is that, so identified, none of us is ever really her or his own.  Rather, we are all, as it were, owned by our jobs, or at least by whatever powers it may be who pay us for doing those jobs.  We might catch Heidegger’s drift by saying that through such common identification with what we do, we are all effectively dis-owned, which is to say stripped of belonging to ourselves:  Own-er-ship over ourselves is assigned elsewhere—namely, for the most part, to whom- or what-ever, even and above all if that turns out to be nobody and nothing in particular, holds the strings to the purse from which we draw our day’s pay.

As Heidegger observes, in such a situation, which is our everyday situation today, we are all equally dis-owned from ourselves.  In that situation, who each one of us is—in the jargon that has become universalized through modern philosophy: the “subject” of such everyday life—is captured by the indefinite personal pronoun “one.”  He writes (page 27):

The subject of everyday being with one another is “one.”  The differences maintained between one of us and another occur within a certain ordinariness of what is customary, what is fitting, what one lets count and what one doesn’t.  This worn-down ordinariness, which in effect noiselessly suppresses every exception and all originality, pervades and dominates “one.”  In this “one” [we] grow up, and more and more into it, and are never entirely able to leave it.

In short, insofar as all of us are “one,” then we are none of us ourselves.

And that, as Spiegelman says, is what makes each of us a self in the first place.

*     *     *     *     *     *

The real problems start when we forget that what makes us ourselves is that we’re never ourselves but are always, as I put it earlier, “out of sorts” with ourselves—or, as I put it even earlier yet, that we are always “out of step” with ourselves.  In the struggle to get right with ourselves, to come into lockstep with who we are, so that we can be all of one sort, we enter that forgetfulness.  Surrounded by the fog of forgetting, we cling.

That closes off hope.

Recently, in a group setting, a friend of mine passed on something she’d herself heard—that the word hope should be heard as an acronym for “Hang on!  Pain ends!”  When I first heard that from her, what popped into my mind was the thought that hope could just as well be taken to be an acronym for “Hang on!  Pleasure ends!”  After all, both (pain and pleasure) do (end).

Hope itself need not.  However, it will, if one clings—which is to say “hangs on.”

Accordingly, the second thought that came to me after I heard the line about hope being a matter of “hanging on,” was that in my own experience it was the very opposite that opened into hope.  That is, for hope to spring up in one’s heart, all one really needs to do in the face of either pain or pleasure is to remember that both do indeed end, and let that memory bring one relaxation.

It’s worth noting here that one can practice such hoping.  Or, to articulate that a bit more fully, one can practice holding oneself in openness to the gift of hope.  Yet holding oneself in openness to receive what is given—and please notice the difference between “holding on,” as one might to some idolatrously cherished opinion, and “holding oneself in,” in the sense that one might hold oneself in openness to new ideas, rather than clinging as tightly as one can to old, familiar ones—is itself already to hope.  Therefore, to practice staying open to the gift of hope is already, as such, to have received that very gift.  So what I said at first is still perhaps best:  the practice at issue is the practice of hope itself.

That’s “victory”—the very victory that Kierkegaard says is the expectancy of faith!

In my own case, it was my faith in another friend at another time, a time quite some time ago now, that allowed that other friend to teach me how to be victorious—or, rather, to help me realize that I already was, by my very faith itself.  That other friend was a former student who had become a family friend, but whom we hadn’t seen for about ten years.  He came back into our lives at just the right point for me to be receptive to what he had to give me—in the process repaying me handsomely, by Nietzsche’s lights,** for having once been his teacher.

What my friend taught me was the essence of the practice of meditation, at the very most basic level of responding to one’s own body’s response to the physical pain that accompanies holding oneself in an assigned physical posture when one meditates.  The particular form of meditation he practiced himself and passed on to me was a Buddhist one of sitting meditation, and my back-then-not-even-old body sent me signals of pain, primarily but not exclusively from my knees, when I tried holding myself steady in even the least stressful basic positions on a cushion.  My natural bodily response to those signals was, of course, to tense toward the pain, trying to isolate it and draw away from it.  What my former student taught me, his erstwhile teacher, in turn was to try to counter—as in “en-counter,” and not as in “go against,” which is to say resist—that tendency.  I was, instead of holding on against the pain, to hold myself open to it.  He promised (and I trusted his promise, since he spoke with no more authority than that of love, by the way, a way to which I’ll return below, I promise) that if I practiced doing that, I would discover something that is easy to say but not so easy to do.  I would discover that the very endeavor to avoid pain, to tense in the presence of it and struggle to withdraw from it—that is, to hold on against it—only worsened the pain, and prolonged it.  Whereas, of course (and as therapy for chronic pain sufferers teaches them), by relaxing toward pain—letting oneself go into it—one cleared the way for the pain to pass in its own time, and to end, as all pain (as well as all pleasure) will end, if we but let it.

*     *     *     *     *     *

The love that my younger friend, my former student, gave me that day was nothing smarmy or sentimental.  That is, it had nothing of the clinging, voluntary or involuntary, to self and selfishness that itself so often clings to our love, distorting and perverting it, robbing it of the fulfillment of its own most defining intention and making it altogether miss its own mark, dis-owning it of itself.  It was wholly “disinterested” love, in the best, truest sense of that:  a love that took no interest in itself at all, but gave all its interest to who or what it directed itself to—in this case, myself.  My former student now turned teacher in turn made no effort, on the occasion in question, to “fix” me in any way.  He made no effort to take any of my cares away.  Rather, to use Heidegger’s way of putting the matter, he went ahead of me and cleared the way a bit so that I might the better take up those cares for myself, since after all they were indeed my own.  By clearing the way a ways that way, he let me stay underway on my own way.

That’s what love’s got to do with it, with the business of becoming who we are (which is not at all the business of GM or any other business, by the way).  As St. Paul says somewhere, without that, everything else counts for nothing, and less than nothing.  That includes the other two of the true “Big Three” (to use a now no-longer very business-wise useful phrase from the business world) Paul names for us:  faith, hope, and love—those three.

*     *     *     *     *     *

I want to end this series of three posts on the trauma of selfhood, of becoming who we are, by going back to where I started, in the sense of where the thought of this series first arose for me, which was in reading the works of the twentieth century French psychoanalyst Jean Laplanche.  Or at least I want to begin to end this series there, since where I’ll actually end it will be somewhere else.

In “A Short Treatise on the Unconscious,” the second essay in the collection of his Essays on Otherness, Laplanche characterizes the classic psychoanalytic situation in which the analysand (the one being analyzed) lies on a couch behind which the analyst sits, out of sight and for the most part silent while the analysand does the speaking, as one of enclosure.  That is, it is a situation designed precisely to enclose the analysand, just as the dark of night encloses us as we walk along alone in it.  Laplanche is concerned to point out that it is precisely because of this being enclosed by and within it that, for the analysand, the analytic situation “constitutes an unprecedented site of opening, one which is, properly speaking, quite unheard of [elsewhere in ordinary] human experience.”

What the analytic enclosure opens the analysand to is nothing other than herself or himself—only herself or himself as always and ever outside herself or himself.  The analytic space, or its like (if it has any likes, as I will suggest it does, and yet still doesn’t, below), provides each of us who may enter into that space an enclosure, which is to say in effect a “safe” place, in which we are granted space to be those very selves we are, but which we can be only insofar as we are all always “beside ourselves” (to use a wise phrase from our everyday, ordinary way of speaking, which we ordinarily do not use so wisely).  In psychoanalysis, of course, that self that is always beside oneself, yet always at least a stutter-step off one’s own pace, is called the id, which is Latin for “it.”  That’s the psychoanalytic way of saying what one ordinarily says by “one,” in the sense Heidegger points to when he observes that the “subject” of everyday life is just “one,” which means everybody alike and nobody in particular.

Hence, right after remarking on how the analytic enclosure provides “an unprecedented site of opening,” Laplanche goes on by writing:  “Let us remember that if the id has its origin in the first communications, [nevertheless, and for that very reason, in fact,] what is proper to it [as “it”:  Latin id] is that it does not talk.  What brings the id to language, and more broadly to expression, can only be the result of the complex process which is the analytic treatment.”

The (very Heideggerian) note I made to myself when I first read those lines from Laplanche is also worth citing at this point:  “The id is the un-said of the said.  As such, it is what sounds by breaking into the silence broken by the speaking of language.”  What I mean is such speaking as the analysand does in voicing free associations, or recounting dreams, or, in general, just droning on and on in the enclosure provided by the (often deeply irritating) silence of the analyst, who just refuses to jump in and do the analysand’s work for her, and by that very refusal creates the remarkable—indeed, “properly speaking, quite unheard of”—“site” where the unheard of can be heard, precisely still in and as the never said, and therefore never heard from.  Such speaking is the breaking of the silence that lets the silence itself be heard.

I have never myself been in psychoanalysis.  Nor have I at present any plans to go into that particular place of enclosure, as fine—and frightening—as Laplanche makes that site sound.  Nevertheless, I have a strong sense of having been in similar en-closing-ly safe-scary places, where I have found analogues of the analyst Laplanche also discusses.  One such place I have been is the enclosure of meditation, and my analyst-analogue, the one who guided me to and in that place of enclosure, was my friend and former student.  Another such place I have personal familiarity with is a meeting, any meeting, of Alcoholics Anonymous or any other Twelve Step group inspired by AA.

By bringing up such analogues to the psychoanalytic site, I am in no way meaning to suggest that Laplanche is wrong to say that that site itself is truly “unprecedented.”  What I mean to say is that all such sites are equally without precedent, equally, “properly speaking, unheard of”—each and every one of them.

All such places are utterly irreplaceable.  That is, there is no substituting of one for another, any more than one can substitute one love for another, at least if love owns up to itself.

Indeed, all such places, each and every one, are places of love, which is to say unprecedented, unheard of places where we are at last allowed to become who we are, without ever being it.  And it is only in such places, the places of love, that we are ever allowed to be ourselves, even and especially when we are utterly beside ourselves, out of sorts, not ourselves at all—but always betting everything on the come.

After all, that’s always what love awaits, isn’t it?

* Whether that remark is salacious or not, depends on the ears with which it is heard:  If the hearing is attuned to coming to be, it will be; otherwise, not.

** Nietzsche says somewhere that one repays a teacher badly who remains always only a pupil.

