I’ve been stewing over ideas for blog posts over the last while about the election, my philosophical research, and assembling my final thoughts on Finnegans Wake since I finished it last month. But the past few weeks have been busy with work and plans to attend conferences. Then, just as I happened to get a few minutes, Osama bin Laden was assassinated. And since I never manage to update this frequently enough to generate a serious readership beyond immediate friends and any intelligence agencies scanning the internet presence of young intellectuals, I thought I’d just muse about this until I felt like stopping.
I can’t say much that foreign policy experts and the more frequently updated corners of the internet haven’t already said. But when I heard the circumstances of bin Laden’s burial, I knew what was coming next: Donald Trump spinning a ridiculous series of accusations that Obama faked the entire raid just to embarrass him. After all, the raid came the day after Obama and Seth Meyers humiliated Trump at the White House Correspondents’ Association dinner, and that’s too much of a coincidence to be a coincidence.
This is the way conspiracy thinking works, after all: nothing is a coincidence if it can be understood as integrally connected to other events. Actually, a long-term philosophical project of mine is to analyze conspiracy thinking as the ultimate irrefutable argument: even clear statements of fact against the claims of the conspiracy prove its truth, because any argument or evidence showing the conspiracy theorist to be wrong can be understood as planted by the evil conspirators themselves. In a philosophical context, this challenges irrefutability as the most important feature of a true account. But in a political context, it works very differently.
Obama’s best joke against Trump at the Correspondents dinner was a line connecting his boosting of the Birther conspiracy with ridiculously outlandish ideas. Now that the long-form birth certificate has been released, said Obama, “we can move on to the truly important matters that face our nation: Did we fake the moon landing? What really happened at Roswell? Where are Biggie and Tupac?” These are scenarios so zany, they can be dismissed by most people.
But contemporary conspiracies – 9/11 Truth, Birtherism – are deeply politically partisan. I have a rather apolitical friend who actually believes both theories, or at least considers them plausible. But he’s an outlier, because the American conspiracies of the 21st century are firmly divided along political lines. 9/11 Truth, or Trutherism, is a conspiracy of the left, of those driven into such partisan rage against the Bush/Cheney Administration that they took the gaps in the evidence and the sheer monstrosity of the event, fuelled them with anger, and grew convinced that the American government caused the September 11 attacks, whether by launching missiles into the buildings, destroying them from inside, or remotely controlling the planes themselves.
Then, during Obama’s campaign for the presidency, rumours began swirling that he was not born in the United States, and was therefore ineligible for the presidency according to its constitution. This is a conspiracy of the right, prevalent among the Tea Party, tacitly tolerated by congressional leaders like John Boehner, and openly endorsed by congressional rebels like Michele Bachmann. And most recently, Birtherism has been the key rant of the Trump pre-campaign. Critics of Birtherism have connected it to implicit racism, the unspoken feeling, probably largely unacknowledged by many of its believers, that a black person should not be president of the United States. At least that’s the joke: if he were white, we wouldn’t be questioning Obama’s qualifications.
The sad part is that the Birther conspiracy was started by desperate partisans in the Hillary Clinton campaign in 2008, before it was picked up by the American right. However, I’m at least slightly bemused that conspiracy theorists of left and right in America can find some common ground in the overlap among their main paranoiacally concocted secret plans. If you watch Zeitgeist, one of the better-known underground documentaries advocating 9/11 Truth, it actually connects Truther principles about a government conspiracy to control the Middle East with a supposed Jewish conspiracy to control the international financial system and erode democracy from within its institutions by implanting surveillance microchips in the human body. That’s right: the flagship conspiracy of the West’s paranoid secular left is a grandson of the anti-Semitic Protocols of Zion.
See, that’s how you can tell conspiracy theory lineage: look for which secret societies they have in common. The secret societies don’t really exist, of course, but the theories insist they must exist in order for these real events to have happened. If there’s one thing a conspiracist can’t tolerate, it’s that the world is just messed up and terrible things happen without any secret intelligence directing it all.
I was expecting conspiracy theories about faking bin Laden’s death to arrive soon, probably from the Trump camp. The best way to discredit Obama, after all, is to tar him with the brush of conspirator. And discrediting Obama results in Republican victories. But it seems that this could be a conspiracy of the left in America, as well as of the right. I’m sure Trump will advocate the falsity of the Abbottabad raid as soon as he and his Celebrity Apprentice writers assemble enough epithets. But the first advocate out of the gate saying the government faked bin Laden’s death is one we haven’t heard from in a few years: Cindy Sheehan. She’s the activist who led many protests against the invasion of Iraq, and she was the first one to capitalize on the lack of photographic evidence and the quick disposal of the body. So maybe the far left's disappointment with Obama will result in a merger of the Fake Bin Laden Death conspiracy with the 9/11 Truth conspiracy, and lump the Democrats in with the Republicans as evil manipulators of a free public.
I never thought I'd sound radical advocating for listening to the government and believing in simple answers to questions.
Thursday, April 14, 2011
Conference Diary: My Free Dinners with Marxists*
For the first time last week, I visited York University, an enormous modernist compound in the middle of industrial parks north of Toronto, adjacent to a distant, isolated slum. It was for a conference their department of Social and Political Thought organizes every year, and because my friend who attends Osgoode law school lives on campus there (thanks again for use of the couch, Kyle!), I decided I would go. I presented a paper that took some of my ideas about the contingency of existence and a Nietzschean political philosophy into the context of postcolonialism. Normally, my writing wouldn’t be quite so reaching, but going to a department that’s outside philosophy proper, I gave myself some liberty with composition.
I did see some very interesting presentations, including some people who knew a whole hell of a lot about Theodor Adorno, and a lot of Marxists. It’s rare that someone from a philosophy department comes across such a concentration of academics who genuinely seem to believe in political revolution of the global working class. It was refreshing, and I think more traditional philosophy departments could learn something from interacting more regularly with these differently oriented departments and groups.
McMaster University has a lot of guest speakers come to its department to give talks; we have a weekly Friday series during the Fall and Winter semesters just for that reason. For the most part, the guests are people from other universities around southern Ontario – some just commute in for the afternoon – but some come from far-flung locations like southern Illinois and North Carolina. In the past year, we’ve hosted a conference on the anniversary of Russell & Whitehead’s Principia Mathematica that drew logic and history of philosophy scholars from all over the world. Our upcoming philosophy of law conference will have delegates with a similar diversity of origin. But going to a place like the Social and Political Thought conference made me realize that despite the diversity of people who visit McMaster, they’re all also kind of the same.
It’s not that everyone who visits McMaster has the same answers to philosophical questions. I’ve seen some epic arguments on a variety of topics. But there’s a remarkable amount of common ground on which questions to argue about. In a way, I think this is just about people’s habits. An area of philosophical inquiry is a region of thought that a person – professor, graduate student, general thinker – is comfortable moving in. But beyond simply the comfort of familiarity, a philosophical inquiry is a set of open questions that require continued exploration, literally a lifelong and life-defining project. If you’re interested in developing such a project, you’ll be drawn to people talking about the same types of problems, compatriots with whom you can work to develop the ideas that have come to define your professional existence.
There are no Marxists, critical theorists, Frankfurt School specialists, anti-capitalist revolutionaries, or postcolonialists in our department. So those problems aren’t going to be on its professional radar, and the types of questions those thinkers ask won’t come up. In the same way, a lot of the intriguing questions asked at McMaster Philosophy will never come up in York Social and Political Thought. I stuck out like a spotlight over there talking about Richard Rorty. If you’re the type of thinker who does good work through focus on developing a specific path, then it won’t matter to you whether other groups of people are interested in other problems. But I find myself thinking that an inquiry can be revitalized, or at least given a healthy shock, by exposure to ways of thinking that diverge from the habits you might be used to. That’s what draws me to interdisciplinary conferences and gatherings of different sorts of people. Some folks would find that diversity confusing, while I find it challenging. At the same time, I find the inquiry style of a specializer boring and in danger of insularity, while other folks do their best work in that context.
People are built differently, and are better and worse at different tasks.
* Ever since the “My Dinner with André” episode of Community a couple of weeks ago, I’ve been incorporating references to that movie into different conversations I’ve had. I like to think this isn’t sad.
Saturday, April 9, 2011
Wake Diary: Tangents of Philosophical Wisdom
When I would tell my friends and concerned loved ones that I was reading Finnegans Wake, they worried for my general sanity. After they realized that I had gone long enough without general sanity that I never really needed it in the first place, they were still concerned that I would waste months of my life reading a book that made no sense. This post isn’t about following the plot or symbolism of Finnegans Wake: there’s too little of one, and too much of the other for that. This is about a phrase I found a couple of weeks ago, but am only getting around to writing about now, that actually sums up rather well what I think about problems of individual knowledge in philosophy.
"What can't be coded can be decorded if an ear aye sieze what an eye ere grieved for."
You might think that strange. And it is. But this actually made quite a lot of sense to me as an expression of my attitude towards how knowledge problems are manufactured and solved. Go through the phrase bit by bit.
“What can’t be coded”
We fail to have knowledge of something in two general ways. We may have no way to access it because it might be too far away, too large, or too small, and we haven’t figured out the right technology to observe it yet. We had no knowledge of Jupiter’s moons until we developed telescopes to see them. They were always there, but couldn’t be seen. But this phrase responds to the second, more problematic kind of unknown: that which might be part of our everyday world, but which we don’t know how to understand. It’s the problem of the unknown unknown, an object or a situation which we don’t even know we’re oblivious to, because we can’t even conceive of it existing. We can’t search for it, because we wouldn’t even know how to search for such a thing.
“can be decorded”
I like the wordplay, combining the senses of the terms ‘decoded’ and ‘untangled,’ as if we were trapped in a mesh of ropes from which we had to figure out how to disentangle ourselves. And the ropes in which we’re stuck are metaphors for our perceptual habits, the ways of thinking that we’ve become used to and don’t bother to question. But all ways of thinking are limited, leaving parts of the world unknown to us, parts we don’t even understand how to search for or conceive of. But we can discover unknown unknowns by decoding the patterns in language that we don’t understand, taking those patterns apart and reverse-engineering them.
How do we do that?
“if an ear aye sieze what an eye ere grieved for.”
I wish I could remember where I heard this joke, but someone once told me a joke about being stuck on an airplane next to a blind man who was laughing at a video of Mr Bean. The joke works because someone with no visual perception is somehow enjoying comedy that’s entirely visual performance. His ears should “grieve for” visual humour because they’re incapable of perceiving it. Our ability to think abstractly lets us experiment with concepts so we can develop new powers of thinking, which allow us to figure out the patterns by which some unknown unknown exists in the world, so we can learn to search for it. Once you learn how to search for something, you’re able to find it, and to systematize your discovery into the framework by which you understand the world. Conceptually speaking, we can grow an ear where before we may only have had eyes. That’s how you solve the most interesting problems of knowledge: by figuring out how to perceive the world differently than you ever had before.
"What can't be coded can be decorded if an ear aye sieze what an eye ere grieved for."
You might think that strange. And it is. But this actually made quite a lot of sense to me as an expression of my attitude towards how knowledge problems are manufactured and solved. Go through the phrase bit by bit.
“What can’t be coded”
We fail to have knowledge of something in two general ways. We may have no way to access it because it might be too far away, too large, or too small, and we haven’t figured out the right technology to observe it yet. We had no knowledge of Jupiter’s moons until we developed telescopes to see them. They were always there, but couldn’t be seen. But this phrase responds to the second, more problematic kind of unknown: that which might be part of our everyday world, but which we don’t know how to understand. It’s the problem of the unknown unknown, an object or a situation which we don’t even know we’re oblivious to, because we can’t even conceive of it existing. We can’t search for it, because we wouldn’t even know how to search for such a thing.
“can be decorded”
I like the wordplay, combining the senses of the terms ‘decoded’ and ‘untangled,’ as if we were trapped in a mesh of ropes that we had to figure out how to disentangle ourselves. And the ropes in which we’re stuck are metaphors for our perceptual habits, the ways of thinking that we’ve become used to and don’t bother to question. But all ways of thinking are limited, leaving parts of the world unknown to us, and that we don’t even understand how to search for or conceive of. But we can discover unknown unknowns by decoding the patterns in language that we don’t understand, taking that pattern apart and reverse-engineering it.
How do we do that?
“if an ear aye sieze what an eye ere grieved for.”
I wish I could remember where I heard this joke, but someone once told a joke about being stuck on an airplane where a blind man was laughing at a video of Mr Bean. The joke is funny because someone with no visual perceptual ability can understand comedy that’s entirely visual performance. His ears should “grieve for” visual humour because they’re incapable of perceiving it. Our ability to think abstractly lets us experiment with concepts so we can develop new powers of thinking, which allow us to figure out the patterns by which some unknown unknown exists in the world, and we can learn to search for it. Once you learn how to search for something, you’re able to find it, and systematize your discovery about the world into the framework by which you understand the world. Conceptually speaking, we can grow an ear where before we may only have had eyes. That’s how you solve the most interesting problems of knowledge, by figuring out how to perceive the world differently than you ever had before.
Monday, March 21, 2011
We’re All Different, But We Can All Be Understood
Errol Morris had an intriguing series of essays published this week at the New York Times. They are entitled “Incommensurability,” and are an exploration into a philosophical idea about the social nature of science and knowledge. It turns out that Morris took a graduate seminar in philosophy from Thomas Kuhn, a writer from whom I’ve stolen some very good ideas. The climax of this relationship, from Morris’ perspective, was when Kuhn threw an ashtray at his head. The reason for this assault was Morris needling Kuhn about a problem regarding incommensurability.
Kuhn was a scientist and a historian of science more than a philosopher, but the ideas he had to formulate to make sense of his interpretations of science’s history were deeply philosophical. Key to Kuhn’s own understanding of the history of science, and the focus of Morris’ essay, was the concept of incommensurability. Science was not a progress toward better and better knowledge of the world, as traditional ways of writing its history would have it. The history of science actually consisted of a variety of models, ways of understanding the world and articulating problems that are largely unrelated to each other.
Revolutionary periods in science were the times when new models were created and became prominent enough to challenge the old ones. This usually happened when some problem that the old model couldn’t make sense of became too noticeable to ignore. Those practicing one model understood the world in a totally different way than those practicing another. The terms of one model only make sense within that model; to translate terms from one model to another would strip away all the distinctive characteristics of the translated model. This is what it means to be incommensurable.
Morris explains that he confronted Kuhn with a problem of incommensurability: If two broadly defined ways of seeing the world were truly incommensurable, which Kuhn assured him they were, then a historian of science in the mid 20th century could never truly understand the scientific worldview of the medieval Europeans or ancient Greeks. The history of science itself should be impossible. And the ashtray flew.
Morris goes through several intriguing examples from history, philosophy, and the history of philosophy to illustrate his points about the problem with the incommensurability concept. They are quite fascinating, but they all add up to the same point: if different models of understanding the world are genuinely incommensurable, then holders of different models shouldn’t be able to understand each other at all. Yet the conflicts among models seem possible precisely because the partisans of one model understand the other. Read the articles and think about it.
Are you finished? Good.
I first heard of Errol Morris when I saw his documentary about the career of Robert McNamara, The Fog of War. I thought it was a brilliant exploration of how a sharp, intelligent, and empathetic person found himself becoming the architect of one of the most terrifying mistakes the American government ever made: its invasion of Vietnam. As I started hearing more about Morris’ history, I was less impressed.
When his filmmaking career began, Morris was friends with Werner Herzog, and would always talk to Herzog about this idea he had for a documentary about pet cemeteries and the people who use their services. But he would always come up with excuses as to why the film could never get off the ground. Finally, Herzog said that if Morris ever actually got his film made, Herzog would eat the very boots he was wearing at the time of the challenge. Morris made Gates of Heaven, and at its festival premiere, Herzog ate the boiled shoes from the challenge. The result was another short documentary: the hilarious Werner Herzog Eats His Shoe. But it took a shoe-eating challenge from Germany’s greatest living director to get it off the ground. I discovered on the commentary for Herzog’s Stroszek that this film came about when Herzog went to rural Wisconsin to help make a documentary about Morris’ early life. But Morris never showed up, so Herzog wandered around small-town Wisconsin himself, coming up with ideas for the film that eventually became Stroszek. A wonderful result, but born of Morris’ scatterbrained laziness.
Perhaps despite these habits of his personality, Morris has written a fine series of articles that walk a general audience through complex philosophical problems. The project suffers, I think, from the prominence that having an ashtray whipped at his head plays in his memories of Kuhn. That confrontation colours his entire view of Kuhn: with every interaction they had about what incommensurability meant, Morris took Kuhn's anger as a sign that he was getting to the older man, forcing him to deal with something he didn't want to admit. Having won the staring contest, Morris presumes his suspicions were right, and doesn't consider the miscommunication he and Kuhn could have had from the beginning. I don't blame him for being affected by nearly being knocked out with an ashtray, but there is more nuance to Kuhn's (or at least Kuhn-inspired) thinking than Morris suggests.
It doesn’t require a purely objective perspective, a god’s eye view, or a view from nowhere to understand a way of making sense of the world that is alien to your own. All you need are skills of observation and disciplined, careful imagination. I think Morris makes a mistake in treating incommensurability as the absolute separateness of one way of understanding from another, such that someone who thinks according to paradigm A couldn't possibly understand anything of paradigm B. If this were true, there would be no way for anyone to do any history of science at all: every view that differed from our own would be dismissed as nonsense. But one can reflect on one's own premises of thinking, and do the same for any paradigm of thinking one cares to investigate. In understanding how a paradigm of thought arises and evolves, one understands that paradigm.
Incommensurability is a matter of practical work, not pure understanding. A phlogiston chemist can't test for oxygen, because the structure of phlogiston chemistry doesn't include oxygen, or much of the periodic table. That phlogiston chemist could learn the basic concepts of a periodic-table chemist, just as the periodic-table chemist could learn how phlogiston theories work. But you couldn't do chemistry experiments using both theories at the same time. Each can be understood from a perspective of self-reflexive criticism. But when it comes to the work, you have to choose one or the other.
Labels: Errol Morris, Film, Philosophy, Science, Thomas Kuhn, Werner Herzog
Sunday, February 20, 2011
Maybe a New(ish) Way to Do History of Philosophy?
The University of Western Ontario is starting a History of Philosophy roundtable, discussing, as the name implies, various topics in the history of philosophy. I’m of two minds about studying the history of philosophy – my attitude towards the practice is a mixture of enthusiasm, dread, and dismissal. The reasons why are a little complex, but that’s what blogging is for.
In my time as a graduate student, I've come across two approaches to the history of philosophy that seem pretty mainstream. One is history of philosophy as antiquarian studies: philology on writer X that seeks to get X right. The other is understanding historical developments in current terms: asking whether Aristotle was a functionalist in philosophy of mind – a question that makes no sense to me. It applies the concepts of a long-ago philosopher to current debates with little heed to the radically different contexts of the two bodies of writing.
I did my first few years of training in philosophy in a very historically-minded department, and I think I came out better for it. When I engage the work of a complex, difficult philosopher, I put a lot of effort into understanding their terminology, concepts, historical context, and the reasons why they thought the problems at the focus of their work were worth the trouble. I emerged with the ability to read a complex work in a very deep and careful manner rather quickly. You might think this leaned toward the antiquarian definition, and to a degree this was true.
But the individuals who played the biggest role in my education treated their historical subjects as specialties without any particular loyalty to them. At Memorial, I never worked on the history of philosophy with any professors who said their specialty writers were the apex of philosophy, or that those writers were the only ones to really get the universe right. I’ve come across that attitude among some students who work on the history of philosophy, and I hope it disappears from them.
My friend Jeremy once came up with the perfect definition of such a slavish historical philosopher: For a devoted X-ian, the only time X was ever wrong was when X himself said some element of X’s own corpus was wrong.
However, I’ve discovered over the past few years that I don’t want to work on the history of philosophy, or secondary material generally, as my main specialty. I don’t want to use my intellectual capacities in the service of illuminating the work of another writer. I don’t want to spend the bulk of my time arguing over interpretations of another writer with commentators whose careers are also spent on that same writer. I’m just not humble enough to be that subordinate, even to someone who has proven themselves as remarkable as Aristotle, Descartes, Hume, Kant, Heidegger, or Russell. I find secondary material to be writing about philosophy. But I want to write philosophy.
For me, the history of philosophy is a tool for creating concepts and working through contemporary social and ethical problems in philosophy. For example, I’m interested in Spinoza, but not just exegesis of Spinoza’s writings. He’s one of the few philosophers in the Western tradition for whom ontological matters – questions about being and what is – are closely integrated with ethical questions. This kind of reasoning is very important for my own work, but it’s difficult for mainstream philosophers to see this kind of convergence as legitimate. Being able to say that a big name like Spinoza did it too grants my ideas at least a small grasp on that legitimacy.
More than that, I engage with philosophy’s history to find the hidden subtleties of thought and the strange concepts in dark corners that we usually don’t mention to undergraduates in the field. I’m looking for peculiarity that can inspire, or strange elements that could have sparked a completely different revolution in philosophy but never caught on because of some social or institutional factor beyond the writer’s control (this is my view of why Johann Fichte didn’t invent phenomenology in 1801).
I’m interested in taking part in this roundtable at Western, provided I can get transportation to London three or four times during the next term. I revere none of these historical figures, although I respect many of them very much. And my applications of past to present are very indirect and convoluted. But I hope to find welcome, or at least sympathy. I’m not exactly someone who fits in.
Friday, January 28, 2011
Universally Rejected
One of the things I love about the internet is that random pieces of hilarity like The Journal of Universal Rejection show up. It perfectly illustrates one of the silliest paradoxes of academic culture, while working on multiple levels, especially given the weird philosophy I’ve been working on.
So the first interpretation I’ll explore is the simple satire. Academic journals have a way of measuring their relative prestige that I find remarkably strange. A journal gains prestige based on how many submissions it rejects. Now, there are other criteria of prestige, like the age of the journal, the number of articles it has published that became pivotal in the evolution of its field, and the reputations of its editors or regular contributors. But the shorthand prestige marker is the sheer statistical likelihood (or lack thereof) of actually having your submitted essay accepted for publication. If a PhD candidate like me gets an essay published in a journal with a 75% rejection rate, an established (if snobbish) professor or colleague may dismiss it as a relatively unimportant venue. “Oh, you had a one in four chance. Anyone could have made that!” But if you make it into a journal with a 95% rejection rate, that garners much more prestige.
Of course, anyone who actually knows how statistics works knows that this reduction of a peer evaluation, editing, and selection process to a fraction (1/4, 1/20) is a hideous oversimplification of an extremely complex process. But in most conversation, even among the supposedly most educated members of the human population, this little number is all that matters.
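To see why the single fraction is such a poor summary, here is a minimal sketch in Python (with entirely invented numbers, purely for illustration) of two hypothetical journals that apply the identical editorial standard yet end up with very different rejection rates, simply because they attract different submission pools. The headline number says as much about who submits as about how selective the editors are.

# A toy simulation, not real data: both journals accept any paper whose
# quality clears the same bar, but they draw submissions of different quality.
import random

random.seed(0)
QUALITY_BAR = 0.8  # the shared editorial standard

def rejection_rate(pool_mean, n_submissions=10000):
    # Simulate a pool of submissions and return the fraction rejected.
    rejected = 0
    for _ in range(n_submissions):
        quality = random.gauss(pool_mean, 0.15)  # a paper's quality, roughly normal
        if quality <= QUALITY_BAR:
            rejected += 1
    return rejected / n_submissions

# Journal A attracts strong, well-targeted submissions; Journal B attracts a
# flood of speculative ones. Same standard, very different "prestige" numbers.
print(f"Journal A rejects {rejection_rate(0.85):.0%} of submissions")
print(f"Journal B rejects {rejection_rate(0.60):.0%} of submissions")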
Academics themselves often take the criterion of a high rejection rate for granted, I think just because they’ve been acculturated to the idea for so long. Like the best satire, the Journal of Universal Rejection takes this simple principle and carries it to its logical extreme, so we can see how stupid it really is as a measure of a journal’s worthiness. If a higher rejection rate equals greater prestige, then the most prestigious possible journal will have the highest possible rejection rate: 100%. Literally no essay is good enough for its high standards.
We find this ridiculous, but the principle we used to arrive at the ridiculous conclusion is taken for granted and makes perfect sense. If we’re as intelligent as we say we are, we’ll re-evaluate just how useful this principle that we’ve never bothered to question really is. This is how satire can sometimes push us into ambitious and interesting philosophy.
As I thought about it this morning, The Journal of Universal Rejection also has some meaning for my own ideas about an ethics based on singularity. One of the problems I face when trying to articulate this ethical point of view is that it seems to paralyze activity. It starts from a principle arrived at through an ontological investigation, an examination of how the world is. That principle is that every situation and every individual is a singularity, a unique body that differs at least in some degree from every other. The result is that any universal principle or proposition will be a generalization that misses some of the singular features of the bodies to which it applies. A proposition that applies to many bodies in common won’t take into account various differences among those bodies. If it did, it wouldn’t be able to apply to all of them.
This means that any universal proposition can’t be necessarily valid: some difference among its members that the proposition doesn’t account for can create effects that render the proposition useless. To put it more poetically: Reality rebels against any attempt at unity. Or to put it more happily: Existence can surprise us at any time.
In order for a set of universal principles and propositions to hold, those rebellious features of reality have to be set aside. If the people who hold those universal principles and propositions want to maintain the widespread belief in the truth of their system, they have to convince people that these rebellious singularities do not in fact exist. The universal principle rejects the reality that surprises it. If enough of reality becomes surprising to the universal system that the rejections can no longer be ignored, then the system becomes ridiculous, like a government or corporation that denies reality.
Think of Mahmoud Ahmadinejad telling the United Nations that there are no homosexuals in Iran, or a British Petroleum executive telling Louisiana residents that there isn’t anything serious about Deepwater Horizon. These statements become ridiculous because reality has escaped their systems of universal propositions, which tell us that the singularities we can plainly see cannot possibly exist.
From ontology to ethics to politics in four paragraphs. Would anyone try to say philosophy is useless now?
Labels: Culture, Ethics, Philosophy, Politics, Singularity
Thursday, December 16, 2010
Crippled By Moral Sensitivity
A very funny moment happened during my first public reading of my short fiction. A friend was outside stumping for me, trying to get passersby along the Hamilton Art Crawl where I was performing to come inside and listen to me. One person asked my friend who I was, and she said that I was a PhD student in philosophy. This person then walked away faster.
I can understand that reaction. Academics and literature rarely go well together. It’s a very strange development to watch university MFA programs become the thriving new home for American short fiction. But those programs are actual creative writing programs, there to teach people how to write literature itself. They’re more like trade schools than academic institutions. And the MFA creative writing programs I’ve visited myself are free of a lot of the pretension and elitist attitudes of high-level academic institutions. Academics are often taught to keep their language dry, free from controversy, easily understandable, and unchallenging, and to stay away from ambition or broad scopes of meaning. I’ve never gotten along well with these academics, and I’ve worked best with philosophers who are just as hostile and apathetic toward the boring aspects of academic writing as I am.
But now that I have enough stories for a full set list myself, I’ve actually looked at my completed works so far and noticed an interesting trend. Half (or more, depending on whether you include writing about students and not just academic professionals) of my completed stories so far are about philosophers. Perhaps I’ve internalized the stereotypical adage of ‘Write what you know,’ because I’ve definitely gotten to know university and academic culture pretty well over the last few years. However, I think there’s a larger point that has snuck into my thoughts, which has to do with what kind of stories and what kind of characters I find interesting.
I’m most intrigued, as a writer, by hypocrisy. I’m not against hypocrisy per se; I never explicitly denounce hypocrisy in any of my fiction – neither the stories nor my novel. I’m a hypocrite myself. But I find that hypocrisy and inconsistency of character make for the most intriguing literature. I’ve never been all that interested in literature about characters who have no internal conflicts and just deal with problems that arise around them. I’m not into plots. I don’t like narratives structured around things happening. I’m far more fascinated by narratives that reveal strange, multifaceted characters. Inconsistency in the beliefs and desires that are most important to your character makes for an amazing literary exploration.
I think this is the more profound reason why my ideas for stories keep coming back to philosophers. We’re the so-called lovers of wisdom. It’s in the etymology of the name of the fucking profession. A wise person is supposed to be a person without serious internal conflict, a person without hypocrisy. We call people wise who can guide others out of personal conflict and into more harmonious lives. Philosophers study ethics itself, so we often hold our own ethical beliefs to a higher standard than those of people outside the profession.
The ethical and personal obligations of a philosopher – consistency in living, freedom from internal conflict – are at their highest intensity. A philosopher who becomes aware of his or her own hypocrisy or inconsistency of character is going to have the most intense conflict of anyone, because philosophers, more than members of any other profession, have the skills to analyze these concepts and so understand their own internal conflicts more deeply than people who haven’t been trained to be as articulate with ethical ideas. “Mobilization of the Oppressed,” which I just submitted to University of Toronto’s Echolocation fiction journal this afternoon, explores the disconnects from reality that can happen when you firmly believe that knowledge makes one moral. “My Perfect Lover” explores the hypocrisy of a man whose desires and emotions lead him to use his skills of reasoning and argument to defend a regime of slavery that he knows to be wrong. I have an untitled story in draft form about a professor whose drive to discover through philosophical argument the nature of a perfectly benevolent God turns him into a bitter old man incapable of love.
I thought of another idea today about a philosopher, the idea that made me realize my profession was such a frequent subject for my fiction. This philosopher is deeply committed to his utilitarian ethical beliefs and arguments: that the rich should give almost all they can to alleviate poverty, that the North is morally obligated to bankrupt itself to feed the South. But as he comes to this ethical stance, he realizes that the institution of the university is incorrigibly inegalitarian: according to his deeply held ethical beliefs, he shouldn’t hold a position that trains the upper-class elites of affluent North America and be paid from the profits gained by forcing thousands of young people each year into crushing student loan debt. But by the time he figures this out, he has his own family to support: children to feed and put through school. By his own philosophical beliefs, he should sacrifice the well-being of his three children to alleviate the pain of suffering millions. But when he goes home to see his own kids, he can’t. So he goes back to a job he hates every day.
Perhaps one day, I’ll publish a collection of stories about philosophers and their conflicts and hypocrisies. I might call it Thinkers. Perhaps it will be valuable.
Sunday, December 12, 2010
Arrogance Is Philosophy’s Most Widespread Paradox
Over the past couple of years, I’ve been building myself a tidy little transdisciplinary specialty that I like to call Critical Theory of Knowledge. The essays I’ve presented and published through the Book Conferences in 2010 in Switzerland and 2009 in Edinburgh are my major public efforts in this so far. But a kindly old professor once told me that it’s always good for a philosopher’s career if you can put something to do with knowledge and/or epistemology somewhere prominent on your CV. This suits me pretty well.
My main thrust is, at heart, to take knowledge and rationality off their high horse, without falling into the traps of post-modernism that would keep me from being read by people who still venerate rationality. Preaching to the choir might be an easy way to sell books, but I never took the easy way out. In my two published essays, I examine how peer review works in academic journals, and how attitudes of arrogance on the part of professors, editors, and article reviewers can stifle creative, unorthodox ideas and render a field of study moribund and stagnant. My critique goes something like this: if someone has worked hard enough to become widely recognized as an expert in their field, they tend to take their own ideas as gospel. They’re the expert, after all, so their perspective on their field is the same as the truth. When someone disputes that perspective, the first response of a typical expert, working under this premise, is that the disputer is wrong. I wrote about this last week, so you can just scroll down to 3 December for a more detailed treatment of the argument.
Attitude matters far more to whether novel, creative approaches to a field get disseminated and taken seriously than most people realize. With this focus on attitude in mind, I remembered a curious commonality in my academic life. Keep in mind that this is entirely anecdotal, but what’s most important to take away from this story is not a certain truth, but an intriguing idea, a particular point of view, a conceptual nudge to the ribs.
Some of the most arrogant, curmudgeonly professors that I’ve ever met, the quickest and most vicious attackers of ideas that didn’t fit with their own established conclusions, were all devotees of Benedict de Spinoza.
A confession: I hadn’t read a book by Spinoza in its entirety until this past week. Actually, I was emphatically turned off Spinoza’s philosophy by the egotistical and pretentious way it was presented to me in a class I took when I was 19. Spinoza wasn’t even on the class curriculum, but the professor would go on and on about “the divine beauty of Spinoza” in a way that communicated none of the important ideas, and just delayed us from covering the actual course material. With my current doctoral project using many ideas from the ontology of Gilles Deleuze, Spinoza has turned out to be important background reading. Deleuze’s big book on Spinoza, Expressionism, was the first presentation of his philosophy that made me feel good about it. This week, I’ve barrelled through Spinoza’s Ethics. And I’ve found something very intriguing.
Spinoza’s book Ethics is a philosophical guide to living. It’s written as a series of geometry-style proofs about the nature of God, existence, thought, emotion, and reason, whose ultimate goal is to indicate how one can live through the guidance of reason, and so live a life of joy and exultation in existence itself. Pretentious? Maybe more than a little. Uplifting? Inspiring? Definitely! How could such a book, written with such sincerity by such a generous, magnanimous, and admirable personality, inspire such arrogance among some of its devotees? The picture assembled itself slowly, but I’m convinced that I’ve worked out how it happens.
Spinoza has little time for people who live according to their emotions alone. They’re passive before the fluctuating situations of life, living as slaves to forces beyond their control. Part four of five, on how emotions can wreck someone’s thinking and personality, is actually titled “Of Human Bondage.” And he’s a master of the subtle burn. Reading the Ethics, I find myself laughing at a book laid out like a mathematical proof, because of the cunning ways he inserts light-hearted jabs about people who let their emotions carry them away, or who generally don’t think and live “guided by reason.”
And then it hit me. It was a sudden realization, which one should generally mistrust, but as I thought about it, the idea just made so much sense. In part four (Of Human Bondage), proposition 73, in the elaboration paragraph labelled the Scholium, Spinoza describes the strong person as one who lives guided by reason, a person who “hates nobody, is angry with nobody, envies nobody, is indignant with nobody, despises nobody, and is in no way prone to pride.” Yet when my Spinozist professors spoke to any students, colleagues, or even higher-ranked professors who expressed an idea hesitantly, or lacking detail, or fuzzily, or even just experimentally, the self-declared Spinozist would respond with anger, indignation, and spite. Anyone who articulated an idea with any less than the perfect precision with which Spinoza himself wrote and argued was dismissed and insulted with great condescension and arrogance.
But sometimes, an idea needs to be given a chance to percolate in one’s thoughts, to drift around conversations, displaying roughness, but also promise. A lack of clarity may obscure a bounty of potential. These self-declared Spinozists of my anecdotes attack and dismiss an idea for lacking that perfect clarity of expression that it may not yet have had time to achieve. Spinoza’s burns and jokes are written with no cruelty, but a pleasant wit. His barbs come with the extended hand of friendship, never the spitefully dismissive spirit that I have heard from the self-declared Spinozists who ruthlessly attack all ideas in progress, unfinished, incomplete. But the same words Spinoza wrote, when delivered with a tone of anger, are words of hatred, rage, and dismissal.
In Switzerland, I spoke about humility as the most difficult, but most important, task of a thinker. Humility is the ability to wonder, sometimes, whether you are on the right track: the expert must sometimes question his own expertise to avoid destroying the vibrancy of the field to which he’s committed his life. Sometimes, if you dedicate yourself to Spinoza, patron saint of a life lived guided by reason, you can say to yourself, “I’m a follower of Spinoza, so I must be guided by reason. If I’m guided by reason, I must be right, and it’s my duty to stop those who are still in bondage to their lesser instincts, who are not yet guided by reason as I am!”
I hope you see the parallel between that thought and my thought at the start of this post. The first sign that you’re no longer guided by reason is that you no longer think you need to check whether you’re guided by reason. Spinoza wrote that he who is guided by reason lives free from error, strife, or mistake. But the first and easiest mistake to make is to believe yourself incapable of mistake. That mistake is called pride.
Friday, December 3, 2010
Publication Diaries: The Problem with Subtleties
So I just sent in the publishing contract for my second essay to come out in the International Journal of the Book, “The Danger of Institutional Conservatism in the Humanities.” It will be available in the 2011 edition of the journal, and I’m quite proud of it. I’m not sure I’d say it’s the best work I’ve done so far, but it’s definitely the most experimental piece I’ve published in an academic journal.
As I learn more about the peer review process, especially its problems and difficulties (for details, see my article in the Book Journal last year), I think interdisciplinary journals are best suited for a lot of my philosophical writing. I’ve come to this conclusion for reasons that will sound very self-serving, if you want to interpret me maliciously. But I think my reasons are actually very insightful, if you interpret me charitably. I personally think it’s a very self-serving insight, but quite insightful nonetheless. I’ve noticed in academic culture that the more specialized one’s knowledge is, the more zealously one tends to guard one’s perspectives from critique. In learning more and more about an increasingly specific subject matter, one tends to acknowledge one’s own expertise: at a particular point, different for everyone, one starts to presume that one’s own perspective on the subject matter is the right perspective. “I am the expert,” says the expert, “so my own knowledge is the standard of my field. If it weren’t the standard, then I wouldn’t be an expert.” These people are very often submission reviewers for the academic journals in their specialty.
This attitude creates a potentially terrible problem for creative thinkers, especially people who are younger and/or less experienced, still trying to establish themselves in their field. Such a young person, a new entrant, may have ideas that differ from those of the established experts. Being newer to the field, they don’t yet have the experience or prestige that a long career in a specialized field offers. But they may also have innovative new ideas and approaches, which may not be compatible with the approaches of the experts. And if an established expert has come to identify their own way of thinking as the only way of thinking, then that new writer will be held to be wrong, when they may simply disagree or take a different approach. The expert will reject their work, preventing an innovative approach from being disseminated.
At this point, I think it should be clear that the person I’ve been calling a specialized expert is better titled an academic curmudgeon.
I think this attitude becomes more prevalent, or at least more likely to be encountered, in highly specialized academic environments. This, right now, is just a matter of anecdotal evidence, but the anecdotal evidence is beginning to stack up. What this has to do with my mutually beneficial relationship with interdisciplinary journals is that one is less likely to encounter this attitude in a less specialized academic environment. So my own strange ideas and approaches are more likely to be given a chance than they would be in a highly specialized journal with a greater probability of curmudgeonliness.
My forthcoming essay is more experimental in form than any essay I’ve yet put in the public view. One of my former professors who read it described it as uncategorizable into any typical genre or division of philosophy. I took this as a compliment. He also called it cranky decades beyond my years, which I took as a backhanded compliment. When I presented it at the Book Conference in Switzerland last month, it was received with gaping mouths, and it took a while for the ideas to sink in for the audience. It’s a very dense essay for 4,000 words, and has some subtleties in its tone and language that may not be noticed.
The essay is a continuation of my critique of how academic knowledge is generated, and contains potential solutions to the ways in which a field of knowledge can become moribund, uncreative, and boring. Key to the solution, which I note (there and here) is much easier to talk about than actually to achieve, is an attitude of humility. One of my reviewers had no critiques of the content of my essay, but often told me to remove what s/he called ‘self-referencing,’ sentences starting with ‘I.’ I will admit that I didn’t follow this direction in every case, because I didn’t want to give the essay the tone of pure objectivity and distance that is one of the signs of the arrogance of the expert. Where I describe the attitude of humility, the reviewer annotated that I should rewrite my introductory sentences to display more of that attitude. It was cheeky, and I laughed, but s/he also didn’t understand the subtle point I was trying to make with my cranky tone.
The most difficult part about inculcating an attitude of humility in academic professionals is that our personalities, and academic society generally, are shaped to make genuinely humble attitudes immensely difficult. We’re rewarded for being distinctly smarter than our colleagues, and especially than the general public. There’s a casual disdain for undergraduates and ordinary students in academic culture that I never really noticed in universities until I was no longer one of those ordinary students. And I’m still uncomfortable with bragging in a non-professional context. It’s difficult for me to accept compliments about my work in philosophy and literature, because of the conflict it gives me: I want to be a humble, easily relatable person, but I also want to produce remarkable, superior, inspiring writing.
I tell my friends in the philosophy department how many different and intriguing ideas I have in the course of a week, and I feel awkward when they tell me they don’t have nearly so active a brain. If there’s one thing I don’t want to become, it’s an insufferable genius, even though I can see myself eventually heading for near complete Rain Man territory as I get older. Academics are not humble people, and our increasingly exclusive social circles of other graduate students and eventually other academics and highly educated professionals only encourage that attitude of superiority to everyday people.
So I wrote my essay about encouraging humility in a very superior, bordering on arrogant, tone. It’s an illustration, in the tone of the writing itself, of how genuinely difficult the task of humility is. It’s written by an arrogant man who knows, despite his own instincts, that his goal of encouraging innovation and works of brilliance (among which he counts much of his own work) will only be achieved by inculcating widespread attitudes of humility. The paradox unfolds along many different levels of articulation.
Brilliant, isn’t it?
•••
In other news, the new Kanye album is absolutely fantastic, and I don’t use the word ‘absolutely’ in a positive sense very often. It’s a very appropriate clip to end a post that talks about the importance of humility.
Labels: Kanye West, Knowledge, Music, Philosophy, Writing
Thursday, November 25, 2010
Switzerland Diary 4: Computers Exist, So Get Over It
About a month ago, I was talking to my friend Alanda for the first time in over a year. She was visiting her old friends still at McMaster philosophy after having moved to Barrie, gotten a teaching job at a college, and gotten married. One part of our conversation was about a new set of theories floating around educational circles about how to teach Millennials. This was a generation that had an entirely different perceptual understanding of computers, the internet, the temporal structure of the day. Millennials understood privacy, social interaction, how to behave in a classroom, how to learn, entirely differently than the generations before, because of their different relations to computer technology. She described them as a very alien society. It was then that, to her horror, I informed her that, having been born in 1983, I was a Millennial.
Normally, I don’t think this Millennial generational difference is that big a deal. But I saw some stuff at the Book Conference that made me think differently. The Book Conference had a different title when it began eight years ago, The Conference on the Future of the Book. The conference as I’ve come to know it in the last two years has covered many aspects of the phenomenon: literacy, education, book history, publishing business, the analysis of literature itself, intellectual and academic culture, and combinations and convergences of all these disciplines. But among them is a holdover from those early conferences: people who shook in their boots about the destruction of the book.
Their concerns were not Taliban-like anti-literacy movements, which exist and should be taken seriously and combatted. No, they were people scared of ebooks. Any new medium, like the electronic book, is going to have benefits and limitations. One advantage of ebooks is that they can be carried easily in large numbers. A library will be able to fit on an iPad. A limitation is the difficulty of controlling commerce in ebooks. They’ll be easy to download without financial recompense to the writer, so the economy of writers and books will have to change.
But I saw presentations and read essays about the popularization of ebooks that were conservative bordering on hysterical. I saw presentations that sought relevance for the physical book as a figure of fetishized pleasure, the turning of pages and the smell of ink deeply eroticized for the sake of preservation against the onslaught. I reviewed an essay for the journal that used Lacanian psychoanalytic concepts to vilify the ebook as destructive of the individual human subject itself.
Every one of these people who were so afraid of ebooks was over thirty years old. They were all pre-Millennial, members of the generation less used to dealing with electronic media, generally less comfortable on the internet, those who find reading from a screen more difficult, an alienating process. It’s such a stupidly hysterical point of view that I can’t really take it seriously. It reminds me of those people who thought the advent of television would destroy cinema. But I’m not going to argue by analogy, because an analogy can be easily argued against: that’s A and B, but this is X and Y, with very different characters.
I still think this point of view, the defense of the paper book against the onslaught of electronic media, is utterly counter-productive to the best thinking on the topics of books and writing. The ebook is a different kind of medium for writing, one that is more mobile, easily distributed, copied, and stored. It will no more destroy literature and publishing than digital video has killed filmmaking. I think, like digital video, the ebook will offer a cheaper distribution method that will allow even more independent writers and presses to flourish, and encourage experimentation with literary techniques and tools. People who don’t understand this, because they’re too old and set in their ways to be comfortable with a new medium of artistic expression, should be quiet and let presentation slots at prestigious conferences go to creative people instead.
Labels: Computers, Culture, Literature, Philosophy, Writing
Wednesday, November 24, 2010
Switzerland Diary 3: A Weekend of Stealing Ideas
What I like best about the International Conference on the Book, apart from the fact that they give me awards and take place in interesting places, some of which I can stay in for free (Ray’s apartment in Edinburgh, one of my many expat friends in Toronto next year), is that it’s an interdisciplinary conference that perfectly matches my career. It’s a venue where I can present and discuss my ideas that fall into the category of meta-philosophy, and there are enough people there talking about the publishing industry that I can brainstorm techniques for Crackjaw. Step one of being a web-based publisher: have a functioning website. I’ll get right on that, business seminar leader.
My own presentation impressed everyone who was there to see it, and because I was the award winner for my essay from the Edinburgh conference last year, I had a packed room in the first speaking session that morning. No one could really think of any questions for me at the end, though. I was told it was pretty dense. But later that day, after they had time to think about it, people from my audience came up to me and we had some really interesting discussions about how fields of study can become insular and moribund through processes like peer review and argument that we often think revitalize us.
I felt a little bad that I was scheduled opposite my new friend Liz, who I’ve referred to in previous entries as the couch surfer. But there just wasn’t enough audience to go around on a Sunday morning. An art historian presenting on genital lack in statuary should at least be solid academic entertainment and a genuinely intriguing essay. However, I will admit that I'm not a fan of Freudian models of desire as lack. But I couldn't actually make her presentation. Christina, a film theory grad student from University of Iowa, presented an intriguing study of Hmong-American literature. It was interesting to see the reactive generation writing about their experiences breaking away from the conservative culture of their immigrant parents. But for me, the really interesting stuff will come from the generation in the Hmong community after this one: right now, their authors are too polarized between being purely American or purely Hmong. It’ll be another couple of decades before there are young authors capable of genuine play.
Corrine, my friend that I met at last year’s Book Conference, presented an ancient (for us, anyway; it was three years old) paper about Charlotte Brontë’s use of writing in her work as a sign of freedom from gender constraint. For me, secret megalomaniac that I am, the best part about her presentation was a single line, which I think she improvised and that I can’t even remember, that spurred me to an idea for a chapter in my planned book about philosophical ethics written through dialogue with Herzog movies. I figured out how to structure a chapter that explained how Herzog crafted his duty to New German Cinema, and through that his duty to rebuild Germany itself as a civilized country, and explained the ethical power of the duties that he demanded of himself and the world. It included his relation to the Silent Expressionists, Lotte Eisner the film critic, his strangely totemic walk from Munich to Paris in the dead of winter, and thematic analyses of Fata Morgana, Heart of Glass, and Nosferatu 1978. So thank you, Corrine, for the inspiration, even if it was utterly unintentional on both our parts.
Mathilde is a very short scholar of ancient Greek philosophy doing a PhD at UQAM, who presented a fascinating essay about the mythologization of Aristotle’s library in ancient Greece, examining different ways of relating to books as physical and mythical objects because of the different ways that books were produced and passed on in that civilization. If I can steal another Herzog phrase, it was about the ecstatic truth of Aristotle’s library rather than the actual facts of the case, which didn’t really matter to her point. The idea is to see what kind of philosophical insights we can take from the historical narrative – the facts of that narrative are only incidental, and should serve the philosophy without constraining it through undue fidelity to facts.
Liz, Corrine, Christina, and Mathilde were the other graduate students at the conference who I spent the most time with, and I'm very glad I did. That's all.
Sunday, November 14, 2010
Similarity Is Not a Sign of Intention
After performing a reasonably successful public reading of my short fiction, some misconceptions about my work have arisen in a manner typical of the Ontarian chattering classes. To set the record straight, I’ve spoken with literature and film critic Albert Nikos of Fictional Magazine.
Nikos: I’ll cut right to the chase, Riggio. Your story, “Mobilization of the Oppressed,” contained a central character who was very obviously satirizing your professors.
Riggio: That was most certainly not the case.
Nikos: Come on! The professor in that story runs his class like a dictator, utterly convinced of the power of his own ego. He’s totally condescending to all of his students, especially the women. He’s completely ignorant of any critique of a philosophical idea that isn’t strictly about the argument and its logical structure. He’s a pure ivory tower academic of the worst kind. Now who is he!?
Riggio: Professor Winchester is Professor Winchester. It’s as simple as that. I didn’t even think of a first name for him. He didn’t need one for the story, so I didn’t give him one.
Nikos: Well, where did the name Winchester come from? Surely it’s a reference to the British background of some of your professors at the McMaster philosophy department?
Riggio: He’s named after Dr Charles Emerson Winchester III, whom David Ogden Stiers played on M*A*S*H. Actually, some of the folks in the audience thought I was making fun of the philosophy of law chair in the department, because the character talked about legal theory, and I read his lines with a deep voice. But I wasn’t making fun of any individual person. I was making fun of an attitude, showing the limitations of a particular way of thinking.
Nikos: And who among your professors demonstrates this way of thinking?
Riggio: You’re not going to catch me so easily, Albert. Everyone does, at some point in their thoughts. It doesn’t matter whether you’re a professor, graduate student, undergrad, secretary, janitor, or whatever. Anyone on a career path where they can say they’re more knowledgeable than others can come to think they’re better than others. If we don’t check ourselves, or the outside world doesn’t check us, we can all become as arrogant and dismissive as Dr Winchester. It’s the mindset of anyone who’s come to believe their own hype, someone who believes that they’re always right, and obviously right. So anyone who disagrees with them is either just plain wrong, or else they’re talking from a perspective that doesn’t count.
Nikos: What do you mean by that? A perspective that doesn’t count.
Riggio: Well, look at the character of Roshan in “Mobilization of the Oppressed.” She’s actually the central character, by the way, not Winchester.
Nikos: But Winchester has the most lines.
Riggio: But Roshan is the catalyst of the action, the knife that punctures his balloon of hot air.
Nikos: Or in this case, puts a bullet in it.
Riggio: Let’s not spoil the entire story.
Nikos: Sometimes, I can’t resist. It was just so delightfully weird.
Riggio: See, that’s the heart of the conflict right there. Roshan is delightfully weird, an event that shatters the illusions of perfect rationality and security. “Mobilization of the Oppressed” is just as much a critique of philosophy as it is a skewering of that kind of arrogant personality. Roshan is a contrarian, someone who isn’t comfortable kowtowing to authority because she’s seen legitimate authority at its most oppressive and violent. She’s left the oppression of Tehran, which was responsible for the death of her father, as I insinuate in that line where I describe him as having been disappeared.
Nikos: That was a clever touch.
Riggio: Thank you. But philosophy is a tradition that worships reason. That’s why Winchester always refers back to Plato: we still think of ourselves, too often in my opinion, as footnotes to Plato. We’re good democrats and liberals today, even the conservatives. So we always disagree with Plato’s Republic when he writes about a totalitarian dictatorship of the wise, Philosopher-Kings as society’s great planners. That’s because we’re uncomfortable with authoritarian political systems. That’s one way in which Roshan’s experience is put into tension. But philosophy as a tradition still believes in reason as the paramount virtue. We always ask people to be reasonable, we believe that smart people should be in charge, that the best political action comes when the people with the best knowledge are in charge. What Roshan does is problematize knowledge, call its value into question when she talks about political corruption and abuse of the vulnerable in society. You must have great knowledge of a political and legal system in order to manipulate it to your advantage. It’s that dark side of knowledge that Winchester doesn’t see, even as he’s an agent of it.
Nikos: You’re talking about the way he always talks down to Roshan, how that’s a kind of abuse of his power as a professor to control debate. He cuts her off, puts words in her mouth, even calls her questions nonsense.
Riggio: And it’s not just Winchester! She’s the only girl in that seminar, and I included lines insinuating that the male students in the class are always staring at her, and never sticking up for her or helping defend her against Winchester’s abuse. That’s the more insidious kind of oppression that we have in the West. In Tehran, if you’re undesirable, they come to your house and shoot you. It’s very honest. In Chicago, where the story takes place, or New York, or Toronto, or Dallas, or anywhere, undesirables are slowly worn down. People who are different think they have space to live as they want, think they’re respected and accepted by their neighbours, who are all fellow democrats. But they’re wrong, because when they need help, their pleasant and smiling neighbours will often let them drown. Our democratic habits let us convince ourselves that we care about people who are different from us; they force us to hide our disgust at different ethnicities, different genders, different languages, different social classes. We even hide it from ourselves. But no one sticks their neck out for the town freaks. The really singular individuals will always be isolated, on their own. Roshan is different in so many ways. She rebels against her own culture’s traditions for how a woman should dress and behave, and she rebels against her professor’s condescension, and she rebels against the indifference or the objectifying stares of her classmates. And her rebellion isn’t pure reactivity, pure resentment. She doesn’t rebel against Iranian standards of female dress by slutting it up. She dresses in dark colours, tight jeans, sweaters that show off her shape, but none of her skin. She’s creating her own definitions of modesty and confidence, without fully surrendering to the icons that are her reference points: the modest woman, the American feminist.
Nikos: Did you think of all this as you were planning the story, or did it occur to you after you wrote it? Because most fiction that’s written with those kinds of ideas in mind usually stinks.
Riggio: It does usually stink, because you end up with ciphers for philosophical concepts rather than singular characters. And you end up with a book that’s more like a disguised version of Hegel’s Logic, with characters interacting in ways perfectly determined by their concepts.
Nikos: Now you’re talking like a philosophy doctoral student. I’m going to have to ask you to stop.
Riggio: Yeah, a person walked right on by my reading when my friend told him that I was a PhD student in philosophy. I told her not to mention that again, if she stumps for me. She should say something pretentious about Borges instead.
Nikos: This will be my last question, but where did the story begin? What was the thought?
Riggio: My thought was my indignation about Raz’s idea in philosophy of law, that we defer to legal authorities the same way we defer to experts. Winchester articulates what I think is the natural evolution of that point of view to its extreme. Again, the tradition of philosophy worships reason, makes it into a moral virtue. Socrates said that knowledge makes someone morally good, and that’s just laughable. So I had this idea, that the account of legal authority as expert authority is secretly very fascist, very oppressive. But I also had a suspicion that I couldn’t argue against it as a philosophical essay. I wasn’t expert enough on actual theories of legal authority. And that kind of felt like I was playing into my opponent’s hand. So I decided to demonstrate the blind spots of pure reason, rather than arguing reasonably for them. Roshan was that demonstration.
Nikos: Will we see her again?
Riggio: Maybe one day. I hope so. I think there’s a lot more to her than comes across in this one story. There’s a novella I had an idea for a while ago, where I think she could be very useful. But I have no problem bringing someone back. I brought you back, didn’t I?
Nikos: No fourth wall tonight, sir. Thank you very much for sitting down.
Riggio: Thank you for having me.
Nikos: I’ll cut right to the chase, Riggio. Your story, “Mobilization of the Oppressed,” featured a central character who was very obviously a satire of your professors.
Riggio: That was most certainly not the case.
Nikos: Come on! The professor in that story runs his class like a dictator, utterly convinced of the power of his own ego. He’s totally condescending to all of his students, especially the women. He’s completely ignorant of any critique of a philosophical idea that isn’t strictly about the argument and its logical structure. He’s a pure ivory tower academic of the worst kind. Now who is he!?
Riggio: Professor Winchester is Professor Winchester. It’s as simple as that. I didn’t even think of a first name for him. He didn’t need one for the story, so I didn’t give him one.
Nikos: Well, where did the name Winchester come from? Surely it’s a reference to the British background of some of your professors at the McMaster philosophy department?
Riggio: He’s named after Dr Charles Emerson Winchester III, whom David Ogden Stiers played on M*A*S*H. Actually, some of the folks in the audience thought I was making fun of the philosophy of law chair in the department, because the character talked about legal theory, and I read his lines with a deep voice. But I wasn’t making fun of any individual person. I was making fun of an attitude, showing the limitations of a particular way of thinking.
Nikos: And who among your professors demonstrates this way of thinking?
Riggio: You’re not going to catch me so easily, Albert. Everyone does, at some point in their thoughts. It doesn’t matter whether you’re a professor, graduate student, undergrad, secretary, janitor, or whatever. Anyone whose position lets them claim to be more knowledgeable than others can come to think they’re better than others. If we don’t check ourselves, or the outside world doesn’t check us, we can all become as arrogant and dismissive as Dr Winchester. It’s the mindset of anyone who’s come to believe their own hype, someone who believes that they’re always right, and obviously right. So anyone who disagrees with them is either just plain wrong, or else they’re talking from a perspective that doesn’t count.
Nikos: What do you mean by that? A perspective that doesn’t count.
Riggio: Well, look at the character of Roshan in “Mobilization of the Oppressed.” She’s actually the central character, by the way, not Winchester.
Nikos: But Winchester has the most lines.
Riggio: But Roshan is the catalyst of the action, the knife that punctures his balloon of hot air.
Nikos: Or in this case, puts a bullet in it.
Riggio: Let’s not spoil the entire story.
Nikos: Sometimes, I can’t resist. It was just so delightfully weird.
Riggio: See, that’s the heart of the conflict right there. Roshan is delightfully weird, an event that shatters the illusions of perfect rationality and security. “Mobilization of the Oppressed” is just as much a critique of philosophy as it is a skewering of that kind of arrogant personality. Roshan is a contrarian, someone who isn’t comfortable kowtowing to authority because she’s seen legitimate authority at its most oppressive and violent. She’s left the oppression of Tehran, which was responsible for the death of her father, as I insinuate in that line where I describe him as having been disappeared.
Nikos: That was a clever touch.
Riggio: Thank you. But philosophy is a tradition that worships reason. That’s why Winchester always refers back to Plato, because we still think of ourselves, still too often in my opinion, as footnotes to Plato. We’re good democrats and liberals today, even the conservatives. So we always disagree with Plato’s Republic when he writes about a totalitarian dictatorship of the wise, Philosopher-Kings as society’s great planners. That’s because we’re uncomfortable with authoritarian political systems. That’s one way in which Roshan’s experience is put into tension. But philosophy as a tradition still believes in reason as the paramount virtue. We always ask people to be reasonable; we believe that smart people should be in charge, that putting the people with the best knowledge in charge results in the best political action. What Roshan does is problematize knowledge, call its value into question when she talks about political corruption and abuse of the vulnerable in society. You must have great knowledge of a political and legal system in order to manipulate it to your advantage. It’s that dark side of knowledge that Winchester doesn’t see, even as he’s an agent of it.
Nikos: You’re talking about the way he always talks down to Roshan, how that’s a kind of abuse of his power as a professor to control debate. He cuts her off, puts words in her mouth, even calls her questions nonsense.
Riggio: And it’s not just Winchester! She’s the only girl in that seminar, and I included lines insinuating that the male students in the class are always staring at her, and never sticking up for her or helping defend her against Winchester’s abuse. That’s the more insidious kind of oppression that we have in the West. In Tehran, if you’re undesirable, they come to your house and shoot you. It’s very honest. In Chicago, where the story takes place, or New York, or Toronto, or Dallas, or anywhere, undesirables are slowly worn down. People who are different think they have space to live as they want, think they’re respected and accepted by their neighbours, who are all fellow democrats. But they're wrong, because when they need help, their pleasant and smiling neighbours will often let them drown. Our democratic habits let us convince ourselves that we care about people who are different from us, they force us to hide our disgust at different ethnicities, different genders, different languages, different social classes. We even hide it from ourselves. But no one sticks their neck out for the town freaks. The really singular individuals will always be isolated, on their own. Roshan is different in so many ways. She rebels against her own culture’s traditions for how a woman should dress and behave, and she rebels against her professor’s condescension, and she rebels against the indifference or the objectivizing stares of her classmates. And her rebellion isn’t pure reactivity, pure resentment. She doesn’t rebel against Iranian standards of female dress by slutting it up. She dresses in dark colours, tight jeans, sweaters that show off her shape, but none of her skin. She’s creating her own definitions of modesty and confidence, without fully surrendering to the icons that are her reference points: the modest woman, the American feminist.
Nikos: Did you think of all this as you were planning the story, or did it occur to you after you wrote it? Because most fiction that’s written with those kinds of ideas in mind usually stinks.
Riggio: It does usually stink, because you end up with ciphers for philosophical concepts rather than singular characters. And you end up with a book that’s more like a disguised version of Hegel’s Logic, with characters interacting in ways perfectly determined by their concepts.
Nikos: Now you’re talking like a philosophy doctoral student. I’m going to have to ask you to stop.
Riggio: Yeah, a person walked right on by my reading when my friend told him that I was a PhD student in philosophy. I told her not to mention that again, if she stumps for me. She should say something pretentious about Borges instead.
Nikos: This will be my last question, but where did the story begin? What was the thought?
Riggio: My thought was my indignation about Raz’s idea in philosophy of law, that we defer to legal authorities the same way we defer to experts. Winchester articulates what I think is the natural evolution of that point of view to its extreme. Again, the tradition of philosophy worships reason, makes it into a moral virtue. Socrates said that knowledge makes someone morally good, and that’s just laughable. So I had this idea, that the account of legal authority as expert authority is secretly very fascist, very oppressive. But I also had a suspicion that I couldn’t argue against it as a philosophical essay. I wasn’t expert enough on actual theories of legal authority. And that kind of felt like I was playing into my opponent’s hand. So I decided to demonstrate the blind spots of pure reason, rather than arguing reasonably for them. Roshan was that demonstration.
Nikos: Will we see her again?
Riggio: Maybe one day. I hope so. I think there’s a lot more to her than comes across in this one story. There’s a novella I had an idea for a while ago, where I think she could be very useful. But I have no problem bringing someone back. I brought you back, didn’t I?
Nikos: No fourth wall tonight, sir. Thank you very much for sitting down.
Riggio: Thank you for having me.
Labels:
Albert Nikos,
Fictional Magazine,
Philosophy,
Writing
Monday, October 25, 2010
Have We No Right to Our Sort of Protest Songs?
It took me three weeks instead of one, but I’ve assembled my ideas about the problems of the affluent white person’s gesture of protest. It’s going to sound very cynical, but I actually consider my perspective on this quite optimistic, in a strange sort of way. All will, I hope, become clear by the end of the analysis.
So my loyal readers (or anyone who scrolls down to October 5) will know that I first began this stream of blogging with a saddening critique of an internet-based breast cancer awareness meme. People could put a joke in their statuses, mildly amusing at best, that would raise awareness of breast cancer among those who have already had this very opaque gesture explained to them. Here is the first, and in my view, the most superficial problem with the protest gestures of affluent white people. Quite a few of the things we get angry about – global poverty, disasters, disease, religious extremism, wars – are easily understood. And when people hang out in a public square holding signs that describe how much they hate war and cluster bombs, that’s easily understood. I look at a person with a sign that reads, “Stop the War in Iraq!” and I assume correctly that they very much want to stop that war in Iraq I’ve heard so much about. This is an effective protest because people, while they may not agree with you, will know what you’re talking about.
But some gestures of protest are very symbolic, and difficult to understand at first glance. In my breast cancer example from earlier this month, I found it very hard to understand. Cancer is a terrible disease, and we should raise money for research to cure cancers cheaply and effectively, and encourage people to self-examine and be mindful of their bodies, in case they develop tumours. A great way to spread awareness of this among your facebook friends is to post a status update like, “It’s Breast Cancer Awareness Month!” and embed a link to a reputable research charity or a web guide to self-exams. You could sponsor someone in a fundraising marathon, or some other kind of pledge drive. This would be an easily understood way of voicing your opinion and productively aiding the cause through the infrastructures that exist.
A terrible way to achieve a goal like this is to make your status an ambiguous joke about sex, writing “Athena Peterson likes it on the kitchen counter!” Really, you’re talking about ‘where you lay your purse when you come home,’ a meaning spelled out only in a long, elaborately detailed private message from the friend who’s been spreading this 21st century chain letter, a message explaining the symbolism that connects women’s sexual exploration, the attention that a kinky-sounding status garners, and the eroticization of the female breast to genuine concern about breast cancer. None of this deep and complicated meaning was at all present in the initial joke, which is the only part of this gesture that 95% of your friends will see! To them, your cause is lost in confusion and opaque symbolism.
I think this kind of protest is dreamt up by well-meaning people who simply have too much time on their hands, so they can ponder oblique connections between gestures, jokes, and political issues, then assemble a convincing pitch for their protest idea. Patton Oswalt has some wonderful jokes about this, his old routines about why hippies annoy him so much. But this kind of protest that defeats itself through its own opacity is the symptom of a much deeper problem with being a socially progressive affluent white person. Most of us in protest movements are affluent enough that we don’t have to work for a living. We do this because we’re bored.
Now, I don’t want to disparage the good intentions of many people, and I certainly don’t want to describe all progressive activists in my country as ivory tower academic types and trust fund kids who haven’t even seen poor people before. Most of the people I’ve known in activist communities have been on student loans, have staggering debt, and worked one or two wretched part-time jobs (fast food, gas stations, tour guides), to put themselves through school. But they could go to school, and university. They’re functionally literate. They have opportunities. They lived in decent neighbourhoods where you couldn’t just walk to the corner one block down to buy coke, meth, oxy, and heroin. They weren’t physically abused or molested. Their families usually had enough money to feed everyone and make the mortgage payments.
The people who actually understand from experience what it means to be poor, are poor, and they stay poor. Not by choice, but because poor people have to stay poor if capitalism is going to work. And communism only works for four or five decades before collapsing from the absurd weight of a bureaucracy big enough to plan (with minimal effectiveness, if that) an economy for an entire nation. We middle class liberals have the time to protest because we don’t have to worry much where our next meal is coming from. But because we aren’t poor, we can easily lose touch with the people we’re trying to help.
This is why moronically opaque, over-intellectual protest events happen: we have enough leisure time to come up with them, but actual poor people are too busy trying to survive to care. An affluent white person lives at a disconnect that the power of conscience alone can’t always bridge. That disconnect makes such a person a cartoon, and it makes the objects of their charity regard them with contempt and resentment. A poor person can legitimately say to the affluent white person who wants to help them, “You are an ignorant fool who understands nothing of my life. My life is hard and I work hard. I don’t need your fucking pity.”
Now for the most profound part of my analysis of the affluent white conscience: expand this scenario to the entire globe. Now colonialism is part of the picture, a massive system of economic exploitation that spread over the entire Earth and lasted centuries. We affluent white people exist because of the enormous effort our ancestors put into creating the massively unequal share of wealth among humanity today. If you think the resentment of a Canadian poor person toward a rich person who doesn’t understand their life can be powerful, imagine how someone who lives on the equivalent of a few Canadian coins each day would feel.
Even if affluent Western governments actually donated all the money in their foreign aid budgets to actual foreign aid, it would still be an utter pittance. We live as we do today because for hundreds of years, our ancestors destroyed the economies of entire continents for their own gain. Today, we feel guilty about it. So we pity the poor of the world, and send some pocket change to them so they can buy an extra chicken and we can feel better. But it’s nothing more than our pity, which demeans and dehumanizes the people who are pitied. If an affluent Western person thinks they can restore the world to peace, harmony, and brotherhood with a few gestures of contrition about our society having reduced their societies to mud, they’re in for a rough surprise.
The global economy is an enormous crime against humanity. And I’m not even talking about the ecological destruction. That’s another post, and my PhD thesis.
There’s a beautiful and terrifying film that expresses the emptiness of the affluent’s contrition very succinctly. It’s called Cobra Verde, and it’s about the last gasp of the trans-Atlantic slave trade in the 1880s. There’s a scene, included in the trailer, where Klaus Kinski, playing Cobra Verde, the head of the slave trading port, takes a visitor to choose a slave woman to screw that evening. The women live in cramped quarters, in an underground hole. The chosen woman climbs out on a ladder. The visitor asks who these women are, and Cobra Verde responds, with clear understanding of everything he’s done, “Our future murderers.”
Kinski plays a slave trader who understands exactly the horrifying criminal nature of the slave trade. He does it anyway because he is a criminal. He doesn’t pity his slaves either. He knows that one day the slave trade will end, and those who are oppressed now will take a place of dominance. He doesn’t call the slave woman an avenger, someone who will bring justice. He calls her a murderer. In this way, he understands that the only way to escape a system built on terror and injustice is not charity or contrition, but destruction.
But that’s not how the movie ends. The movie ends with a song by an African choir of young girls, singing in Akan, dancing in their own style, wearing their own clothes, and smiling. It’s an act of creation and celebration of life. The resentment engendered by pity, the confusion of a desperate conscience, the never-ending guilt of restitution, the ridiculous charity of affluent boredom; these are all forgotten. The scales of justice are thrown away, and we are left with dancing and laughter.
Labels:
Cobra Verde,
Economics,
Ethics,
Patton Oswalt,
Philosophy,
Politics,
Werner Herzog
Sunday, October 17, 2010
Cover Without a Mother
I had a very curious idea about songwriting that I might eventually turn into some kind of academic article, though at this point, I have no idea how to do that. I may eventually ask my cousin, who is now a tenure-track professor of jazz performance at the University of Victoria’s music school. The story of how I came to this idea is just as interesting as the idea itself, or at least funnier.
Last weekend, I went with several close friends to Oktoberfest in Kitchener, the largest Oktoberfest outside Germany. I was told to expect utter ridiculousness, and I was not disappointed. After an hour of far too rapid pre-drinking, we took a short taxi ride to the auditorium where our Oktoberfest tickets were. Yes, it was an auditorium, with the hockey ice removed and a series of long red tables criss-crossing the cement floor. The auditorium was ringed with drink ticket booths, bars selling mediocre mass-produced beer (Molson Canadian was the lesser of the two evils), and pretzel and sausage stands. I knew my stomach was unable to handle giant sausages at that point in the night, but I spent $3.50 on the best pretzel I have ever eaten in my life.
At the centre of all this ridiculous insanity was a stage where, about an hour after we arrived, a band started to play. The band was composed of middle aged men, some of whom wore the hairstyles of 1980s hair metal bands whose hairspray was confiscated at customs as deadly weapons. The first song they played was Bon Jovi’s “You Give Love a Bad Name,” and as they worked their way through a variety of songs that night, I realized that they were all radio rock from the 1980s, and with few exceptions they were all Bon Jovi songs. I was at an Oktoberfest in Ontario where the best beer was Molson Canadian and the headlining act was a Bon Jovi cover band. I think I spent a total of almost two hours laughing hysterically.
I’ll forgo the morning after and the car ride back, during which my travelling companions rediscovered their inner Robert Downey Jr circa 1997. The philosophical insight came to me a few days later, as I walked home after a productive evening of thinking. A mediocre song, like most of the songs in the Bon Jovi catalogue, almost always sounds even worse when a cover band plays it. But most Beatles songs still sound excellent when a cover band plays them. As long as the players are competent with their instruments and can sing reasonably well, a genuinely great song will always be covered well. A song that sounds that good has a greater likelihood of being played by other people, because the quality doesn’t typically degrade the way it does in Bon Jovi or KISS covers.
Before the invention and mass production of recording technology, almost every song anyone ever heard was a cover song: someone playing a song that somebody else wrote, sometimes decades or even centuries ago. Today, when someone plays a cover song, we think of it as the player’s version of the writer’s song. And we can refer back to a definitive version of that song to compare the writer’s and the player’s: the album track.
But there isn’t really anything that essentially distinguishes the album track from a live performance or a cover version. The instrumentation may change, the quality of play may be different, but every iteration of that song is the same song. We’ve come to fetishize the recording to the point that the recorded version is often understood as the essence of the song. The album version is the theme, and all live performances and covers are variations. But in the 17th century, someone playing a song from the 16th century had no idea how it might originally have sounded. Without some definitive recorded version, that musician had only the basic structure of the song and his own skill to play it. There was no battle between some new version of the song and its pure original.
It made me consider the idea that this understanding of the record as the essential version of the song is a kind of mistake. The musicians can be much more meticulous about the creation of a song in the studio, add effects or instrumentations that are only possible in the studio, and then rearrange everything in order to play the song live. Sometimes, the live version will be completely different from the recorded version.
Coroner ran into this problem a lot, because they created songs with gigantic numbers of guitar tracks all integrated with incredible complexity. But they only ever played live as a three-piece: the guitarist could never, with his single instrument, capture the same power and complexity as a studio version with twenty or more tracks. Their live performances lacked the necessary power that the studio could give them. Meanwhile, KISS only really broke through with their live album: the studio was too clinical an environment to produce the spontaneous, party-like energy of their live performances. KISS thrived on that energy of the concert, and they could never bring that energy to the meticulous construction of the studio.
The studio is just one set of ways of producing the song. It’s a very different set of tools from live performance, so there are very different things you can do. But the studio version is just one more iteration of the song itself, one more variation without a theme. It’s just that the studio version, being the one on record, is most easily referred back to. It’s the version of the song that most people will hear. They can play it at their leisure. They’ll hear it first, they’ll hear it most often, and they’ll probably hear it exclusively. So they think of this version they hear most often, the one most likely to become ubiquitous in their experience of the song, as the essence of that song, and judge all other versions in reference to it. But the studio is one way among many of organizing musicians and instruments.
The song itself is the organizing principle of all its performances, whether that performance happens in a studio with the instruments recorded weeks apart and assembled on a mixing board, in the middle of an arena stage, or on an acoustic guitar half-drunk at a party.
Monday, September 20, 2010
Freedom Is Doing What You Never Even Knew You Could Have Done
I had one of those minor philosophical epiphanies that I probably won’t do anything with professionally, at least not directly. In order to produce publishable (in academic journals) material around this epiphany, I’d have to read at least the last twenty years of evolutionary philosophy and psychology, and the freedom vs determinism debate going back at least twice as far, and then well into the seventeenth century. I really don’t have time for an extra project that long, so I’m going to blog about it. There’s a few steps in this, so it’ll take a few paragraphs to spell out.
Why are the opponents of evolution so frightened by the prospect? It has to do with what exactly that prospect is. The more theatrical anti-evolutionists have made a rhetorical routine out of jokes about an orangutan being their uncle or aunt. But the weirdness of existing on a continuum with other species is only evoked to disgust people. Conceptually, a deeper meaning underlies it.
The way humans have understood themselves morally is as free agents. Human morality and the complex societies that produce these moral systems are typically seen as exceptions to the natural order. We humans are artificial. That which is natural is governed by deterministic linear causality, the most advanced form of which is instinct. But humans are not purely instinctual: we are moral. Morality is an exception to instinct, animal behaviour not subject to linear causality.
But evolution is a natural process. If humans are the product of a natural process, then our moral systems do not constitute an exception to the deterministic natural order. This produces a contradiction in which morality loses, at least for those people who think of the deterministic natural order in a particular (but very popular) way. Actions which are not freely chosen cannot require the moral responsibility of their actors. If morality is a natural process, then it is a complex evolution of instinct, an entirely determined process. So we have moral concepts by which we attribute responsibility, but no actual responsibility, because we are not an exception to the deterministic natural order.
The greatest fear underlying opposition to evolution is the fear that there is no genuine moral responsibility.
I laid out this chain of reasoning, but I don’t believe in it, because I think several of the premises on which it depends are not actually true of the universe. It hinges on a metaphysical point. I used the term deterministic linear causality above, and I did that on purpose. Causality in general is an underdetermined process: an event can have a huge number of conditions and causes, and very complex relations among them. The image of one snooker ball hitting another snooker ball is an example that oversimplifies an amazingly complex universe. Determinism is similarly underdetermined: an event can have a huge number of different effects, can change a system in a wide variety of highly complex ways, and each of these effects interferes with the others, quite apart from the event that was their genesis.
Here’s where the word ‘linear’ comes in. Precisely because of these underdeterminations in how events actually interact and cause each other, very few relations of causality are actually linear, like the snooker balls. The world we live in is enormously complex, and even though the mathematics that describe the complex systems making up our world are deterministic, there are enormous possibilities within the deterministic development of a system. One event does not cause a single set of effects. One event sets off an enormous chain of interrelated events with millions of possibilities that its constituent bodies can choose from. That choice among possibilities is especially open to creatures with highly complex perceptual and reasoning skills who can analyze situations with an eye towards all that can be, not simply what there is.
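To make that point a little more concrete, here’s a minimal sketch of my own, just a toy illustration rather than part of the argument: the logistic map is about the simplest deterministic rule there is, yet two starting points that differ by one part in ten billion soon follow completely different trajectories. The rule, the starting values, and the number of steps are all arbitrary choices for the demonstration.

```python
# A deterministic rule: x -> r * x * (1 - x), the standard logistic map.
# With r = 4 it is chaotic: a tiny difference in the starting point
# grows until the two trajectories have nothing to do with each other.

def logistic(x, r=4.0):
    return r * x * (1.0 - x)

a, b = 0.2, 0.2 + 1e-10  # nearly identical initial conditions
for step in range(1, 51):
    a, b = logistic(a), logistic(b)
    if step % 10 == 0:
        print(f"step {step:2d}: a = {a:.6f}, b = {b:.6f}, gap = {abs(a - b):.6f}")
```

Every step is fully determined, but knowing the rule alone tells you almost nothing about where a particular trajectory ends up; that gap between determination and prediction is where the room for possibility lives.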
Instinct is a reactive response to stimulus, a pattern that an organism follows according to what is there. The ability to conceive of possibility, either through colloquial reasoning, or highly complex phase space chaos mathematics, is a step far beyond instinct. This way of thinking about the metaphysics of the universe and causality takes us out of the trap by which moral responsibility disappears. Freedom, the capacity to understand and act on what can be rather than simply on what there is, is an evolved trait.
The natural order itself constitutes creatures that are more free than any creature before, so free that they can imagine themselves to be unnatural, and believe their dreams.
Saturday, September 4, 2010
How to Read Philosophy, and Be a Philosopher
In the course of preparing a presentation I’ve been invited to give at a conference at University of St Gallen in Switzerland later this Fall, an intriguing idea came to me about the history of philosophy. It’s too complex to fit into the space I have for the presentation, but it’s promising enough that I think I can work with it for a while. It’s also connected to a conversation I had Friday evening about how philosophy is taught at the introductory and undergraduate level.
My friend has begun to find it ridiculous that we are teaching undergraduates philosophy by having them argue against or otherwise try to attack the works and ideas of the giants of our fields. If a philosophical work or corpus has survived with a prominent role in the history of ideas for hundreds or thousands of years, it seems absurd that we would teach people by demanding that they refute Aristotle at age 19. It trivializes a work of monumental scope and power. It demeans the concepts that have revolutionized thinking over the millennia. I don’t recall being taught that way.
My first philosophy instructor was (and still is) an old Cambridge man, who waxed to me this summer about the old way of teaching philosophy, where you truly know your history, can genuinely understand the thoughts and social milieux that shaped the thinkers you’re studying. You can’t start refuting all over the house until you know why every word is just the way it is. This is philosophy as serious scholarship, the meticulous investigation into a way of life that in most cases no longer exists, so that one can understand most deeply how a great piece of work was produced, and how it was meant to affect its own time, its own readers.
However, there is a different way to read philosophy, one I consider just as legitimate as serious scholarship: easier in some respects, but far more difficult in others. Werner Herzog talks about how the meaning of his films, particularly Aguirre, the Wrath of God, changes depending on who is watching them. The work is no less great, even though the people who receive it transform its meaning significantly and radically. In fact, it’s greater because it can have all these different meanings in different contexts of culture and history. Philosophy has such a long tradition that its great works have undergone similar transformations. It is easier than scholarship because it doesn’t require so much historical and contextual work. But inspirational readings are more difficult because the work stands out as even more alien when it is transplanted into a new context.
It’s difficult to read philosophy well, or indeed any great work, when you are part of the community. Every filmmaker, the Hollywood hacks, commercial directors, no-budget indie directors with a stolen digital camera, is in the same community as Kubrick, Murnau, and Herzog. Writers are in the same community as Eliot, Joyce, and Cervantes. Philosophers are in the same community as Plato, Russell, Deleuze, and Kant. The danger of the trivializing attitude of refutation being your only engagement with a work is that you make a mockery of the giants of your field. The scholarly attitude becomes dangerous when it becomes worship, and you sterilize your own creativity in a terrible inferiority complex.
The inspirational attitude is to pick up a work and a philosopher as if you are talking to an old, strange friend. This friend will shock and terrify you, and also mystify you completely. But if you can engage your alien friend in a respectful conversation, a productive dialogue, then you can become a great figure yourself.
Tuesday, August 17, 2010
The ‘Sin’ of Omission, History, and Philosophy
I picked up this afternoon, as a summer present to myself, a giant collection of fiction by Jorge Luis Borges, who in the past year has become one of my favourite authors, and a major influence on how I approach my shorter pieces of fiction. Thousands of ideas traversing all disciplines of knowledge animate his work, and his work inspires just as many ideas in his readers. Meditating on his work today has distilled in me the reasons for one of the only concrete, unequivocal stands I take in philosophy and art.
I have occasionally come across a philosopher who believes that the discipline’s goal is to discover ultimate universal truths through argument, and that these truths will be simple, clear, and comprehensive. I’ll omit names of those I’ve met personally, and mention one illustrative example that I’ve only read, Scott Shapiro. It’s an admirable goal, the admission and expectation that one day, philosophy will have completed its task, and in so doing, will be the greatest of all possible sciences. It will have explained all of existence in a short series of simple phrases.
It’s a beautiful dream, but an arrogant, hubristic, and ignorant dream. Consider the nature of expression, not in terms of what is meant or what is said or what is understood, but in terms of what is not said. I say a single word, for example, ‘symbol.’ Most of the time, we concentrate on that spoken word itself, and what it could mean, how we can understand it.
But when I say one word, I choose that one over all the thousands of words that I know in the languages I understand. So much of what is possible is omitted when I act. All the words that I could have said are thrown away and forgotten when I choose that one word. This enormous omission of what could have been, of possibility, of capacity, happens with each utterance of every person.
When I am silent, that is actually when I am closest to articulating those dreamy phrases that encompass all the universe, because I omit the least. In not acting, I certainly don’t omit, but I don’t say anything either. Perhaps that’s what Wittgenstein meant when he ended the Tractatus with “Whereof one cannot speak, thereof one must be silent.” There are some possibilities, some capacities, that we should not ignore and discard because of the occasional practical need to say stuff. This, I think, is what Wittgenstein tried to say.
I don’t have Wittgenstein’s mystical leanings, but I think this is important for philosophers to consider when trying to articulate their mission statement. Every word said, every idea developed, requires the omission of all the ideas and words within our capacities apart from that one chosen. Articulating what is requires the omission of what could have been. If philosophy is to take capacity seriously, which I believe it must, then we must consider the radical finitude of all sensible statements. What is said cuts away all that could have been said. Can we really consider all that is said to be a complete picture of reality when so much is invariably omitted?
Labels: Art, Jorge Luis Borges, Language, Literature, Philosophy
Wednesday, July 28, 2010
A Hilariously Simple Idea About Gilles Deleuze
I was reading some philosophy this evening, as is my wont and my job, when I had one of those moments when several disparate threads of philosophical reading and reflection came together into what I considered a pretty wild revelation. For the past two winters, my PhD supervisor has been teaching a graduate-level seminar on the thought of Gilles Deleuze, a notoriously difficult French writer of philosophy. One of the concepts that has been most puzzling in that seminar, and in Deleuze’s writing generally, is the virtual.
I’m not going to go into what we speculated about the nature of the virtual, because I don’t have time this evening to write out all that speculation. We talked about Henri Bergson’s metaphysics, the mathematics of differential equations represented in phase space, the nature of possibility. We tried to figure out how something could be real but not actual. I could go on, but I won’t. I have lunch plans in fourteen hours, and I may run out of time. Also, I want to qualify this post with the fact that I’m not yet familiar enough with the French language secondary material on Deleuze to know whether this idea has been articulated there already. But I think I have figured out exactly what this concept is.
The virtual existence of any body is a complete set of all that a body can do. Here’s how I figured it out:
1) A body’s mathematical representation as a phase space is a representation on an n-dimensional map of every possible state of that body. We talked in the seminar about the virtual being ‘something like, but not quite’ phase space.
2) A possible state of a body is something that body can do. I can run, eat, sing (poorly), impersonate the voice of internationally acclaimed film director Werner Herzog. But I am not doing any of that right now. None of these actions are actually being done as I write this, but I can do them.
3) Understand the possible states of a body as contained within the structure of that body itself. This could sound weird, but all you’re doing is considering the capacities of a body to be part of that body. A capacity is a real part of a body, even while that body is not acting with that capacity at the moment.
4) A capacity not enacted right now is a real part of a body, but not actual because it isn’t enacted. A body can exercise this capacity, but doesn’t all the time, or maybe even ever. I can develop my voice into a deeply rich baritone and embark on an eccentric music career. But I won’t. That’s a real capacity I have, so it is really part of the structure of my body. But it will never be actualized.
All that a body can do (keep your eyes open, Spinoza fans; Deleuze was one of you too) is the virtual aspect of that body. That’s Deleuze’s language for discussing a body’s capacities, what a structure is capable of, even if that body never develops that capacity. The capacity is always part of that body, even if it is never actualized. Deleuze calls that virtual.
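To make the distinction a little more concrete, here is a toy sketch in Python. It is entirely my own illustration, not anything Deleuze or the seminar provided, and the pendulum, the names, and the numbers are invented for the example: the pendulum’s phase space stands in for the set of every state the body can occupy (its virtual aspect), set against the single state it actually occupies right now.

import math

def pendulum_acceleration(theta, g=9.81, length=1.0):
    # Angular acceleration of an ideal pendulum at angle theta (radians):
    # one rule that governs every state the body could ever be in.
    return -(g / length) * math.sin(theta)

# The 'virtual': a grid over every (angle, angular velocity) pair the body
# can occupy. None of these states is actual right now, but each is a real
# capacity of the pendulum's structure.
virtual_states = [(i * 0.1, j * 0.1)
                  for i in range(-31, 32)   # angles from roughly -pi to pi
                  for j in range(-31, 32)]  # angular velocities

# The 'actual': the one state the pendulum occupies at this instant.
actual_state = (0.5, 0.0)  # swung out to 0.5 radians, momentarily at rest

print(len(virtual_states), "states the body can occupy; it actually occupies 1:", actual_state)
print("acceleration in the actual state:", pendulum_acceleration(actual_state[0]))

The grid is crude, of course; the point is only that the whole space of possible states belongs to the body’s structure even though only one point in it is ever actual at a time.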