If you're reading this (though I can't imagine there are many of you who might be), I've moved this yere blog over to a different service, which is easier for me to manage. Feel free to join me over at: https://dissoitopoi.home.blog.
All the stuff from this blog has been migrated over, so there's no need to fear losing access to the dozen things I wrote here in, like, 10 years.
Dissoi Topoi
A collection of bon mots, non sequiturs, and solecisms.
Monday, December 30, 2019
"Everything in Moderation" Doesn't Prevent Extremism
“Everything in moderation” (or EIM, for short) is a hegemonic commonplace in 21st-century American culture. That means it is a belief so commonly accepted as to be nearly unquestionable. And it’s a serious problem for supporters of democracy in the 21st century.
The reasons for the phrase’s popularity are easy to see. We need salt and fat in our diets, for instance, but too much salt or fat is unhealthy. A little liquor can be a nice way to celebrate a holiday with friends, but too much can lead to bad behavior, addiction, and death. Solitude is restorative; isolation is destructive. Illustrative examples of things that are good in moderation but bad in excess are countless, but I’ll stop here in a nod to moderation. “Everything in moderation” is so obviously a virtue that it hardly bears a second thought.
But one major problem with EIM being a hegemonic commonplace—that is, a belief we tend to take as a given—is that it is not always applicable. “Everything in moderation” can start to fall apart, for example, when we apply it to political philosophies. “We need a little totalitarian dictatorship, but you know, in moderation.” I’d wager that most people who live in democratic societies would find such a comment curious (I’d hope enraging, but I’d bet actual money on “curious”).
For that matter, “Democracy in moderation” is not a phrase I’d seriously expect to encounter in the wild, even on the internet where anything ridiculous can be found if one looks hard enough. I’m not talking about moderate democracy, I’m talking about democracy in moderation.
The reasons people don’t talk about “democracy in moderation” are pretty straightforward. We can talk about pragmatic limits on democracy—the need for representatives, for example, or the maintenance of non-public, and therefore non-democratic, decision making—but the idea that we just need a little democracy, not too much, just a smidge, is nonsensical. How do you have just a little democracy? “Democracy in moderation” raises the question of what other systems we’re also going to adopt in moderation. Some democracy in the spring, a light communism in the summer, maybe a nice monarchy in the fall, and a hearty totalitarian dictatorship in the winter, when it’s most likely to thrive?
My point here is that “everything in moderation,” reasonable though it may initially seem, is often not very good advice for making decisions. Sometimes, frankly, it just doesn’t belong in the conversation. But, like any hegemonic commonplace, “everything in moderation” is taken for granted, so it can show up in places it doesn’t belong, and it can short-circuit thinking even in areas where it doesn’t make very good sense. As a result, EIM can appear to be virtuous even when it’s actually really dangerous. Such is the case with politics and media in the current moment in America (and really, for at least the last few decades, but probably longer).
In late December, as just one example, Chuck Todd gave an interview to Rolling Stone to promote an upcoming MSNBC special about political misinformation and disinformation, which he is suddenly all worried about. Todd is the Political Director for NBC News and the current moderator of NBC’s wildly popular political show, Meet the Press. Despite being one of the most influential (and presumably most knowledgeable) political commentators in the country, Todd admitted that he was “just so absurdly naive in hindsight” about American politicians’ willingness to spread conspiracy theories and misinformation to accomplish their political goals. Todd was specifically talking about Republicans since Trump’s election in 2016, but the party affiliation is actually beside the point here.
As Jay Rosen helpfully points out at PressThink, a key premise of Meet the Press is that there is a basic symmetry between the two major political parties. Todd is just now realizing that symmetry doesn’t exist anymore, if it ever did. Rosen is not the only person to observe this premise, nor is it the first time he’s found good reason to comment on it. Nevertheless, the assumption of symmetry that Rosen notices really is grounded in the “everything in moderation” commonplace. EIM is the backbone of “both-sides” political press. If you have a Democrat on, it only makes sense to have a Republican for balance. Give everyone a voice and the plurality will naturally moderate the conversation.
As Todd (finally) figured out more than 2000 years into the common era, and probably at least 3000 years too late, the assumption that moderation will magically materialize if you include a little bit of “everything” is plainly not true. Never has been. There is no state of equilibrium to which political discussion naturally resolves. Apparently no one bothered to tell him, and he wasn’t smart or curious enough to figure it out on his own until, like, yesterday.
I’m picking on Todd here, but he is not uniquely blinkered. The New York Times, the Washington Post, and CNN, three supposedly liberal news organizations, are constantly playing the both-sides game, often to what seem like painfully ridiculous ends.
(The supposition that there are two sides is another major problem. In short, there aren’t two sides to every issue—there are an endless number of sides, but they’re collapsed into two because it’s more marketable. But that’s another problem for another day, or hey, read this post, which includes a fine explanation of the problem.)
But it’s not ridiculous if your guiding belief is “everything in moderation.” When EIM is your basis for decision making, even if it’s unconscious, then a little racist science—as long as it’s just a little—isn’t so bad (for an example, you can check out Bret Stephens’s recent NYT op-ed about the superiority of Ashkenazi Jews, but I’m not linking to it). Or short-circuiting discussions about whether or not a Supreme Court nominee sexually assaulted a woman (or several women) and lied about it is fair game—after all, everything in moderation. So maybe just a little good-hearted reflection on his carpooling prowess. Don’t get carried away, though! Anti-deliberative, rape-obscuring deflection should be done in moderation!
CNN, for its part, has repeatedly hosted and/or hired people pushing racist conspiracy theories (Lou Dobbs famously engaged in birtherism for months before he was taken off the air), people advocating discrimination and violence against LGBT people (at least until 2013, CNN regularly hosted guests from the Family Research Council, an organization deemed a hate group by the SPLC), and people accused of physically assaulting members of the press (dude, they hired Corey Lewandowski). Because “everything in moderation,” taken to a logical extreme, really can mean “EVERYTHING.” Not too much, of course. Just a little.
For CNN, the NYT, and WaPo, and frankly for the vast majority of politics and news organizations, EIM short-circuits their ability to think carefully about what they’re actually doing to public discourse. As a result, someone like Chuck Todd can sincerely claim in 2019 that he didn’t notice that some people weren’t playing by the moderate rules. It’s like they were taking advantage or something.
Put another way, even when “everything in moderation” starts to break down as a value for some people, it can remain an unquestioned value for others because it’s hegemonic. It’s just taken for granted. In those asymmetrical circumstances, people who are willing to act immoderately can wreak profound changes on the culture almost without challenge.
In and of themselves, profound cultural changes are not necessarily bad. But when powerful people like Chuck Todd, CNN executives, and decision makers at the NYT and WaPo ignore those changes and carry on as if everything is the same as it always was, it’s a recipe for bad things. Because they’ve been including “fascism in moderation” and “Nazism in moderation” and “white supremacy in moderation” in their “moderate” coverage for decades without stopping for two minutes to think about whether that’s a good standard to work by. And they’ve blithely ignored everyone yelling at them to pull their heads out of their moderation.
Of course, the appropriate reaction to “everything in moderation” is not “everything to the extreme.” The appropriate next step, which Todd is hopefully sincerely beginning to take, is to begin to notice how EIM and false balance shape our thinking and how they allow coverage, commentary, and conversation to progress in ways that we don’t want them to. And when you have the chance, to act and advocate differently.
Not everyone is Chuck Todd, however, and not everyone has the opportunity to reshape the political landscape. But “everything in moderation” also has serious, and potentially damaging, effects on how voters look at the candidates. At the moment, that means Democratic voters. There is a pretty common assumption—in the political press and also among individual voters—that there is a sort of moderate equilibrium on the table in the 2020 election. But moderation is a chimera. It’s a moving target in politics and always has been. Or rather, moderation as a value can appear to stay constant even when the grounds for moderation have moved radically to one extreme or another. But it ain't.
Aiming at moderation does not absolve us of the responsibility of looking carefully at our candidates, at their records, at their policies and actions. It is not a substitute for making careful, thoughtful decisions. Moderation will not save our politics any more than it’s saved Chuck “Absurdly Naive in Hindsight” Todd from looking like a novice.
Labels:
both sides,
ideology,
language,
moderation,
politics,
power,
rhetoric
Tuesday, August 9, 2016
Burke's Unending Conversation in the Social Media Age
Imagine that you enter a conversation thread. When you arrive, others have long preceded you, and they are engaged in a heated discussion, a discussion too heated for them to even notice that you’re there, but it’s okay because it’s all archived. You listen for a while, like/heart a few things, and click through to various links, until you decide that you have caught the tenor of the argument; then you put in your oar. Someone answers; you answer them; another comes to your defense with an infographic; another aligns himself against you, to either the embarrassment or gratification of your opponent, depending upon the quality of your ally’s clickbait title. Someone calls someone else Hitler. However, the discussion is interminable and you don’t really care that much—you were just trolling. The hour grows late, you must depart. And you do depart, with the discussion still vigorously in progress, but it’s okay, because you can always read the rest of the comments later if you’re so inclined.
Kenneth Duva Burke, from the Philosophy of Literary Form in the Social Media Age
Friday, February 12, 2016
‘Remediation’ in the United States: A Brief History
‘Remediation’ in the United States: A Brief History
San José State University Academic Retreat
February 11, 2016
My area of research is the history of writing instruction in American higher education. More specifically, I study the functions that writing instruction plays in institutions of higher education. The organizers of today's event invited me to talk about the history of remediation in America because, as it happens, writing instruction—often combined with reading instruction—is the commonest form of remedial education in American higher education, and it has been for nearly two centuries.
Before I get into the history, I want to set out some basic parameters. First, remediation is a relatively specific term that refers to programs designed for quote/unquote “underprepared students.” The assumption, and often the accompanying assertion, is that remedial students did not get some key aspect of the education they were supposed to bring to college. Remediation, therefore, is commonly cast as the repetition of some fundamental training that students should already have mastered.
Second, in keeping with this definition, remedial education nearly always refers to “Basic Skills” —usually reading, writing, math, and/or study skills. It is worth noting that designating reading, writing, math, and studying as “basic skills” reinforces a pervasive belief that they are sub-collegiate. As I noted before, remediation refers to relearning fundamental skills, and those fundaments—reading, writing, math, and studying—are therefore designated as sub-collegiate.
The definition and domains I’ve sketched here are well understood and widely used, and they feed a long-standing belief that remediation is transitory—that we are in a period of uniquely underprepared students, and that we might one day reform education to the point where remediation is not necessary. However, what my historical work suggests is something like the opposite. In fact, what I plan to argue today is that remediation and American universities are, and have always been, mutually dependent.
There is no better place to start such a history than at the beginning. Harvard College was the first institution of higher education founded in what is now the United States. It was established by the Massachusetts legislature in 1636 as a sort of finishing school for future clergymen. Harvard’s earliest curriculum was a thoroughly classical one, conducted primarily in the so-called “learned languages.” As such, the entrance requirements were simply that students could read and speak Latin and Greek. Despite its reputation as a [QUOTE] “haven for the world’s most ambitious scholars,” matriculating students were not always fully prepared. Consequently, beginning in 1636, Harvard offered remedial Latin classes.
Harvard’s remedial Latin class offers us some useful points of consideration. For one, it suggests the omnipresent tension in remedial education between access on the one hand and standards on the other. Harvard offered remedial Latin because the school needed matriculants in order to stay solvent. There was not a ready supply of college-bound youth in the American colonies, and Harvard’s survival literally depended on its ability to attract students who were geographically close enough to attend and pay tuition. Harvard educators made the college accessible to whomever they could and provided whatever education students needed.
Harvard’s Latin course also points to the aspirational nature of college standards. Harvard faculty and administrators wanted students to be fluent in Latin prior to entering college, but that was an unrealistic expectation under the circumstances. Hence the need for remedial Latin. In other words, Harvard’s early standards were actually goals, not standards. For the next two centuries or so, Harvard was exemplary where remediation was concerned. Throughout the 17th and 18th centuries, as new colleges were founded across the colonies and states, faculty and administrators went to great lengths to attract enough students to stay in business. Despite explicit “standards” that required students to be prepared before entering, underprepared students seemed to proliferate. Consequently, remediation was a ubiquitous survival mechanism.
In the early 1800s, as colleges strengthened, remediation took on a new cast in American higher education. Specifically, it became a more overtly political tool. In 1828, for example, a faculty committee at Yale published a report designed to defend the classical curriculum against the encroachment of modern, elective curricula. Among the arguments for maintaining the classical curriculum was that Yale (and other colleges) could provide appropriate remediation in the classical subjects (which, by that point, included, quote, “VULGAR ARITHMETIC”), but faculty would be less capable of serving underprepared students if the fields were expanded too greatly.
One important aspect of this argument is that it conjures remediation in support of a distinctly political cause—defending the classical curriculum—by attempting to balance access for students with quote “deficient preparation” against the standards of the college, which the authors claimed were rising every year. Perhaps more important, however, is the assertion of rising standards. In fact, Yale’s standards had risen considerably in the century and a quarter between when it was opened and when the Yale report was produced. The defense of the classical curriculum belies the fact that the curriculum had changed a great deal over the years. What had been courses for graduating seniors in 1701 had gradually moved toward the first year or even down into the secondary schools by the early 19th century (Broome 35-37). Even more plainly, when the Yale report was published, what counted as “deficient preparation” was very different than it had been just decades earlier. This was the case at virtually every institution of higher education.
Yale’s ever-rising standards are important in a history of remediation because they indicate that standards are moving targets. Remember I noted earlier that standards are aspirational. Once they’re achieved to any meaningful degree, standards are reimagined. This process of raising standards, which is observable throughout the history of American education, undermines one of the important claims made by the Yale authors, which is that remediation is temporary. As the argument went, if the classical curriculum was maintained, the lower schools would eventually get up to speed, and students eventually wouldn’t need remedial education. But the persistently rising standards suggest that remediation is absolutely necessary. In fact, remediation is indispensable for balancing the goals of access and standards, because it allows institutions to claim that their standards are rigorous enough to require remedial education as a bridge for underprepared students.
Through the course of the 19th century, the argument that remediation was transitory took on new dimensions, usually in keeping with evolving political goals. In 1848, the University of Wisconsin opened, and the next year, despite administrators' dismay at having to do so, it opened a preparatory department to bridge the gap between secondary and post-secondary education. The department lasted 31 years, and it was replaced with remedial classes, rather than a department, soon after. The University of Wisconsin was hardly unique in this regard.
In the 1870s and 1880s, Harvard sought to attract public high school students by offering an elective curriculum. Although the goal was to boost enrollments, Harvard’s efforts were predicated on arguments bemoaning the woeful underpreparation of students. When the elective curriculum was fully in place, several courses were required in the first year, all but one of which Harvard’s president called [QUOTE] “obviously matters which properly belong to the secondary schools” [END QUOTE]. In other words, they were remedial. The notion was that they would eventually be returned to secondary schools, once the high schools got up to speed. One of the remedial courses Harvard introduced at that point was the original first-year composition course, English A, which became a model for first-year composition nationally. First-year composition has been attacked routinely and viciously over the years as belonging in high schools, and yet it has enrolled more students than any other course in the past century.
To read through the history of remediation is to discover that on the one hand, administrators, faculty, trustees, legislators, and commentators have been complaining about the need for remedial education for centuries. But the goal of eliminating remediation, as I’ve been arguing, is tied to the moving target of standards and the unfortunate albatross of access. The results of this situation—in which standards move, access must be maintained, and remediation persists—are complicated. Burgeoning numbers of students with diverse cultural and educational backgrounds have attended colleges and universities over the past two centuries because of access to remedial education. According to recent research, underprepared students are often very well served by remediation, and many go on to graduate in similar timeframes as their [QUOTE/UNQUOTE] college-level counterparts.
At the same time, the myth that we will rid ourselves of remediation has persisted. Despite the demonstrable shifts in what constitutes “college-level” reading, writing, math, and study skills, producing well-wrought laments about the dire state of education and the regrettable need for remedial work is practically a cottage industry. Often these complaints have described courses or students using highly derogatory labels including “bonehead courses,” “the awkward squad,” and “the vandal horde.” Likewise, because of the belief that they are “sub-collegiate” basic skills courses, remedial courses have often been outsourced, degraded, defunded, and even outlawed.
Even when schools have attempted to rid themselves of remediation, however, it nearly always creeps back in. In 1960, the California Master Plan for Higher Education relegated remedial instruction to California’s Community Colleges. Soon thereafter, the California Legislature cut funding to the CSU that had been earmarked for writing clinics that served underprepared students. Less than a decade later, in 1974, the CSU issued a report indicating the need for remedial courses, and two years after that the CSUs introduced the English Placement Test and remedial courses. The UCs, as well, have long found ways to accommodate underprepared students even when they were pretending not to accept them.
Rather than continuing to enumerate examples, I want to finish by returning again to Harvard. From what I can ascertain, Harvard has offered some form of remedial work in almost every year of its existence, even when it wasn’t officially requiring remedial courses. In 1914, after about 20 years of requiring remedial students to take “special instruction,” Harvard finally reintroduced an official remedial writing course. The course shifted and changed over the years, and in the 1970s it was renamed “Expos 10: Introduction to Expository Writing.” It remains on the books to this day. What is so striking about this remedial class is that Harvard is so unlike most other post-secondary institutions in America. In 2015, it accepted fewer than 6% of its applicants—fewer than 2,000 of the more than 37,000 people who applied. These are some of the most accomplished applicants in the world—maybe the history of the world. And Harvard still needs to offer remedial writing courses for some of them.
It’s worth asking why, and the simple answer is that students need reading, writing, math, and study skills instruction. Notwithstanding the political value of remediation in arguments about post-secondary education, the history of remediation helps us to recognize that basic skills are not basic in the sense of being prior to more complex instruction—they are basic in the sense of being fundamental to all instruction. Remediation is not going away any time soon, and perhaps counterintuitively, the final expulsion of remediation would be a bad sign for higher education because it would signal the end of aspirational standards. Remedial education allows universities to serve student needs and pursue higher standards, and I think we would do well to recognize it.
Friday, October 30, 2015
In, On, and Of Depression
I have suffered from clinical depression, to a greater or lesser degree, for at least the past two decades. Within that time, I have generally been fine (or at least functional), but I have also experienced a couple of prolonged episodes of severe depression, one of which I have recently started to emerge from. If I had to guess, I’d say the first signs of this episode started sometime in the late summer of 2013 and got progressively worse over the next year. But the very worst of it—which included severe anxiety and insomnia—started in August of 2014 and lasted for about 8 months. The major causes are pretty easy to point to: (1) a close friendship ended in a dramatic and protracted way, and (2) I was under a tremendous amount of stress as a couple of high-stakes deadlines coincided with being on the job market (another indescribably stressful process, at least for me). These two extreme stressors (along with any number of other, less momentous stressors) resulted in the anxiety and insomnia. In short, I was constantly stressed out, exhausted, and anxious, all of which worked together in a destructive feedback loop. It was a really tough period, and though it started to improve significantly last April, I’ve only really started to feel like myself again in the past month or so (and that intermittently).
The funny thing (ha!) is I didn’t even recognize the last two years as a bout of depression until after the worst part was already over. (Actually, I didn't even recognize it then--my mom pointed it out.) I knew I was stressed and tired and anxious, but there were all sorts of things I could point to as the direct causes of my distress. In retrospect, however, the distress did not go away when the apparent causes did. In fact looking back, I recognize whole stretches of time I lost to avoidance…of anything and everything. I would hole up in my office, turn off the lights, and basically hide. How anything got done in those two years, I don’t think I’ll ever be able to fully explain.
The point of reflecting on these past two years is not to solicit sympathy, but to mark a point of progress. There is therapeutic value in being able to identify a period of real misery and begin to pull at some of the various threads. What were the causes of the misery? What were the symptoms and the signs, especially the early signs of bad things a-comin’? What could I have done differently? Sooner? With more care? What was good despite the misery? And what were the consequences? No need for comprehensive answers to any of those questions here. Lots of people have written about the personal toll of depression. I’ve taken a lot of solace in the essays, blog posts, cartoons, and revelations of fellow depressives, especially Allie Brosh’s “Hyperbole and a Half” (part 1), (part 2) and the Bloggess’s blog (part everything). Lots of commentators also note the collateral damage of depression for people near and dear to the person suffering. I won’t rehearse those things here, which have been well rehearsed elsewhere.
I do, however, want to note one consequence I’ve only recently recognized, which has to do with the effects of my depression on other people. Specifically, I missed/passed up a lot of opportunities in the past few years to offer comfort to other people who were in need of a kind word or gesture because I was so busy trying to keep my head above water. At first, I thought of these missed opportunities as a form of selfishness. I think this is pretty obvious. When you’re depressed—even mildly depressed—sometimes you just don’t notice other people’s suffering, even when their suffering is severe. When every moment of the day requires crisis-level triage, you overlook a lot of things that aren’t your crisis. Such self-interest may be justifiable for someone struggling with depression, but it still wears on the people around you.
Even when you do notice, it can be hard to muster the energy to attend to other people. Without going too far down the rabbit hole, I think Christine Miserandino’s “Spoon Theory” is useful here. Spoon Theory essentially says that people struggling with mental and/or physical disabilities have to be very careful about how they ration their energy. Sometimes running to the store for a greeting card, writing a note, and sticking it in the mail can obliterate an entire day’s spoons. When I'm depressed, I get stingy about caring for others, even in relatively meager ways, because I’m hoarding spoons, knowing full well that if I spend them too quickly or too frivolously, I’ll regret it later. And once I start hoarding spoons, it becomes increasingly easy to justify hoarding. It becomes habitual, made ever easier by the fact that I don’t always notice other people’s suffering to begin with.
But when it comes to comforting others, the real tragedy may be in those times—more common than they might seem—during which you (that is, I) notice someone you care about who is in need of comfort, you have the energy to comfort them, but you convince yourself they wouldn’t want to hear from you. In other words, you convince yourself you can’t be a source of comfort to people you care about, and even that your comfort may cause them additional pain. (Can I stop here to say: just typing out those last two sentences really sucked.) In this case, it's not selfishness at work--it's a weird, stupid, self-destructive selflessness. It's a huge loss for everyone involved, and it's usually invisible to everyone (as well as another feedback loop).
Emerging from depression has been (and is) a very slow process for me, in part because I feel like I have to relearn some pretty basic human interaction stuff, the muscles for which have atrophied. And taking stock at points along the way is helpful. And sometimes writing stuff down is a good way to remind myself that the cock-ups that result from depression are not elemental failings that spring forth from the unalterable core of my being (unlike melodrama, which is apparently part of my very fabric). And then noticing that I spent the time and energy to write something down helps to reassure me that I’m continuing to emerge. And hopefully that will help me to remember how to do some of those human interaction things, like comforting the people I care about when they need it. All of this to say, if I err on the side of over-comforting you in the next couple of months, or if I'm just generally weird, I’d appreciate it if you’d chalk it up to relearning how to be a functional human person.
Labels:
depression,
emotions,
job market,
relationships
Wednesday, November 26, 2014
On Sadness after Ferguson, With Apologies to All the Victims of Injustice
For many years, prominent activists have meticulously documented the structural racism, sexism, classism, and other pervasive, systemic injustices that define the lives of Americans--all Americans, whether to their benefit or to their detriment--systemic injustices so well-worn that they can be hard to see even for people who suffer the consequences.
Of course, people have been documenting structural, systemic injustices for centuries. Although it undoubtedly started many centuries earlier, I'm thinking of Mary Wollstonecraft in the late 1700s and Cherokee activists in the early 1800s; I'm thinking of the Seneca Falls women's rights activists in 1848, and Sojourner Truth, and Ida B. Wells; I'm thinking of W.E.B. DuBois, and Alice Paul, and Wilma Mankiller; I'm thinking of Martin Luther King, and Malcolm X, and Fannie Lou Hamer, and Gloria Steinem; I'm thinking much more recently of Ta-Nehisi Coates, and Anita Sarkeesian, and Tim Wise. The list is endless, and it is filled with people whom I respect and trust because the work they're doing to document and confront injustice is hard, and thankless, and even dangerous.
I often take comfort in these people--that they exist and that they have taken responsibility for confronting the persistent march of injustice that results in death, hate, violence, rape, theft--usually in the lives of people who are already disenfranchised. I try to emulate them when and where I can. But while there is often comfort in knowing that social justice activists are at work, and there is therefore a horizon against which to (try to) orient my own actions for social justice (however meager they may be), I find that today I cannot muster anything but profound sadness.
I am sad that so many people in Ferguson and around the country today feel like the only recourse they have for voicing legitimate concerns about the deaths of their loved ones is rage. I'm sad that I think they're mostly right.
I'm sad that people will wake up in Ferguson today and tomorrow to the smell of smoke and burned out buildings. I'm sad that I feel helpless to do anything more than post FB links and talk to my children and teach about structural racism in my classes, knowing full well that the people who most need to hear the message are the ones most likely (and able) to ignore it.
I'm sad that people I know and love, as well as people I've never met, can't hear the expressions of deeply held pain and sorrow in the narratives that so many people--people of color and women and other human targets of systemic injustice--tell about their daily lives. I'm sad that someone, somewhere is profiting off of all this sadness and happily stoking the fire while people die and cities burn.
I'm sad that I catch myself backing away from saying anything because I don't want to make people uncomfortable, even when I purport to be (and try to be) an advocate for social justice and an ally to the activists who do the real work of confronting injustice. I'm sad that I don't even know how to talk or think about any of it without feeling despair and anger and fear and utter shame.
And I know that my sadness is meager--disproportionate to any injustices I personally face and unequal to the demands of social justice that I desire. I know, in fact, that my sadness in this moment may even be mawkish and insulting to the people who are facing injustice, whether as target or documentarian or both. And I know as well that my brand of sadness is a form of luxury, available only to the people like me who can step away from systemic injustice long enough to wallow. In the end, my sadness will not change anything, and I know this, too. I can even admit, as much as I do not want to, that my sadness may even be detrimental to the cause in some way. But sadness is, at least for now, what I have.
Someone someday may accuse me of not doing enough to advance the cause of social justice, and I have to live with the reality that they will probably be right. But I hope in that moment, people will not say I did nothing. I hope I will earn at least that caveat. For now, I don't know what else to do but wallow and post links and talk to my children and teach my classes and be sad. Tomorrow I'll try again to do something better.
Tuesday, August 19, 2014
Why College Writing Matters
“Why College Writing Matters”
Ryan Skinnell
UNT Writing Program Orientation
August 20, 2014
Those of you who know me, and especially those of you who have taken my classes, know that I am pretty cynical about composition. My invitation to talk today about “Why College Writing Matters” may therefore seem like the height of irony. But obviously I don’t think it is, otherwise I would’ve had to politely decline Dr. Yeatts’s gracious invitation. In fact, although I am cynical about composition, I am also rather idealistic about why college writing matters, and especially why teaching college writing matters.
For me, the crucial distinction to be made at this point is between composition and writing. The former, to my mind, is a uniquely institutional creature, caught up as it is in discussions about FTEs, assessment reportage, and degree plans. Probably synergies and deliverables, too. Composition in this sense is 150 years old, give or take. Writing, on the other hand, is a couple millennia older and it is neither organically nor necessarily linked to institutions of higher education.
Here’s the other thing about writing—it’s really hard to define in a satisfactory way. I can say, for instance, that my 3-year-old daughter is learning to write just as truthfully as I can say that I am still learning to write. The common thread that ties these two usages, and many others, together is that writing is a form of cultural participation. My daughter is in the early stages of learning to participate in a broadly literate culture; I like to think I’m a little further along in learning to participate in a series of much more focused professional, public, and personal cultures.
Both composition and writing matter, in their own ways. Both, in fact, are absolutely necessary for our existence as college writing teachers. But unlike composition, writing matters in a way that we—as writing teachers—can actually affect on a regular, individual, personal basis. My idealism about writing, then, is grounded in a deep-seated optimism about the possibility of participating meaningfully in culture and about the possibility of helping students to do so, as well.
The remainder of what I have to say today is essentially an amplification of this theme. I’ve broken it down roughly into three sections to tease out what I think are some useful distinctions as we approach the beginning of a new semester. First, I plan to talk more specifically about teaching; second, about culture; and third, more properly, about today’s theme, why writing matters.
First, teaching. I hope you will forgive what is sure to be a classic case of preaching to the choir. I am quite certain that we all believe teaching is important—some of us maybe even think teaching is virtuous. No doubt this belief is reinforced by how challenging good teaching is. In a recent essay for Slate magazine, former Yale professor William Deresiewicz writes that [QUOTE] “College teaching, like any other kind, is a slow, painstaking, difficult process.” [UNQUOTE]. It is complex, uncertain, and often ambiguous. [I warned you that I would be preaching to the choir.]
The thing is, teaching is also fundamentally a hopeful process. We believe that students can learn, and in so doing, can be better, smarter, happier, and more successful people. And we believe that we can help.
To be sure, the slowness and difficulty and painstakingness that Deresiewicz talks about can make the hopefulness hard to maintain. As you’re all undoubtedly aware, when you spend days and weeks and months with students who seem not to get what you’re trying to teach them, it is hard not to come away feeling a little frustrated—maybe even occasionally dejected. And even when teaching works well, the evidence of success may be invisible for weeks or months or years—which can mean “invisible forever” to teachers whose students move on after a semester or two. Yet, despite all the reasons to be frustrated or dejected—and there are plenty—teaching remains at its core an optimistic endeavor.
Recently, I had to go see my doctor to update some vaccinations. This was the first time I’d been to see this doctor and he asked me what I do for a living. I bet you can guess what his response was when I told him I teach writing. Right on cue, as if reading from a script:
“Oh, that’s fantastic. I don’t have to tell you” [BUT OF COURSE HE DID], “Kids today can’t write. It’s all text speak and tweeting.” Etc. etc. etc.
I have a standard response to these kinds of conventional complaints about “kids these days.” I always respond, “It’s a good thing they can’t write—otherwise I’d be out of a job.” Depending on how snarky I’m feeling, I might also add, “After all, if they could already write, they wouldn’t need college, would they?”
I relate this anecdote for two reasons. The first, of course, is that it’s a common experience for writing teachers. You could replace doctor with “stranger on an airplane” or “grocery bagger” or “President of the United States” and predict much the same response. Heck, you could replace doctor with “President of the university” and the script would probably stay the same. I find this routine intensely aggravating, and many of the teachers I know do as well.
But the second reason I tell this story is the reason teachers tell it to each other all the time—it reminds us how hopeful we are about teaching and about students. It illustrates what I think is the hopeful difficulty of teaching—and especially of teaching writing. I am aggravated by the conventional “Kids these days” script because I believe that what I do is worthwhile. And I believe it is worthwhile because I am really, truly, deeply optimistic about students.
I hope that those of you who have taken classes with me (or soon will) know this to be true. For all my theatrical blustering, I know that students are enriched by education and by good teachers (a designation to which I fervently aspire). I am hopeful FOR students and I am hopeful BECAUSE of them. I’ll wrap this section here with a quick summation lest I start gushing—teaching is intrinsically an optimistic undertaking, even when it is otherwise burdened by frustrations and delayed gratification.
Which brings me to section two: This section will be both shorter and less optimistic than section one. My academic background is in rhetoric, and it is my unequivocal belief that the study of rhetoric is also fundamentally optimistic. Rhetoricians are relentless in their pursuit of positive change through the power of language. We’re practically a bunch of Pollyannas.
Except, of course, the insistence on bringing about positive change also incessantly directs rhetoricians’ attention to all the negative things that need changing. As it happens, culture—broadly defined—is a roiling mess of negative things that need changing. Cultures—more narrowly defined—are also rife with things that demand positive change. And, as we see in any number of spots around the university, the community, the nation, and the world, the need for positive change is intense. It is also interminable and controversial.
I suppose this, like the previous section, is preaching to the choir. Nevertheless, it returns me to my definition of writing as cultural participation. What we are enlisted to do as writing teachers is to help our students make sense of and thoughtfully participate in cultures that are roiling messes in need of positive change. So here’s where we are so far: teaching is an optimistic pursuit, and culture is intensely fraught. Based on my definitions, then, teaching writing is a grand paradox.
You have no doubt noticed a theme in my talk by now. To wit: the abundant rewards of Teaching…Writing…and Teaching Writing are always complicated by serious obstacles. But, of course, another way to say this is that the slow, painstaking, difficult process of teaching writing is always accompanied by serious rewards. This, to my mind, is precisely why college writing matters. College writing matters because, done well, it is an invitation for students to encounter the complex challenges of the worlds they live in—to participate in them—by holding out the possibility of actually making positive cultural change.
Granted, this is not a one-to-one transaction. College writing does not necessarily make positive cultural change. In Writing at the End of the World, Richard Miller confronts this uncomfortable truth directly. He writes, [QUOTE] “The dark night of the soul for literacy workers comes with the realization that training students to read, write, and talk in more critical and self-reflective ways cannot protect them from the violent changes our culture is undergoing” [END QUOTE]. His examples include 9/11, Columbine, and pervasive sexual assault on college campuses, among others. We might update them to include Afghanistan, Ferguson, and countless more.
Not cheering stuff, and not likely to be solved by 15 weeks’ worth of exposure to Ancient Rhetorics or The Shallows or a half dozen progymnasmata exercises. But Miller is a committed teacher, and as such he cannot help but betray his hopefulness. Miller is unmoved by claims that reading or writing will empower students, or make them good academic arguers, or save them from the blight of modern life. He’s not a romantic. Nevertheless, he sees real value in teaching students to [QUOTE] “use writing as a technology to think with.”
At the risk of oversimplifying Miller’s subtle arguments, I read him to say that writing is a way to explore, confront, and engage in the concerns that shape our lives—not just students’ lives, but our lives as well. College writing may not solve the world’s greatest problems, and in fact it may be one of the world’s problems, depending on who you talk to. But college writing nevertheless matters because it can help students make sense of and thoughtfully participate in cultures that need positive change. And college writing matters because it does not abandon those processes for students to figure out alone. As I said earlier, whatever the frustrations, teaching writing matters because writing teachers can contribute meaningfully to students’ participation in their worlds through regular, individual, personal interactions.
Perhaps it would be worthwhile here to reframe this point about why college writing matters. My doctor thinks college writing solves problems—this is the crux of his complaint about “kids these days,” with their twitterizing, and their text messaging, and their fashionable ankle monitors. But of course he is exactly wrong. Or more precisely, solving problems is the purview of composition—which is assessable and achieves outcomes.
As I said in the beginning, composition matters, but composition is not writing. College writing, in the way Miller invites us to see it, doesn’t solve problems so much as it reveals them. Teaching, in the way Deresiewicz invites us to see it, likewise reveals problems. In this sense, the paradox of college writing is its greatest strength. The paradox of college writing exemplifies the hope and difficulty of teaching writing more generally.
And, of course, it’s a beginning, not an end. After 15 weeks of strict drilling, my students would probably not impress my general practitioner with their erudition, nor even probably their punctuation.
And what I am asking them to do is much harder in the long run than excising dangles from their modifiers.
And they may choose not to accept the invitation, and they may defer the invitation until long after we’ve parted ways.
And in the end, of course, the composition creature must be fed. Miller again: [QUOTE] “Students must attend; teachers must be present; papers must be solicited, assessed, returned. And, regardless of what actually happens during this process, instruction must be said to have occurred” [END QUOTE].
And all of these things conspire to undercut the hopefulness of teaching.
And yet they are exactly why college writing matters. Because while the college writing classroom is certainly not the only place that students are invited to discover the problems of the world around them, it remains, against all odds, a hopeful place for discovering ways to participate in the process of solving them.