Tuesday, August 9, 2016

Burke's Unending Conversation in the Social Media Age

Imagine that you enter a conversation thread. When you arrive, others have long preceded you, and they are engaged in a heated discussion, a discussion too heated for them to even notice that you’re there, but it’s okay because it’s all archived. You listen for a while, like/heart a few things, and click through to various links, until you decide that you have caught the tenor of the argument; then you put in your oar. Someone answers; you answer them; another comes to your defense with an infographic; another aligns himself against you, to either the embarrassment or gratification of your opponent, depending upon the quality of your ally’s clickbait title. Someone calls someone else Hitler. However, the discussion is interminable and you don’t really care that much—you were just trolling. The hour grows late, you must depart. And you do depart, with the discussion still vigorously in progress, but it’s okay, because you can always read the rest of the comments later if you’re so inclined.


Kenneth Duva Burke, from The Philosophy of Literary Form in the Social Media Age

Friday, February 12, 2016

‘Remediation’ in the United States: A Brief History


‘Remediation’ in the United States: A Brief History
San José State University Academic Retreat
February 11, 2016


My area of research is the history of writing instruction in American higher education. More specifically, I study the functions that writing instruction serves in institutions of higher education. The organizers of today's event invited me to talk about the history of remediation in America because, as it happens, writing instruction—often combined with reading instruction—is the commonest form of remedial education in American higher education, and it has been for nearly two centuries.

Before I get into the history, I want to set out some basic parameters. First, remediation is a relatively specific term that refers to programs designed for quote/unquote “underprepared students.” The assumption, and often the accompanying assertion, is that remedial students did not get some key aspect of the education they were supposed to bring to college. Remediation, therefore, is commonly cast as the repetition of some fundamental training that students should already have mastered.

Second, in keeping with this definition, remedial education nearly always refers to “Basic Skills”—usually reading, writing, math, and/or study skills. It is worth noting that designating reading, writing, math, and studying as “basic skills” reinforces a pervasive belief that they are sub-collegiate. As I noted before, remediation refers to relearning fundamental skills, and those fundaments—reading, writing, math, and studying—are therefore designated as sub-collegiate.

The definition and domains I’ve sketched here are well understood and widely used, and they feed a long-standing belief that remediation is transitory—that we are in a period of uniquely underprepared students, and that we might one day reform education to the point where remediation is not necessary. However, what my historical work suggests is something like the opposite. In fact, what I plan to argue today is that remediation and American universities are, and have always been, mutually dependent.

There is no better place to start such a history than at the beginning. Harvard College was the first institution of higher education founded in what is now the United States. It was established by the Massachusetts legislature in 1636 as a sort of finishing school for future clergymen. Harvard’s earliest curriculum was a thoroughly classical one, conducted primarily in the so-called “learned languages.” As such, the entrance requirements were simply that students could read and speak Latin and Greek. Despite its reputation as a [QUOTE] “haven for the world’s most ambitious scholars,” matriculating students were not always fully prepared. Consequently, beginning in 1636, Harvard offered remedial Latin classes.

Harvard’s remedial Latin class offers us some useful points of consideration. For one, it suggests the omnipresent tension in remedial education between access on the one hand and standards on the other. Harvard offered remedial Latin because the school needed matriculants in order to stay solvent. There was not a ready supply of college-bound youth in the American colonies, and Harvard’s survival literally depended on its ability to attract students who were geographically close enough to attend and pay tuition. Harvard educators made the college accessible to whomever they could and provided whatever education students needed.

Harvard’s Latin course also points to the aspirational nature of college standards. Harvard faculty and administrators wanted students to be fluent in Latin prior to entering college, but that was an unrealistic expectation under the circumstances. Hence the need for remedial Latin. In other words, Harvard’s early standards were actually goals, not standards. For the next two centuries or so, Harvard was exemplary where remediation was concerned. Throughout the 17th and 18th centuries, as new colleges were founded throughout the colonies and states, faculty and administrators went to great lengths to attract enough students to stay in business. Despite explicit “standards” that required students to be prepared before entering, underprepared students seemed to proliferate. Consequently, remediation was a ubiquitous survival mechanism.

In the early 1800s, as colleges strengthened, remediation took on a new cast in American higher education. Specifically, it became a more overtly political tool. In 1828, for example, a faculty committee at Yale published a report designed to defend the classical curriculum against the encroachment of modern, elective curricula. Among the arguments for maintaining the classical curriculum was that Yale (and other colleges) could provide appropriate remediation in the classical subjects (which, by that point, included, quote, “VULGAR ARITHMETIC”), but faculty would be less capable of serving underprepared students if the fields were expanded too greatly.

One important aspect of this argument is that it conjures remediation in support of a distinctly political cause—defending the classical curriculum—by attempting to balance access for students with, quote, “deficient preparation” against the standards of the college, which the authors claimed were rising every year. Perhaps more important, however, is the assertion of rising standards. In fact, Yale’s standards had risen considerably in the century and a quarter between when it opened and when the Yale report was produced. The defense of the classical curriculum obscures the fact that the curriculum had changed a great deal over the years. What had been courses for graduating seniors in 1701 had gradually moved toward the first year or even down into the secondary schools by the early 19th century (Broome 35-37). Even more plainly, when the Yale report was published, what counted as “deficient preparation” was very different than it had been just decades earlier. This was the case at virtually every institution of higher education.

Yale’s ever-rising standards are important in a history of remediation because they indicate that standards are moving targets. Remember, I noted earlier that standards are aspirational. Once they’re achieved to any meaningful degree, standards are reimagined. This process of raising standards, which is observable throughout the history of American education, undermines one of the important claims made by the Yale authors, which is that remediation is temporary. As the argument went, if the classical curriculum was maintained, the lower schools would eventually get up to speed, and students eventually wouldn’t need remedial education. But the persistently rising standards suggest that remediation is absolutely necessary. In fact, remediation is indispensable for balancing the goals of access and standards, because it allows institutions to claim that their standards are rigorous enough to require remedial education as a bridge for underprepared students.

Through the course of the 19th century, the argument that remediation was transitory took on new dimensions, usually in keeping with evolving political goals. In 1848, the University of Wisconsin opened, and the next year, despite their dismay at having to do so, administrators opened a preparatory department to bridge the gap between secondary and post-secondary education. The department lasted 31 years, and soon after it closed it was replaced with remedial classes rather than a department. The University of Wisconsin was hardly unique in this regard.

In the 1870s and 1880s, Harvard sought to attract public high school students by offering an elective curriculum. Although the goal was to boost enrollments, Harvard’s efforts were predicated on arguments bemoaning the woeful underpreparation of students. When the elective curriculum was fully in place, several courses were required in the first year, all but one of which Harvard’s president called [QUOTE] “obviously matters which properly belong to the secondary schools” [END QUOTE]. In other words, they were remedial. The notion was that they would eventually be returned to the secondary schools once the high schools got up to speed. One of the remedial courses Harvard introduced at that point was the original first-year composition course, English A, which became a model for first-year composition nationally. First-year composition has been attacked routinely and viciously over the years as belonging in high schools, and yet it has enrolled more students than any other course in the past century.

To read through the history of remediation is to discover that administrators, faculty, trustees, legislators, and commentators have been complaining about the need for remedial education for centuries. But the need persists because, as I’ve been arguing, it is tied to the moving target of standards and the unfortunate albatross of access. The results of this situation—in which standards move, access must be maintained, and remediation persists—are complicated. Burgeoning numbers of students with diverse cultural and educational backgrounds have attended colleges and universities over the past two centuries because of access to remedial education. According to recent research, underprepared students are often very well served by remediation, and many go on to graduate in timeframes similar to those of their [QUOTE/UNQUOTE] college-level counterparts.

At the same time, the myth that we will rid ourselves of remediation has persisted. Despite the demonstrable shifts in what constitutes “college-level” reading, writing, math, and study skills, producing well-wrought laments about the dire state of education and the regrettable need for remedial work is practically a cottage industry. Often these complaints have described courses or students using highly derogatory labels including “bonehead courses,” “the awkward squad,” and “the vandal horde.” Likewise, because of the belief that they are “sub-collegiate” basic skills courses, remedial courses have often been outsourced, degraded, defunded, and even outlawed.

Even when schools have attempted to rid themselves of remediation, however, it has nearly always crept back in. In 1960, the California Master Plan for Higher Education relegated remedial instruction to California’s community colleges. Soon thereafter, the California Legislature cut funding to the CSU that had been earmarked for writing clinics that served underprepared students. Less than a decade later, in 1974, the CSU issued a report indicating the need for remedial courses, and two years after that the CSU introduced the English Placement Test and remedial courses. The UCs, as well, have long found ways to accommodate underprepared students even when they were pretending not to accept them.

Rather than continuing to enumerate examples, I want to finish by returning again to Harvard. From what I can ascertain, Harvard has offered some form of remedial work in almost every year of its existence, even when it wasn’t officially requiring remedial courses. In 1914, after about 20 years of requiring remedial students to take “special instruction,” Harvard finally reintroduced an official remedial writing course. The course shifted and changed over the years, and in the 1970s it was renamed “Expos 10: Introduction to Expository Writing.” It remains on the books to this day. What is so striking about this remedial class is that Harvard is so unlike most other post-secondary institutions in America. In 2015, Harvard accepted fewer than 6% of its applicants—fewer than 2,000 of the more than 37,000 people who applied. These are some of the most accomplished applicants in the world—maybe in the history of the world. And Harvard still needs to offer remedial writing courses for some of them.

It’s worth asking why, and the simple answer is that students need reading, writing, math, and study skills instruction. Notwithstanding the political value of remediation in arguments about post-secondary education, the history of remediation helps us to recognize that basic skills are not basic in the sense of being prior to more complex instruction—they are basic in the sense of being fundamental to all instruction. Remediation is not going away any time soon, and perhaps counterintuitively, the final expulsion of remediation would be a bad sign for higher education because it would signal the end of aspirational standards. Remedial education allows universities to serve student needs and pursue higher standards, and I think we would do well to recognize that.

Friday, October 30, 2015

In, On, and Of Depression

I have suffered from clinical depression, to a greater or lesser degree, for at least the past two decades. Within that time, I have generally been fine (or at least functional), but I have also experienced a couple of prolonged episodes of severe depression, one of which I have recently started to emerge from. If I had to guess, I’d say the first signs of this episode started sometime in the late summer of 2013 and got progressively worse over the next year. But the very worst of it—which included severe anxiety and insomnia—started in August of 2014 and lasted for about 8 months. The major causes are pretty easy to point to: (1) a close friendship ended in a dramatic and protracted way, and (2) I was under a tremendous amount of stress as a couple of high-stakes deadlines coincided with being on the job market (another indescribably stressful process, at least for me). These two extreme stressors (along with any number of other, less momentous stressors) resulted in the anxiety and insomnia. In short, I was constantly stressed out, exhausted, and anxious, all of which worked together in a destructive feedback loop. It was a really tough period, and though it started to improve significantly last April, I’ve only really started to feel like myself again in the past month or so (and that intermittently).

The funny thing (ha!) is I didn’t even recognize the last two years as a bout of depression until after the worst part was already over. (Actually, I didn't even recognize it then--my mom pointed it out.) I knew I was stressed and tired and anxious, but there were all sorts of things I could point to as the direct causes of my distress. In retrospect, however, the distress did not go away when the apparent causes did. In fact, looking back, I recognize whole stretches of time I lost to avoidance…of anything and everything. I would hole up in my office, turn off the lights, and basically hide. How anything got done in those two years, I don’t think I’ll ever be able to fully explain.

The point of reflecting on these past two years is not to solicit sympathy, but to mark a point of progress. There is therapeutic value in being able to identify a period of real misery and begin to pull at some of the various threads. What were the causes of the misery? What were the symptoms and the signs, especially the early signs of bad things a-comin’? What could I have done differently? Sooner? With more care? What was good despite the misery? And what were the consequences? No need for comprehensive answers to any of those questions here. Lots of people have written about the personal toll of depression. I’ve taken a lot of solace in the essays, blog posts, cartoons, and revelations of fellow depressives, especially Allie Brosh’s “Hyperbole and a Half” (part 1), (part 2) and the Bloggess’s blog (part everything). Lots of commentators also note the collateral damage of depression for people near and dear to the person suffering. I won’t rehearse those things here; they have been well rehearsed elsewhere.

I do, however, want to note one consequence I’ve only recently recognized, which has to do with the effects of my depression on other people. Specifically, I missed/passed up a lot of opportunities in the past few years to offer comfort to other people who were in need of a kind word or gesture because I was so busy trying to keep my head above water. At first, I thought of these missed opportunities as a form of selfishness, and the reason is pretty obvious: when you’re depressed—even mildly depressed—sometimes you just don’t notice other people’s suffering, even when their suffering is severe. When every moment of the day requires crisis-level triage, you overlook a lot of things that aren’t your crisis. Such self-interest may be justifiable for someone struggling with depression, but it still wears on the people around you.

Even when you do notice, it can be hard to muster the energy to attend to other people. Without going too far down the rabbit hole, I think Christine Miserandino’s “Spoon Theory” is useful here. Spoon Theory essentially says that people struggling with mental and/or physical disabilities have to be very careful about how they ration their energy. Sometimes running to the store for a greeting card, writing a note, and sticking it in the mail can obliterate an entire day’s spoons. When I'm depressed, I get stingy about caring for others, even in relatively meager ways, because I’m hoarding spoons, knowing full well that if I spend them too quickly or too frivolously, I’ll regret it later. And once I start hoarding spoons, it becomes increasingly easy to justify hoarding. It becomes habitual, made ever easier by the fact that I don’t always notice other people’s suffering to begin with.

But when it comes to comforting others, the real tragedy may be in those times—more common than they might seem—during which you (that is, I) notice someone you care about who is in need of comfort, you have the energy to comfort them, but you convince yourself they wouldn’t want to hear from you. In other words, you convince yourself you can’t be a source of comfort to people you care about, and even that your comfort may cause them additional pain. (Can I stop here to say: just typing out those last two sentences really sucked.) In this case, it's not selfishness at work--it's a weird, stupid, self-destructive selflessness. It's a huge loss for everyone involved, and it's usually invisible to everyone (as well as another feedback loop).

Emerging from depression has been (and is) a very slow process for me, in part because I feel like I have to relearn some pretty basic human interaction stuff, the muscles for which have atrophied. And taking stock at points along the way is helpful. And sometimes writing stuff down is a good way to remind myself that the cock-ups that result from depression are not elemental failings that spring forth from the unalterable core of my being (unlike melodrama, which is apparently part of my very fabric). And then noticing that I spent the time and energy to write something down helps to reassure me that I’m continuing to emerge. And hopefully that will help me to remember how to do some of those human interaction things, like comforting the people I care about when they need it. All of this to say, if I err on the side of over-comforting you in the next couple of months, or if I'm just generally weird, I’d appreciate it if you’d chalk it up to relearning how to be a functional human person.

Wednesday, November 26, 2014

On Sadness after Ferguson, With Apologies to All the Victims of Injustice

For many years, prominent activists have meticulously documented the structural racism, sexism, classism, and other pervasive, systemic injustices that define the lives of Americans--all Americans, whether to their benefit or to their detriment--systemic injustices so well-worn that they can be hard to see even for people who suffer the consequences. 

Of course, people have been documenting structural, systemic injustices for centuries. Although it undoubtedly started many centuries earlier, I'm thinking of Mary Wollstonecraft in the late 1700s and Cherokee activists in the early 1800s; I'm thinking of the Seneca Falls women's rights activists in 1848, and Sojourner Truth, and Ida B. Wells; I'm thinking of W.E.B. DuBois, and Alice Paul, and Wilma Mankiller; I'm thinking of Martin Luther King, and Malcolm X, and Fannie Lou Hamer, and Gloria Steinem; I'm thinking much more recently of Ta-Nehisi Coates, and Anita Sarkeesian, and Tim Wise. The list is endless, and it is filled with people whom I respect and trust because the work they're doing to document and confront injustice is hard, and thankless, and even dangerous.

I often take comfort in these people--that they exist and that they have taken responsibility for confronting the persistent march of injustice that results in death, hate, violence, rape, theft--usually in the lives of people who are already disenfranchised. I try to emulate them when and where I can. But while there is often comfort in knowing that social justice activists are at work, and there is therefore a horizon against which to (try to) orient my own actions for social justice (however meager they may be), I find that today I cannot muster anything but profound sadness. 

I am sad that so many people in Ferguson and around the country today feel like the only recourse they have for voicing legitimate concerns about the deaths of their loved ones is rage. I'm sad that I think they're mostly right. 

I'm sad that people will wake up in Ferguson today and tomorrow to the smell of smoke and burned out buildings. I'm sad that I feel helpless to do anything more than post FB links and talk to my children and teach about structural racism in my classes, knowing full well that the people who most need to hear the message are the ones most likely (and able) to ignore it. 

I'm sad that people I know and love, as well as people I've never met, can't hear the expressions of deeply held pain and sorrow in the narratives that so many people--people of color and women and other human targets of systemic injustice--tell about their daily lives. I'm sad that someone, somewhere is profiting off of all this sadness and happily stoking the fire while people die and cities burn. 

I'm sad that I catch myself backing away from saying anything because I don't want to make people uncomfortable, even when I purport to be (and try to be) an advocate for social justice and an ally to the activists who do the real work of confronting injustice. I'm sad that I don't even know how to talk or think about any of it without feeling despair and anger and fear and utter shame.

And I know that my sadness is meager--disproportionate to any injustices I personally face and unequal to the demands of social justice that I desire. I know, in fact, that my sadness in this moment may even be mawkish and insulting to the people who are facing injustice, whether as target or documentarian or both. And I know as well that my brand of sadness is a form of luxury, available only to the people like me who can step away from systemic injustice long enough to wallow. In the end, my sadness will not change anything, and I know this, too. I can even admit, as much as I do not want to, that my sadness may even be detrimental to the cause in some way. But sadness is, at least for now, what I have. 

Someone someday may accuse me of not doing enough to advance the cause of social justice, and I have to live with the reality that they will probably be right. But I hope in that moment, people will not say I did nothing. I hope I will earn at least that caveat. For now, I don't know what else to do but wallow and post links and talk to my children and teach my classes and be sad. Tomorrow I'll try again to do something better.

Tuesday, August 19, 2014

Why College Writing Matters



“Why College Writing Matters”
Ryan Skinnell
UNT Writing Program Orientation
August 20, 2014

Those of you who know me, and especially those of you who have taken my classes, know that I am pretty cynical about composition. My invitation to talk today about “Why College Writing Matters” may therefore seem like the height of irony. But obviously I don’t think it is, otherwise I would’ve had to politely decline Dr. Yeatts’s gracious invitation. In fact, although I am cynical about composition, I am also rather idealistic about why college writing matters, and especially why teaching college writing matters.

For me, the crucial distinction to be made at this point is between composition and writing. The former, to my mind, is a uniquely institutional creature, caught up as it is in discussions about FTEs, assessment reportage, and degree plans. Probably synergies and deliverables, too. Composition in this sense is 150 years old, give or take. Writing, on the other hand, is a couple millennia older and it is neither organically nor necessarily linked to institutions of higher education.

Here’s the other thing about writing—it’s really hard to define in a satisfactory way. I can say, for instance, that my 3-year-old daughter is learning to write just as truthfully as I can say that I am still learning to write. The common thread that ties these two usages, and many others, together is that writing is a form of cultural participation. My daughter is in the early stages of learning to participate in a broadly literate culture; I like to think I’m a little further along in learning to participate in a series of much more focused professional, public, and personal cultures.

Both composition and writing matter, in their own ways. Both, in fact, are absolutely necessary for our existence as college writing teachers. But unlike composition, writing matters in a way that we—as writing teachers—can actually affect on a regular, individual, personal basis. My idealism about writing, then, is grounded in a deep-seated optimism about the possibility of participating meaningfully in culture and about the possibility of helping students to do so, as well.

The remainder of what I have to say today is essentially an amplification of this theme. I’ve broken it down roughly into three sections to tease out what I think are some useful distinctions as we approach the beginning of a new semester. First, I plan to talk more specifically about teaching; second, about culture; and third, more properly about today’s theme, why writing matters.
           
First, teaching. I hope you will forgive what is sure to be a classic case of preaching to the choir. I am quite certain that we all believe teaching is important—some of us maybe even think teaching is virtuous. No doubt this belief is reinforced by how challenging good teaching is. In a recent essay for Slate magazine, former Yale professor William Deresiewicz writes that [QUOTE] “College teaching, like any other kind, is a slow, painstaking, difficult process” [UNQUOTE]. It is complex, uncertain, and often ambiguous. [I warned you that I would be preaching to the choir.]

The thing is, teaching is also fundamentally a hopeful process. We believe that students can learn, and in so doing, can be better, smarter, happier, and more successful people. And we believe that we can help. 

To be sure, the slowness and difficulty and painstakingness that Deresiewicz talks about can make the hopefulness hard to maintain. As you’re all undoubtedly aware, when you spend days and weeks and months with students who seem not to get what you’re trying to teach them, it is hard not to come away feeling a little frustrated—maybe even occasionally dejected. And even when teaching works well, the evidence of success may be invisible for weeks or months or years—which can mean “invisible forever” to teachers whose students move on after a semester or two. Yet, despite all the reasons to be frustrated or dejected—and there are plenty—teaching remains at its core an optimistic endeavor.
             
Recently, I had to go see my doctor to update some vaccinations. This was the first time I’d been to see this doctor and he asked me what I do for a living. I bet you can guess what his response was when I told him I teach writing. Right on cue, as if reading from a script: 

“Oh, that’s fantastic. I don’t have to tell you” [BUT OF COURSE HE DID], “Kids today can’t write. It’s all text speak and tweeting.” Etc. etc. etc. 

I have a standard response to these kinds of conventional complaints about “kids these days.” I always respond, “It’s a good thing they can’t write—otherwise I’d be out of a job.” Depending on how snarky I’m feeling, I might also add, “After all, if they could already write, they wouldn’t need college, would they?” 

I relate this anecdote for two reasons. The first, of course, is that it’s a common experience for writing teachers. You could replace doctor with “stranger on an airplane” or “grocery bagger” or “President of the United States” and predict much the same response. Heck, you could replace doctor with “President of the university” and the script would probably stay the same. I find this routine intensely aggravating, and many of the teachers I know do as well. 

But the second reason I tell this story is the reason teachers tell it to each other all the time—it reminds us how hopeful we are about teaching and about students. It illustrates what I think is the hopeful difficulty of teaching—and especially of teaching writing. I am aggravated by the conventional “Kids these days” script because I believe that what I do is worthwhile. And I believe it is worthwhile because I am really, truly, deeply optimistic about students. 

I hope that those of you who have taken classes with me (or will be soon) know this to be true. For all my theatrical blustering, I know that students are enriched by education and by good teachers (a designation to which I fervently aspire). I am hopeful FOR students and I am hopeful BECAUSE of them. I’ll wrap this section here with a quick summation lest I start gushing—teaching is intrinsically an optimistic undertaking, even when it is otherwise burdened by frustrations and delayed gratification.
             
Which brings me to section two: This section will be both shorter and less optimistic than section one. My academic background is in rhetoric, and it is my unequivocal belief that the study of rhetoric is also fundamentally optimistic. Rhetoricians are relentless in their pursuit of positive change through the power of language. We’re practically a bunch of Pollyannas. 

Except, of course, the insistence on bringing about positive change also incessantly directs rhetoricians’ attention to all the negative things that need changing. As it happens, culture—broadly defined—is a roiling mess of negative things that need changing. Cultures—more narrowly defined—are also rife with things that demand positive change. And, as we see in any number of spots around the university, the community, the nation, and the world, the need for positive change is intense. It is also interminable and controversial.

I suppose this, like the previous section, is preaching to the choir. Nevertheless, it returns me to my definition of writing as cultural participation. What we are enlisted to do as writing teachers is to help our students make sense of and thoughtfully participate in cultures that are roiling messes in need of positive change. So here’s where we are so far: teaching is an optimistic pursuit, and culture is intensely fraught. Based on my definitions, then, teaching writing is a grand paradox.
           
You have no doubt noticed a theme in my talk by now. To wit: the abundant rewards of Teaching…Writing…and Teaching Writing are always complicated by serious obstacles. But, of course, another way to say this is that the slow, painstaking, difficult process of teaching writing is always accompanied by serious rewards. This, to my mind, is precisely why college writing matters. College writing matters because, done well, it is an invitation for students to encounter the complex challenges of the worlds they live in—to participate in them—by holding out the possibility of actually making positive cultural change. 

Granted, this is not a one-to-one transaction. College writing does not necessarily make positive cultural change. In Writing at the End of the World, Richard Miller confronts this uncomfortable truth directly. He writes, [QUOTE] “The dark night of the soul for literacy workers comes with the realization that training students to read, write, and talk in more critical and self-reflective ways cannot protect them from the violent changes our culture is undergoing” [END QUOTE]. His examples are of 9/11, Columbine, and pervasive sexual assault on college campuses, among others. We might update them to include Afghanistan, Ferguson, and countless more. 

Not cheering stuff, and not likely to be solved by 15 weeks’ worth of exposure to Ancient Rhetorics or The Shallows or a half dozen progymnasmata exercises. But Miller is a committed teacher, and as such he cannot help but betray his hopefulness. Miller is unpersuaded by claims that reading or writing will empower students, or make them good academic arguers, or save them from the blight of modern life. He’s not a romantic. Nevertheless, he sees real value in teaching students to [QUOTE] “use writing as a technology to think with” [END QUOTE].

At the risk of oversimplifying Miller’s subtle arguments, I read him to say that writing is a way to explore, confront, and engage in the concerns that shape our lives—not just students’ lives, but our lives as well. College writing may not solve the world’s greatest problems, and in fact it may be one of the world’s problems, depending on who you talk to. But college writing nevertheless matters because it can help students make sense of and thoughtfully participate in cultures that need positive change. And college writing matters because it does not abandon those processes for students to figure out alone. As I said earlier, whatever the frustrations, teaching writing matters because writing teachers can contribute meaningfully to students’ participation in their worlds through regular, individual, personal interactions.
           
Perhaps it would be worthwhile here to reframe this point about why college writing matters. My doctor thinks college writing solves problems—this is the crux of his complaint about “kids these days,” with their twitterizing, and their text messaging, and their fashionable ankle monitors. But of course he is exactly wrong. Or more precisely, solving problems is the purview of composition—which is assessable and achieves outcomes. 

As I said in the beginning, composition matters, but composition is not writing. College writing, in the way Miller invites us to see it, doesn’t solve problems so much as reveal them. Teaching, in the way Deresiewicz invites us to see it, likewise reveals problems. In this sense, the paradox of college writing is its greatest strength. The paradox of college writing exemplifies the hope and difficulty of teaching writing more generally.

And, of course, it’s a beginning, not an end. After 15 weeks of strict drilling, my students would probably not impress my general practitioner with their erudition, nor probably even their punctuation.

And what I am asking them to do is much harder in the long run than excising dangles from their modifiers.

And they may choose not to accept the invitation, and they may defer the invitation until long after we’ve parted ways.  

And in the end, of course, the composition creature must be fed. Miller again: [QUOTE] “Students must attend; teachers must be present; papers must be solicited, assessed, returned. And, regardless of what actually happens during this process, instruction must be said to have occurred” [END QUOTE].  

And all of these things conspire to undercut the hopefulness of teaching.  

And yet they are exactly why college writing matters. Because while the college writing classroom is certainly not the only place that students are invited to discover the problems of the world around them, it remains, against all odds, a hopeful place for discovering ways to participate in the process of solving them.

Monday, January 20, 2014

The Academic Job Market--One Thing I Wish I'd Been Better Prepared For

(originally written 10/11/11)

I'm watching several friends take on this year's academic job market, and having just come off the market myself, I have one bit of advice I wish had been shared with me earlier in the application process.

The year you spend on the job market is likely to be one of the most emotional of your career to this point. One reason is that being on the market is so drawn out. You spend the summer waiting for ads to be posted, you spend September through December writing and sending applications, you spend the same period waiting for requests for more information or invitations to interview, you spend winter break waiting for MLA, you spend the few weeks after MLA waiting for invitations to interview on campus, and you spend time after that waiting for additional information and/or an offer. If you get a job, you have to wait 3-4 months for it to start; if you don’t, you start the process again. In short, you spend a year of your life in limbo, just waiting for some resolution.

At every step, you will likely experience incredible highs (“Hinterlands University emailed to ask for my teaching philosophy!”) and incredible lows (“Hinterlands University emailed to say they received 400 applications and do not intend to interview me at MLA.”). In fact, it’s quite possible that on any given day you could experience both peak and valley, maybe even several times over.

The emotional rollercoaster is compounded by any number of other factors: noticing a typo in a job letter that’s been replicated dozens of times, seeing a job posting that seems to describe your exact qualifications, not hearing anything from anyone for long stretches of time, being invited to interview, reading job wikis (avoid this if you can—they can be poisonous), getting unexpected encouragement, learning that a position (or positions) was cancelled, getting helpful or enthusiastic feedback on any of your job documents. The list could easily extend for many more pages, and while the factors may change from candidate to candidate, it’s unlikely that anyone, regardless of qualifications and preparation, will avoid the job market ups and downs.

There are as many ways to meet the emotional challenges of the job market as there are causes of potential emotional swings. I relied heavily on chocolate, meals with close friends, and regular contact with people who could put my “plight” in less emotional contexts; other people pour themselves into work or monster movie marathons. Whatever your preferred method, knowing that your emotions are likely to be running high, and preparing ways to meet those emotional challenges, is nearly as important as any other action that you take during your time as a job candidate, especially if you’ve done a good job of preparing to be a candidate during your time as a student.

Thursday, October 4, 2012

On Debates and Disappointments

Every season on American Idol, one of the heavy favorites goes home way too early. Chris Daughtry, for example, was sent packing even though he was apparently far more popular than the three contestants who survived him (including the winner, and subsequent flop, Taylor Hicks). So shocking have the departures been that the show instituted a “judges’ save” to allow the judges to overrule the numb idiots in the electorate (nobody says outright that they’re numb idiots, but the message is palpable). Additionally, the Idol judges regularly repeat the obvious message that “you have to get out and vote for your favorites if you don’t want them to go home.” The Idol example, silly though it may seem, is instructive for thinking about last night’s presidential debate.

To anyone listening to the debate analysis, Romney wiped the floor with President Obama. I’ve heard commentators announce that Romney “destroyed,” “obliterated,” and “dominated” Obama. On the Diane Rehm Show, the three guests, ostensibly representing the entire spectrum of American political belief, agreed that “this format isn’t really Obama’s strong suit.” On MSNBC, Obama’s liberal supporters are like howler monkeys, shrieking about the President’s failure to “show up” and “get the job done.” As on the teevee, there is widespread disbelief and dismay across the liberal social media landscape. And as if that all wasn’t enough, Obama and his team are apparently so demoralized that his handlers barely showed up to spin the debate last night (one Twitter commenter noted 17 Romney surrogates to only 5 Obamanites talking to the press after the debate), and the Obama campaign has been all but silent this morning. Surely, it seems, last night’s debate was a catastrophic blow to the Obama campaign. But I think we forget too quickly the American Idol lesson.

For the past several weeks, Obama has looked insurmountable following a series of devastating gaffes on Romney’s part. The most prominent one, of course, is the secretly recorded video of Romney calling 47% of the country leeches and victims, which has been making the rounds for the past few weeks. The election forecasters at InTrade and the FiveThirtyEight blog have had Obama at a huge advantage to win both the electoral and popular votes (not much has changed since last night as far as that’s concerned, by the way—FiveThirtyEight still gives Obama better than an 80% chance of winning the election). While this seems like a good thing for the President’s chances at a second term, Chris Daughtry can tell you that there’s nothing worse for turning out voters than a sure thing. In fact, the more certain the President’s chances are, the more he has to worry that voters will lose their motivation to turn out on November 6th.

This put(s) the President in a tough position as a debater. Does he, as Chris Matthews and Ed Schultz suggested, crush his opponent on stage to show that he’s the big man on campus? Or does he soft-pedal into the debate, hoping to convince his supporters that the race is still a tight one and he needs their energy, their fervor, or at least their begrudging support come election day? I think we got a pretty good idea of which path Obama decided was the smarter one. It was no surprise, then, when Senior Advisor to the Obama campaign David Plouffe said in an interview on MSNBC that Obama thinks that “in the long run” voters will recognize who the best leader is. Anyone who watched Al Gore’s dominating performance against George W. Bush in 2000, and then watched the electorate revolt against him because mean old Mr. Gore hurt poor Bushie Wushie’s feelings, might recognize the genius of this strategy. I assume it was strategic, for what it’s worth, because, while debates might not be Obama’s greatest platform, there is no evidence that I can see that he or his advisors have ever been anything but masterful at campaigning (whether or not you agree that he’s been great at actually governing). The absence of “messaging” or “spin” following the debate reinforced my sense that Obama’s team made a strategic choice that was far removed from winning last night’s debacle.

As I’ve watched the debate analysis over the past 12 hours or so, it appears the strategy is working. Liberals and Democrats (which are not, by the way, always the same thing) have spent lots of hours and energy rehashing and reasserting Obama’s accomplishments as Commander-in-Chief. Obama’s weakness last night has provided an impetus for renewed energy and commitment on the part of his voting base, in no small part because they’re being reminded (ad nauseam) of his accomplishments over the past four years. On the other hand, conservatives and Republicans (again, not necessarily coterminous) have spent much of the same time lauding Romney’s debate bona fides. For Romney, I can’t help but wonder whether his push to the middle on things like tax cuts, Medicare, and even Dodd-Frank and the Affordable Care Act (“I’ll get rid of Obamacare and replace it with Romney Care on a state-by-state basis”?) will re-energize his voting base or de-energize them. I doubt they’ll be rushing to change teams, but there wasn’t a lot of what the pundits call “red meat” to get his supporters excited about casting a ballot in his favor. For people whose minds were made up prior to the debates, nothing that happened last night is likely to change their minds, but much happened that might have changed their motivation. For those who have yet to make up their minds, I imagine this event (and all the energy surrounding it) will inform their eventual decision, but only mildly, given the inevitable intensity of the next 5 weeks of campaigning.

Which brings me to my final thought, only tangentially related to what I’ve already said. As I surfed Fizzborg, the Twitter, and other haunts of the political commentariat last night and this morning, a theme kept surfacing: these debates were nothing but political theater, and therefore a waste of time. I find this assertion somewhat distressing for a number of reasons, but I will restrict myself to just two of them here. First, a large number of people claiming the irrelevance of last night’s debate are also, in other circumstances, wont to claim that they’re troubled by Americans’ widespread disconnection from politics and aversion to democratic civic engagement. This is especially the case among academics, and yet the sort of abdication I saw repeatedly in regards to the debate made me wonder: If we, the people who are supposed to be professionally, critically engaged in this type of civic event, can’t get invested, how can we be surprised when our fellow Americans don’t engage either? In tuning out, it seems to me, we risk performing the failure of our own beliefs.

Which leads to my second point. The fact is, a lot happened last night, and the baying of the pundit class was as much a part of what happened as was the lunge-and-parry onstage. The debaters said little and meant even less, but to dismiss it all as meaningless theater is a tremendous mistake. This is the political environment in which we live—strategy, messaging, and manipulation are (and have always been) fundamental parts of the process. We act surprised and dismayed, but that’s because we’re playing at being willfully naïve, not because there’s anything so surprising. In fact, there was nothing, and I mean nothing, that was truly surprising last night. In the absence of some astonishing revelation, it is easy to dismiss the debates as meaningless. But it is exactly the absence of revelation that invites us, and every potential voter along with us, to consider more deeply the “meaning” of a series of debates that seem absent of any meaning. A tremendous amount of time, energy, and emotion was invested in last night’s event, and to pretend that it was wasted because (1) the debaters weren’t honest or forthcoming enough or (2) the exchange didn’t provide enough revelatory fireworks is to become the numb, mindless sheep that each party is fond of accusing the other party of catering to.