ENG 429 7.T: Graham Project Reflection

Today’s Plan

  • Thursday in the lab: Helping out a Friend
  • Changes to this Project
  • A thing I wrote

Thursday in the Lab

I think I mentioned that my friend and former student, Dr. Kristen Gay, is doing some research on AI. I want to spend 15 minutes at the start of Thursday’s class responding to her survey.

Changes to this Project

Originally I had planned for you to write a response to Graham after the current reflection paper. I’m going to cancel that so we can begin the next project.

Instead of writing a response to Graham, I’ll ask you to incorporate that response into your reflection paper. I’ll also ask that you submit two versions of this paper–the longer one that you want me to read, and a shorter one (one page, single-spaced) that you will read to the class next week. I’ve already written my longer one, and I’m going to read it to you today.

Just Because the Machine Can Produce writing Doesn’t Mean It Is Writing

I started drafting this on Friday, when I was supposed to be grading. I was going to take 15 minutes. Instead I took 3 hours, then 5 (sorry 301 students, I didn’t get to your papers). I’ve swung back to it a few days later. I should still be grading, er, reading and responding to those papers.

First, a bit of background. In my 429 Rhetoric and Technology seminar, students used AI to generate papers, via an assignment I revised and further developed from one Scott Graham shared in 2022. We then scored those AI-generated papers using the rubric the English department uses for institutional and state program assessment. To help spur reflective thought on the project, I distributed a survey that asked students to respond to a conclusion Graham drew after conducting a similar experiment in 2022. Graham concluded:

“AI-generated essays are nothing to worry about. The technology just isn’t there, and I doubt it will be anytime soon.”

After our work this semester, I’m not sure I agree with Graham. As I discuss further below, our experiment has demonstrated that the quality of AI writing largely hinges upon the quality of the human writer prompting it. Producing excellent writing (according to our state standards) requires quite a bit of prompting, an intimate working with the machine. But the technology is closer, let’s say, than it was in 2022. And I think it will “be there” sooner than most of us writing folk would like.

I find myself in an odd institutional position. On the one hand, I’m by and large responsible for thinking about professionalization and career readiness for students. As I touch on below, I expect proficiency with AI to appear more frequently in job advertisements. On the other hand, my theoretical interests include people like Walter Ong and Gregory Ulmer, and I’ve thought and continue to think about how electracy (digital technology) challenges some of the ontological, epistemological, and ethical elements of literacy. I’ve already done some of that writing, and I hope to do more soon.

In part what I write below is a reaction and a response to Megan McIntyre and Maggie Fernandes’ podcast “Everyone is Writing with AI (Except Me).” It is an excellent podcast and worth a listen. In short, I challenge their interpretation of what it means to be critical about AI. I don’t think you can be critical of AI if you are not using it. This isn’t to say that their skepticism is ungrounded: the environmental, economic, and potentially racist dimensions of AI are really concerning. But if we are to steer the implementation of AI into our campuses, we will have to have hard data, or at least soft experience, to ground our positions. Two side notes and then I’ll end this preface.

  • Side note #1: I don’t think you will find the kind of racism that Noble describes in ChatGPT and other advanced LLMs because of guard-rails OpenAI has developed. That doesn’t mean it isn’t enforcing a putrid form of white, standardized, academic English. I have a few students experimenting with ChatGPT this semester on this issue, and they report that it is pretty difficult to get the machine to say something problematic. I know this was a problem with earlier releases, but OpenAI is investing a lot of labor into guard-railing GPT.
  • Side note #2: As I discuss below, it doesn’t take much direct prompting to teach the machine what you want your English to sound like. I think it is much easier to address the issue of language diversity than most of the others, although that might mean “selecting” a voice rather than developing one. This semester, our 429 seminar has been pushing against the machine, testing what it can do and exploring what it cannot. In short, if you feed it some of your *own* writing, it can quickly learn to mimic that. Your voice can be its own (apologies).

My students are writing reflection papers on this first project, and below is what will count as mine. So here are three reasons for being concerned about the future of artificial intelligence and writing (both academic and professional). And a coda on what learning should mean.

#1 Cheating will become a REAL problem. Not for every student. But for every student who doesn’t care about learning, cheating will be as easy as breathing.

One student in our class, Luna, composed a pretty amazing paper using AI. Beyond its argument–which is smart–and its use of sources–which is uncharacteristically sharp–Luna’s paper had a real “voice.” Luna’s ability to generate a voice is a concern, certainly on the “cheating” front. How did she do it? By feeding the machine a short sample (say five pages) of her own writing and asking it to mimic her voice. That’s it. And it worked. Really well.
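For the technically curious, the technique described above can be sketched as a small, hypothetical prompt-construction helper. To be clear: the prompt wording below is my assumption for illustration, not what Luna actually typed; the resulting messages could be sent to any chat-based LLM.

```python
# Hypothetical sketch of the style-mimicry prompting described above:
# package a few pages of the writer's own prose as a style example,
# then ask the model to draft the assignment in that voice.
# The exact wording is an assumption, not what was used in class.

def build_mimicry_prompt(writing_sample: str, assignment: str) -> list[dict]:
    """Build chat-style messages that ask a model to imitate a writing sample."""
    return [
        {"role": "system",
         "content": ("You are a writing assistant. Study the sample below and "
                     "imitate its voice, rhythm, and diction as closely as you can.")},
        {"role": "user",
         "content": (f"Here is a sample of my writing:\n\n{writing_sample}\n\n"
                     f"Now, in that same voice, draft: {assignment}")},
    ]

# These messages would then be passed to a chat-completion endpoint.
messages = build_mimicry_prompt(
    "Five pages of my own prose...",
    "a two-page reflection on Graham's conclusion",
)
```

That it takes so little scaffolding, just a sample and a sentence of instruction, is exactly what makes the “cheating” concern real.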

This troubles me because voice and personality are probably the best dimensions we (veteran writing instructors) have for identifying cheating. Just like when someone drops a quote in the middle of a paper, we can hear it. Voice isn’t the only weak point my class and I identified for spotting AI writing. For instance, the machine has a general inability to deal with a direct quotation, both in terms of contextualizing material and (even more so) doing any analysis of a quote. Getting the machine to do those things requires a large amount of prompt engineering, directing the machine to react to words or to think about how a quote resonates with a previous part of the paper. (That’s another weakness: AI writing shows almost no intertextuality without extensive prompting.) But here’s the thing–developing writers struggle with those elements too, especially citation. In fact, we (Composition Studies) have an incredibly in-depth and longitudinal research project, the Citation Project, that shows that developing writers (those in first-year required classes like ENG 122 or ENG 123) really struggle to *meaningfully* incorporate quotes into research papers. Folks hypothesize that this comes not just from laziness or lack of investment, but also from a decline in critical reading skills. I’ll circle back to that below. For now, I will just say that if the machine can already develop a voice based on a sample of writing, then detecting cheating will be damn-near impossible. If it can learn to be a “good” writer like Luna, then it can also learn to mimic being a developing writer. Let’s assume a future in which detecting AI writing is impossible.

But here’s the real thing: I don’t care if I can’t detect it because I really don’t want to be a cop (and the MLA/CCCC doesn’t want us to do that either). I don’t want to surveil and police writing.

I haven’t had to write a teaching philosophy in a long time. But if I am spit-balling what is important to me, it is to reframe the classroom as a space of opportunity for people to do things. Sometimes, those are things that they want to do; other times they are things that I think they need to try (in order to become better humans, better thinkers, better Writers, or better citizens; rarely do I care if they become better students or “writers”). Long ago I stopped using the word “students” whenever I can manage to be thoughtful and careful; that word sustains a power imbalance between me and the people with whom I work. Let’s be real, there is a power imbalance–there’s a grade book that only I have access to. But I can, through assessment strategies and assignment design and direct communication, try to offer them the opportunity to “take the power back.” Or, at least, to take responsibility for what they want to learn and do. There are some courses, like ENG 328 (Graphic Design) or ENG 301 (Writing as a Job), where I feel a greater obligation to “discipline” (Foucault) students, to alert them to external expectations that should become a part of their internal process and self-regulation. But ultimately, I teach the rules so that you are self-aware of when you want to break them. If you want to design a flyer for a poetry reading in Comic Sans, by all means knock yourself out. But know that I will mock your choices. Also, know that you are free to make them. I’ve adopted ungrading as a way of making my classes a safe space for experimentation, potential failure, reflection, and growth. Grades are bullshit for many reasons, but primarily for me because they tend to inhibit all of those dimensions of learning.

That’s a lot of writing, and I’m probably feeling self-conscious here. I think I wrote all of that because I want to stress that I am not someone who wants to police “students.” I want to develop environments in which people can learn. But I am also postpedagogical. Pedagogy is the word we (folks who study/care about teaching) use to describe not “what” you teach (that’s curriculum), but “how.” Cicero might be the first postpedagogue. He wrote that “the greatest impediment to those who want to learn are those who want to teach.” (Well, he actually wrote that “The authority of those who teach is often an obstacle to those who want to learn” and I’ve been misquoting it for years now. Whatever.) “Teaching” too often is a telling, an ordering. A telling of what to do. An ordering of a chaos. It makes this whole thing, this system, this building, this class, seem so simple and efficient and possible and worth investing in. I show you what to do and then you do it. And if learning isn’t happening, then either I am a bad teacher or you are bad students. We (everyone, always, already) are the failures, because the system is just so good.

But if you’ve been around this place as long as I have, then you know that is bullshit. That is not how any of this works. Sometimes, maybe it seems that way. But I assure you it isn’t (and I will not, I repeat I will not lecture about phenomenology and how you have taught yourself anything you have learned, anything that out of experience you have transformed into theory and repeatable practice. I will not talk about phenomenology and consciousness and metacognition. Nope). A mentor used to say that he knew a person had arrived when we (emerging scholars) surpassed our self-perceived reliance on him, when we no longer needed him, when we overcame him. It wasn’t a battle metaphor, or if it was, it wasn’t about us defeating him as much as us conquering a preliminary stage of imposter syndrome. We recognized that we could do it on our own. We carved our own path. I’d like to say that we didn’t just produce some writing, but we did some Writing or maybe even experienced W-R-I-T-I-N-G.

A concern I have, and will work out more fully below, regarding artificial intelligence is that it undermines learning; it just writes. So much writing, done for us. “Good” writing in the sense of White-Standard-Academic-English. But if it writes, then no one is learning (except it?)–no one engages that messy and painful process through which we (humans who accept the struggle, the frustration, the fight, the work) learn that we can do things. Hard things. I’m not just talking about learning to craft worlds with words or move mountains with metaphors. Here, I’m talking about learning about our own capacity.

But first let me conclude this section emphasizing that if we (back around to teachers who care about learning) are going to make a place for Writing and W-R-I-T-I-N-G in our classes, then we are going to have to convince students, those people paying to sit in our classes, that they are worth fighting for. They are worth the frustration and the effort and the pain and the frustration did I mention the frustration of, as Jim Corder describes it, reducing the wild infinite possibilities of existence down into dumb, frustrating, inadequate words. We have to trust that students want to experience and learn, rather than design systems that “ensure” that they will (as if that’s even possible). We can no longer “order” their best interests. Learning outcomes will be sales pitches more than strictures.

#2 “Who cares if it works?” That’s a line from RoboCop, a hyper-violent sci-fi action movie from the late 1980s. In the movie a corporate executive designs an automated police robot in an explicit effort to make policing more cost efficient. There is a demonstration that goes woefully wrong. After the “disappointing” demonstration, the CEO orders the production of another prototype, the titular RoboCop. In a later scene, in the men’s room, a metaphorical pissing contest takes place between the designers of ED-209 and RoboCop. A snippet from ED-209’s designer:
“I had a guaranteed military sale with ED. Renovation programs. Spare parts for years. Who cares if it worked or not?”
There’s a real corporate, economic history that underwrites that line (I think particularly of the Ford Pinto and the cost-benefit analysis that prompted Ford to let people die in crashes because it was cheaper to pay insurance payouts than recall and fix the cars). I have little faith in this era of late-stage capitalism that anyone will do what is good for writing. “Who cares if it works?” Is that even a [rhetorical] question?

I’ve been haunted by this scene so many times while in academia. There’s an old learning management system, Blackboard, that absolutely resonates with that lack of care. At UNC, I’ve used so many digital project management systems that clearly lack any investment in user experience (looking at you, Slate). For our purposes here, those lines foreground my concern for large language models like ChatGPT.

Overall, I think people who struggle with writing would think most of our papers are marvelous. Such “good writing.” Those are the same people who don’t want to pay for writing. Because of those people, business faculty at this school make over 100k and I make 66k (after my “big” summer raise!). I’ve long thought that people who *can’t* write or play music or draw or make art often suppose that those of us who can just have a “gift.” There’s a jealousy that leads to a dismissal of the work. And/or maybe there’s just another manifestation of corporate greed that doesn’t want to pay for labor, one that embraces technology in the worst spirit of Heideggerian efficiency. Whatever. Maybe that’s me making unfair sweeping generalizations out of spite. Maybe. But I believe one REAL reason for concern is that the machine is getting good enough that I don’t think non-experts will see (or “care” about) the limitations that we (writing folk writ large) all clearly do. Or they won’t want to see. Whatever. I have little faith that the folks who hire and fire will truly care about Writing if the machine “works” well enough to sell it to someone who knows/cares even less about writing.

My ENG 301 class found that 11 of 92 jobs this semester called for experience with AI–what will that be next year? Given what I have read and heard from folks working in the field, that number will continue to grow rapidly. I just read a celebratory NPR article on how workers are adopting AI in the workplace faster than previous technologies and research would have anticipated. I’m assured it will be great for the economy. Sarcasm, cynicism, and a general loathing of unfettered capitalism aside, here’s where my institutional position weighs on me–between wanting to encourage learning and recognizing my responsibility in preparing writers, editors, and designers to enter a job market seduced by? infatuated with? invested in? exploring the effectiveness of? the machine.

I hope this “concern,” about AI replacing human intellectual labor, is unfounded. I *hope* that people will see the machine’s limitations. (But then again, “who cares if it works?”) I *hope*, maybe even expect when I am feeling particularly positive, that, due to the machine’s reliance on human metacognition, writers will be in more demand as long as we call ourselves “prompt engineers.” Maybe.

#3 I care about learning. As I wrote above, our Graham Project demonstrated a few general weaknesses with AI. More importantly, we’ve shown that metacognition of writing correlates strongly to paper quality.

So a more longitudinal question is whether we (Rhet/Comp folk) care about where the words on the page came from. Do we care about ideas and process and structure (genre)? Or about words (the product on the page)? In short, does it matter if writing becomes sitting at the machine and asking it to produce words? If the machine frees us from that frustration Corder describes?

I am being somewhat facetious here, because I would argue that the complexity and struggle of putting words on a page affords the ability to think with words (nuance, sophistication, critical dissection). Writing, of course, requires critical reading, and critical reading is learned through writing. Through reading, we gain awareness of both genre/structure (what moves could this writer make) and attention to decision (what moves did she make?).

When I read a block quote, for instance, I am identifying all the elements of the quote that could command response. After the quote, I am focusing attention on what she chose to respond to, because that tells me what she thinks is important. Critical response–my own writing–often begins by wondering what else she might say. This thing is already long enough, but here’s where I could insert some Walter Ong and feel right at home: the idea of thinking in literacy as “deep” engagement, and thinking in electracy as horizontal association.

So much of our disciplinary blood is dedicated to notions of “process.” I remain suspicious of whether our praxis, particularly our assessment practices, actually reinforces that. That’s another conversation for another day, but it explains why I am being facetious here–because the machine is *really* going to put that commitment to the test. The question I want to sharpen here is whether instruction in writing has always been about prompt engineering as rhetorical / genre training. Whether it really matters how words got on the page if the intention behind those words lies in the writer. Who sets the purpose? Who distributes the product? What change does this writing hope to engender? Can we use the machine to democratize writing, to allow more people to create and distribute words that mean something to them and their world? Does it matter if syntax and diction were auto-generated for them? Does rhetoric care less about the choice of medium? As a young graduate student I had an opportunity to talk to Cynthia Selfe at a CCCC conference, and I asked her how she justified her use of digital composing tools and software. And she responded, with some Aristotle, that her job was to teach people to compose via “all available means.” What if the machine is another available means? One that, in the words of Ethan Mollick, promises to “democratize” creativity and expression?

Again, I’m being somewhat facetious here, and making a counter argument that I don’t think I believe (honestly, I’m not sure). But I think the challenge before us–my reason for concern–is to convince people why that matters. Especially those people sitting in our classes and wondering why they have to care about writing.

And I don’t think our current model of K-Higher Ed education, as a Hunger Games style contest of survival and competition, helps. I don’t think streams of standardized tests that rank and file help. I don’t think GPAs help. I don’t think the system in its current form lends itself to making the argument that we have to make: that learning is what is important. The system tells students that grades matter. And, as I have already suggested, I don’t think grades and learning belong in the same sentence.

But to understand what I mean by learning requires a bit of a coda, of a concluding piece that refigures what has come before.

Coda
Throughout this piece I have played with a distinction between writing, Writing, and W-R-I-T-I-N-G. Two things influence these distinctions. The distinction between Writing and writing draws upon the distinction between Thought and knowledge made by Bill Readings in his book The University in Ruins. Readings was writing in the mid-to-late 1990s, attempting to make sense of the radical changes he saw transforming higher education. He increasingly saw schools losing their historic mission (to transmit State/Culture and/or to craft souls). He didn’t lament the loss of those things so much as he worried about the new mission: the creation and transmission of knowledge. Selling answers. Demanding answers.

Readings’ book is fairly complicated and I will fail to do justice to it here. But, damn it, I am going to try. In his final chapters he tries to create a rationale for the University that steers it away from becoming a soulless, bloodless knowledge factory. Instead, he believes, it could reharness the energy of its origins and become a site for questions. For asking questions that stubbornly resist being answered. These questions he calls “Thought.” The University as a site of encounters with Thought.

Below I want to share a few quotes from Readings’ work. More than any other book, I think University in Ruins changed the way I think about teaching, about my role. In part because Readings was invested in Levinas, and his discussion of teaching in terms of obligation (rather than spontaneity, self-realization, what he calls the Kantian Enlightenment inheritance) resonates with me.

More than anything, Readings believed education is, like the Yeats poster on the door to faculty offices, the lighting of a fire (and not the filling of a bucket). When teaching stops being the filling of student brains,

Teaching becomes answerable to the question of justice, rather than to the criteria of truth. We must seek to do justice to teaching rather than to know what it is. A belief that we know what teaching is or should be is actually a major impediment to just teaching. Teaching should cease to be about merely the transmission of information and the emancipation of the autonomous subject, and instead should become a site of obligation that exceeds an individual’s consciousness of justice. (154, emphasis original)
No individual can be just, since to do justice is to recognize that the question of justice exceeds individual consciousness, cannot be answered by an individual moral stance. This is because justice involves respect for an absolute Other, a respect that must precede any knowledge about the Other. The other speaks, and we owe the other respect. (162).
Rather, to listen to Thought, to think beside each other and beside ourselves, is to explore an open network of obligations that keeps the question of meaning open as a locus of debate. Doing justice to Thought, listening to our interlocutors, means trying to hear that which cannot be said but trying to make itself heard. And this is a process incompatible with the production of (even relatively) stable and exchangeable knowledge. (165)
To believe that we know in advance what it means to be human, that humanity can be an object of cognition, is the first step to terror, since it renders it possible to know what is non-human, to know what it is to which we have no responsibility, what we can freely exploit. (189)

I don’t have time right now, but here’s where I can write the thing that I’ve always wanted to write but never quite got around to doing: to thinking about postpedagogy as an ethical relationship with each student in which I try to let them dictate the grounds of our experience.

My favorite line from Readings: “Thought is an addiction to which we cannot break free” (128; see “we are addicted to others,” 190). But the we here–that one is difficult to pin down. There are many people (and a lot of teachers) who are afraid of Thought. Who want or need to order the chaos.

You know you are in a weird place when you need Victor Vitanza to clearly explicate something for you. But here I am. The distinction between W-R-I-T-I-N-G and writing comes from Vitanza’s 2003 essay “Abandoned to Writing: Notes Toward Several Provocations” in Enculturation. Vitanza was charged with examining the relationship between Rhetoric and Composition, the slash: rhet/comp. His (third) sophistic response is to challenge both disciplines to examine what they want from writing, and based on that desire, to recognize what they might force writing to be. In doing so he hopes to reopen the question of what writing itself might want, what plural forms of writing become erased in the name of a controllable, articulable, teachable form of writing. One that can be efficiently assessed. One that shows its value as standing reserve. Vitanza’s prose can be hard to cite–and, frankly, to understand–but it is worth experiencing here:

Perhapless, there are two possibilities here: “We” can start teaching writing precisely as the university needs it taught. Or “we” can attempt “to teach” writing the way “we” want. But there are, let us not forget, third (interval) wayves. And therefore, “we” should ask: What is it that writing wants? I suspect that “writing” does not want what either the uni-versity thinks it needs nor what “we” think we want.

Taken seriously!? In an institution! Writing scares, frightens, threatens institutions! Take, for example, Jean Genet’s writing in prison and Jean-Paul Sartre’s “Introduction” to Genet’s writing. Take Hélène Cixous’s thinking of Genet in “prison.” Think of misprisions. On the contrary, at our institutions, we are taken far more than seriously. So-so-seriously. That’s Why we would be suppressed so that we could dis/engage by wayves of “learning” students to write from their impotence while at the uni-versity and while graduating into yet other institutions! But then, Who says anything like this at any uni-versity! It is, after all, just silly!

I will skip (rocks across the sur-face) what “we” might want writing to want. Writing just wants. Wants, W.ants. It’s not that writing wants what “we” want when “we” know what “we” want! Rather, WRITING WANTS! Just WANTS.
[…]
Yes, “writing” is about sea CHANGE. And that, my d.ear.est, is why we are afraid of the thing called W~R~I~T~I~N~G~, and why we insist on “teaching” writing and IN institutions! I understand that YOU are afraid of the DRUNKEN BOAT.

The DRUNKEN BOAT was an experimental digital poetry publication. That the link in Vitanza’s article is now dead, a 404 error, seems indicative of what happens to W-R-I-T-I-N-G that does not meet the institutional, professional, efficient, “working” standard.

If I synthesize these two influences together, to engage in W-R-I-T-I-N-G is to try and be(come) in proximity to Thought. An experience of a moment in which something emerges that you cannot believe you thought. From where did this idea come to me? From Thought, approached in a moment of W-R-I-T-I-N-G. Writing as a verb, unfolding and happening and “pac[ing] upon the mountains overhead / And hid[ing] [her] face amid a crowd of stars.” (W-R-I-T-I-N-G will always be a woman for me; one I desire but thankfully cannot control). Vitanza reminds us that we cannot disentangle W-R-I-T-I-N-G and desire. Why would you try? Love and passion are not about control, or at least they shouldn’t be.

writing, with a small w, is about reducing the impossible to the assessable. It is about efficiently scoring a paper and the person who wrote it. It feels to me more like a form of police work than an ethical engagement with a person or people. Some people want to discipline, to make writing right. I don’t want to do that. I want to make possible an experience of W-R-I-T-I-N-G.

Writing is reaching to an audience, reaching out, imagining responses that you will likely never hear. Sometimes I write to the living, sometimes I write to the dead. But in both cases the responses are imaginary, others that haunt. W-R-I-T-I-N-G might lurk under the spectral sur-face of those imaginings.

The machine is as far as one can be from Readings’ pedagogy of Others and alterity. All it is, all it does, in Levinas’ sense is “return the same.” It feeds our narrative back to us. The faceless commonplaces that circulate. From its desire to please to its synthetic processing of expected tokens (words) to its white-washing of language into a cold and lifeless spew, there’s no love there. There’s no encounter with an other. There’s no wilderness to explore. It institutes institutional writing, returning answers to questions with confidence and pleasure. It never tires, though your free trial might expire. There’s no surprise, though you might feel the uncanny. Readings reminds us that Thought is non-transferable precisely because it is a disorienting experience of difference that in turn changes us to think about ourselves differently. It cannot be packaged or commodified. It cannot be taught, only learned. The hard way.
