I have seen many discussions of the use of artificial intelligence in the classroom, especially lamentations from college instructors.
A frequent comment concerns the theory that AI writing apps favor em dashes, so the presence of multiple em dashes in a student's essay raises an instructor's suspicion. Many instructors say this makes them sad -- because they like to use em dashes too.
However, I have not found that to be a reliable marker of AI use. More telling, to me, is the presence of clinical topic sentences in every paragraph. By clinical I mean sentences that are precise and concise, unlike the rest of the prose AI tends to produce, which is filled with adjectives, adverbs, and other efforts to sound erudite.
Few undergraduates have mastered the topic sentence, especially when they are writing about narratives (I teach literature at the university level). Their sentences tend to be focused on the plot rather than the point of their analysis. I would venture to say that many professional academics have not mastered the topic sentence.
In my classes, I would say the biggest indicator of the influence of ChatGPT, Gemini, or whatever is the substance of the essay rather than the style. Essays produced by AI seem like empty shells to me.
In my class, I emphasize this formula for essay writing: claim + evidence + explanation.
Added to this formula is the guideline that explanation should be longer than evidence. For instance, if you cite two sentences from a text, you need to write more than two sentences explaining their meaning and their significance to your claim; this needs to be true explanation, not simply summary or restatement.
AI writing tends to be big on claims and short on evidence. Sometimes it offers explanations, but without evidence those explanations ring hollow. (Think of the adage: all hat, no cattle.) The claims are often sophisticated and heavy on adjectives and adverbs, as if the writer were familiar with the topic in a way that undergraduates rarely are. For instance, describing a story in laudatory terms can suggest the student knows the author's other work, the genre, or the work of other writers to whom this author can be compared.
During the fall semester, my encounters with AI-produced essays reminded me of Bloom's Taxonomy, which orders thinking skills from foundational to advanced: remember, understand, apply, analyze, evaluate, and create.
AI writing frequently demonstrates evaluative thinking without providing the analysis, application, or understanding that should form its foundation. When it does make analytical claims, it rarely supports them with that underlying application and understanding.
For instance, in the last batch of papers for my class on the history of the American short story, I asked students to explain an element of fiction (understand), discuss how this element appears in two short stories we read (apply), and explain how the element is used to create an effect on the reader (analyze). I did not require them to judge how effectively the element is used (evaluate).
Essays that I suspected involved AI made big claims about the themes or effects of a story without providing a clear explanation of the element being used for analysis, without evidence from the text that demonstrates that element, and without explanation of how those textual moments support the claim of a particular theme.
(When evidence from the text is provided, it occasionally consists of fake quotes.)
A sign that suggests to me AI was involved (in the absence of fake quotes) is when an essay is heavy on analytical conclusions without the supporting application: claims without evidence and explanation. A bigger sign is the presence of sophisticated evaluative statements without corresponding analysis and application.
For instance, several essays this past semester discussed "grief" as a theme in Raymond Carver's "Why Don't You Dance?" Students claimed that a character's actions were driven by grief without explaining what he might be grieving (the story famously never states this explicitly; little in that story is explicit). They claimed the character's odd actions were the product of grief without explaining how those actions could be related to some kind of sorrow. They did not question the discussion of grief in what AI provided for them, possibly because they had not done their own thinking about the story.
It seems grief has been discussed in relation to Carver's story many times elsewhere, because if you ask ChatGPT or Gemini to write about "Why Don't You Dance?" you will get responses that discuss grief. AI mentions grief because it notes the frequency of that word in discussions of this story, and it is likely using that word will satisfy the prompt it has been given.
Academic writing is like math in a classroom: you need to show your work. AI writing provides answers, but it rarely shows how that answer was derived from the text being analyzed.