TL;DR: A reader can tell when an AI has rewritten your work. That shift in the text will give your readers pause in the same way generative images do, eroding their trust in what they are reading.
A more academic version of this research will be submitted for peer review, but I wanted to talk about it here as well.
First, a disclaimer: I am hopeful about our AI-supported future. I recognize that it will bring major disruptions to our way of life, and that the knowledge industry will be disrupted in much the same way manufacturing was by the introduction of robots in the 1980s.
The changes this will create won't be simple or straightforward.
Those considerations are for a different post and time.
Right now, I want us to look at where we are.
During the 2023-24 academic year, I began integrating generative AI into my classes. Since I am an English professor, most of this focused on how to responsibly use large language models and similar technologies to improve student writing and assist students with their research.
It was a process that moved in fits and starts, as you might imagine. As the models changed (sometimes for the better, sometimes for the worse) and advanced (as, for example, ChatGPT moved from GPT-3.5 to GPT-4 to GPT-4o), I had to adjust what I was doing.
My biggest adjustment, however, came with the final papers, where I unexpectedly found myself staring into an uncanny valley.
One of the things English professors are trained to do is hear the shifts in language as we read. The voice we hear, or at least listen for, is the unique voice of a writer. It's what makes it possible to read parts of “Ode to the West Wind” and “Ode on a Grecian Urn” and recognize which one is Shelley and which one is Keats.
We don't learn to do this as a parlor trick or to be ready for some kind of strange pub quiz. We learn it because shifts in tone and diction carry meaning on the page in the same way that tone of voice carries meaning for a listener.
Listening for a writer's voice may be the single most important skill an editor develops, as it allows them to suggest changes that improve an author's writing while remaining invisible to the reader. It's what separates good editors from great ones.
For those of us grading papers, this skill is what lets us know when students have begun to plagiarize, whether accidentally or intentionally.
But this time, I heard a different kind of change — one that I quickly learned wasn't associated with plagiarism.
It had been created by the AI that I had told the students to use.
After grading the semester's papers, the question I was asking shifted from whether generative AI noticeably changed a writer's voice to how significantly it changed that voice.
Spoilers: The changes were noticeable and significant.
How I went about determining this is the subject of tomorrow's post.