Writing

AI and the Would-Be Author*

At the recent AI for Everyone Summit, one of the things I was asked to think about is the "write my book" tools** that have begun to make an appearance. The goal behind these tools (to offer one "using AI correctly and ethically" use case) is to get a rough but clean first draft done by having the AI respond to a detailed, complex prompt: a paragraph or two of instructions, with additional material potentially uploaded to give the LLM more data to work from. From there, the would-be author would proceed to rewrite what had been generated, checking for GenAI hallucinations and for places where they need to expand on the generated text to better capture their idea.

These tools, then, can serve as ghostwriters for those who question whether they have the time, inclination, dedication, or skill to produce a vehicle*** for their idea. The complex prompt and the editing of the generated text are where the thinking part of writing takes place.

“Their idea” is where I can sense a kind of value here. If you scroll long and far enough on LinkedIn, you are almost certain to come across a post reminding you that ideas don't have value because only the products of ideas have value. I'm sure that you, like me, can think back over the many times you have offered others (or been offered) good ideas that went untaken, and then watched while others with a similar idea benefitted because they acted.

It's common enough to be a trope — often framed as an "I told you so" being delivered to a bungling husband by a long-suffering sit-com wife.

And if all you are looking for is to get your idea out into the world in a publication, it's hard to argue with using these tools — especially for those in the academy whose annual assessments and tenure and promotion are tied to the number of publications appearing on their c.v.

But the transfer of information from one person to another is only one of writing's purposes. Like any medium of communication, part of writing is engaging the reader and keeping them interested in whatever it is a would-be author is writing about.

During a recent livestream of 58 Keys, I asked William Gallagher for his thoughts on the GenAI tools that are appearing and if he intended to use any of them, given what they can do to an author's voice. In brief, he replied that he could see the utility of a more advanced grammar checking tool but balked at autogenerated text — including autogenerated email.

He pointed out how we, as writers, were advertising our skills with every email (joking that the advanced grammar check may result in false advertising). And he highlighted the response of another participant, who wrote in the chat "If I'm not interested in writing the message, why should I expect someone to be interested in reading it?"

That question, I think, gives hope for would-be authors, gives important guidance for those considering generated text tools, and should give pause to those who believe they can outsource writing to AI.

Using AI to write a message to a team and find a list of possible times to meet is the kind of task an AI can and should handle — assuming everyone's AI agent has access to accurate data. Asking an AI to pitch an idea or propose a solution is riskier because it doesn't pass the "why should I expect them to read it" test.

Rather than de-valuing writing, this highlights the value of good writers — people who have learned the how and why of communicating in a way that creates an expectation of interest in what's being written and why it's important.

———————

* I will be using this phrase throughout this post, but I ask you, gentle reader, not to read it pejoratively. To call this theoretical individual "the user" misses the mark, I think, as it focuses us too much on the tool and not enough on the intent. "Would-be" is needed, however, because our theoretical individual has not completed the task of bringing their completed work to publication. Real-world users of these tools, after all, may or may not be authors of prior works.

** I haven't experimented with a specific one at the time of writing.

*** I use "vehicle" here because there are tools that generate images (still or moving), music, presentations, computer code, and likely other forms of media I don't know about. This question isn't exclusive to writing.

Writing and AI’s Uncanny Valley, Part Three

We respond to different kinds of writing differently. We do not expect a personal touch, for example, from a corporate document — no matter how much effort a PR team puts into it. We know the brochure or email we are reading (choose your type of organization here) is not tailored for us — even when a mail merge template uses our name.

Or perhaps I should say especially when it uses our name and then follows it with impersonal text.

But a direct message from someone we know, whether personally or professionally? We expect that to be from the person writing us.

And sound like it.

This is where AI-assisted writing will begin to differentiate itself from the writing we have engaged with up to now. If it is a corporate document — the kind of document we expect to have come from a committee and passed through many hands — no one will blink if they detect AI. I suspect that this is where AI-assisted writing will flourish and be truly beneficial.

To explain why, let me offer a story from a friend who used to work in a financial services company. Back in the 80s, some of the marketing team started using a new buzzword and wanted to incorporate it into their product offerings, explaining to the writing team that they wanted to tell their older clients how they intended to be opportunistic with the money those clients had given them to invest.

The writers in the room, imagining little old ladies in Florida who preferred the Merriam-Webster definitions to whatever market-speak was the flavor of the month, were horrified and asked if the marketers understood that "opportunistic" meant "exploiting opportunities with little regard to principle." The marketers, oblivious to how their audience would hear that word, had been prepared to tell their company's clients they wanted to take advantage of them.

AI is simultaneously well positioned to catch mistakes like that and assist teams in fixing them, and to manufacture such mistakes, which will need to be caught by those same teams — all based on the data (read: writing) it was trained on. I stress assist here because even the best generative AI needs to have its work looked over in the same way any human writer does. And because they are statistical models of language, they are subject to making the same kinds of mistakes the marketers in that story did.

AIs will also be beneficial as first-pass editors, giving writers a chance to see what grammar and other issues might need fixing. I don't expect to see them replacing human editors any time soon, as integrating changes into a text in someone else's voice is a challenging skill to develop.

Personal correspondence, whether it is an email or a letter, and named-author writing will be the most challenging writing to integrate with generative AI, while lower-level AI work of the kind we are currently used to — spellcheck and grammar checking — will continue to be tools that writers can freely use. I haven't downloaded Apple's iPadOS 18 beta software but will be interested to see how it performs in that regard.

The kinds of changes generative AI produces run the risk of eroding the reader's trust, whether or not they can put their finger on why they are uncomfortable with what they are reading.

Yes, there is a place for generative AI in the future of writing. I suspect the two companies best positioned to eventually take the lead are Apple (which is focusing its efforts on-device, where it can learn the voice of a specific user) and Google (which has been ingesting its users' writing for some time, even if it has been for other purposes). Microsoft's Office suite could be similarly leveraged in the enterprise space, but I don't have the sense people turn to it for personal writing.

That may tell you more about me than the general population.

These three usual suspects, and whatever startups believe they are ready to change our world, will need to learn how to focus large language models on individual users more broadly. Most text editors can already complete sentences in our voices. The next hurdle will be getting these tools to effectively compose longer texts.

If, in fact, that is what we decide we want.*

———————

* Not to go full 1984 on you, but I do wonder how the power to shift thought through language might be used and abused by corporations and nation-states through a specifically trained LLM.

Writing and AI’s Uncanny Valley, Part Two

As I mentioned yesterday, the final papers I received this semester read oddly, with intermittent changes in the voice of the writer. In years past, this shift would be a sure sign of plagiarism. The occasional odd word suggested by a tutor or thesaurus isn't usually enough to give me the sense of this kind of shift. It's when whole sentences (or significant parts of sentences) shift that I feel compelled to do a spot search for the original source.

More often than not, this shift in voice is the result of a bad paraphrase that's been inappropriately cited (e.g., it's in the bibliography but not cited in the text). More rarely, it's a copy/paste job.

With this semester’s final papers, I have begun to hear when students using AI appropriately are having sections of their papers "improved" in ways that change their voice.

This matters (for those who are wondering) because our own voices are what we bring to a piece of writing. To lose one's voice through an AI's effort is to surrender that self-expression. There may be times when a more corporate voice is appropriate, but even the impersonal tone of a STEM paper has something of its author there.

To get a sense of how much was being lost when an AI was asked to improve a piece of writing, I took five posts from this blog and asked ChatGPT 4o and Google Gemini to improve them. I uploaded the fifteen files into Lexos, a textual evaluation tool developed at Wheaton College by Dr. Michael Drout, Professor of English, and Dr. Mark LeBlanc, Professor of Computer Science.

The Lexos tool is sufficiently powerful that I am certain that I’m not yet using it to its full capacity and learning to do so is quickly becoming a summer project. But the initial results from two of the tools were enough to make me expand my initial experiment by adding four additional texts and then a fifth.

The four texts were William Shakespeare's Henry V, Christopher Marlowe's Tamburlaine and Doctor Faustus, and W.B. Yeats' The Celtic Twilight — as found on Project Gutenberg. The first three were spur of the moment choices of texts distant enough from me in time as to be neutral. I added Yeats out of a kind of curiosity to see if my reading and rereading of his work had made a noticeable impact on my writing.

Spoilers: It hadn't — at least not at first blush. But the results made me seek out a control text. For this fifth choice, I added Gil Scott Heron's "The Revolution Will Not Be Televised" because of its hyper-focus on events and word choice of the late sixties and early seventies. This radical difference served as a kind of control for my experiment.

The first Lexos tool that hinted at something was the Dendrogram visualization, which shows family tree-style relationships between texts. There are different methodologies (with impressive-sounding names) that Lexos can apply, producing variant arrangements based on different statistical models.

The Dendrogram Groupings generated by Lexos.

These showed predictable groupings. Scott Heron was the obvious outlier, as was expected of a control text. The human-composed texts by other authors clustered together, which I should have expected (although the close association between Henry V and Tamburlaine — perhaps driven by the battle scenes — was an interesting result). Likewise, the closer association between the ChatGPT rewrites and the originals came as no surprise, as Gemini had transformed the posts from paragraphs to bulleted lists.
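Lexos presents its dendrograms through a web interface, but the general technique behind such visualizations is agglomerative clustering over word-frequency vectors. Here is a minimal, dependency-free sketch of that technique; the toy texts, Euclidean distance, and single-linkage merging below are illustrative assumptions for the sketch, not Lexos's actual corpus or settings.

```python
# Sketch: family-tree clustering of texts by relative word frequency,
# the general technique behind dendrogram visualizations like Lexos's.
# The tiny "texts" below are placeholders, not the corpus described above.
from collections import Counter

texts = {
    "post_original": "the voice of the writer matters in every line",
    "post_rewrite": "the writer's voice matters in each and every line",
    "play_excerpt": "once more unto the breach dear friends once more",
}

# Build relative-frequency vectors over the shared vocabulary.
vocab = sorted({w for t in texts.values() for w in t.split()})

def vector(text):
    counts = Counter(text.split())
    total = sum(counts.values())
    return [counts[w] / total for w in vocab]

labels = list(texts)
vectors = [vector(texts[name]) for name in labels]

def dist(a, b):
    # Euclidean distance between two frequency vectors.
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

# Agglomerative (bottom-up) clustering with single linkage, written out
# by hand so the sketch needs no external libraries.
clusters = [[i] for i in range(len(vectors))]
merges = []
while len(clusters) > 1:
    i, j = min(
        ((i, j) for i in range(len(clusters)) for j in range(i + 1, len(clusters))),
        key=lambda ij: min(
            dist(vectors[a], vectors[b])
            for a in clusters[ij[0]]
            for b in clusters[ij[1]]
        ),
    )
    merges.append(([labels[a] for a in clusters[i]],
                   [labels[b] for b in clusters[j]]))
    clusters[i] = clusters[i] + clusters[j]
    del clusters[j]

for left, right in merges:
    print(left, "<-->", right)
```

Run as-is, the two blog-post variants merge first and the play joins last; the same shape, at scale, is what a dendrogram displays.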

What did come as a surprise were the results of the Similarity Query, which I stumbled across as much as sought out. Initially, I had come to Lexos looking for larger, aggregate patterns rather than looking at how a single text compared with the others.

It turned out the Similarity Queries were the results that showed the difference between human-written and machine-generated text.

Similarity Query for Blog Post Zero. The tops of the letters for “The Revolution Will Not Be Televised” can barely be seen at the bottom of the list.

Gil Scott Heron remained the outlier, as a control text should.

The ChatGPT 4o rewrite of any given post was listed as the closest text to the original, as one would expect.

What I did not expect was what came next. In order, what appeared was:

  • The non-control human texts.

  • The ChatGPT texts.

  • The Gemini texts.

The tool repeatedly marked the AI-generated texts as different: like each other rather than like a human writer.
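A similarity query of this kind generally reduces to ranking every document by its cosine similarity to the query text's bag-of-words vector. The sketch below illustrates the idea; the toy corpus, labels, and scores are stand-ins, not the Lexos results reported above.

```python
# Sketch: ranking documents by cosine similarity to a query text,
# the general idea behind a similarity query. The toy corpus below
# stands in for the blog posts, AI rewrites, and control text above.
from collections import Counter
from math import sqrt

def cosine(a: Counter, b: Counter) -> float:
    # Cosine similarity between two bag-of-words vectors.
    dot = sum(a[w] * b[w] for w in a)
    norm = sqrt(sum(v * v for v in a.values())) * sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

corpus = {
    "original_post": "the voice of the writer matters in every line of prose",
    "chatgpt_rewrite": "the writer's voice matters in every line of polished prose",
    "gemini_rewrite": "key point: voice impacts prose quality and reader trust",
    "control_text": "revolution will not be televised brothers gonna work it out",
}

# Rank everything else by similarity to the original post.
query = Counter(corpus["original_post"].split())
ranking = sorted(
    ((name, cosine(query, Counter(text.split())))
     for name, text in corpus.items() if name != "original_post"),
    key=lambda pair: pair[1],
    reverse=True,
)
for name, score in ranking:
    print(f"{name}: {score:.3f}")
```

In this toy version the close rewrite ranks first and the deliberately unrelated control text falls to the bottom, mirroring the pattern described above.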

Here, Lexos dispassionately quantifies what I experienced while reading those essays. The edits made by generative AI change the voice of the writer, supplanting that voice with the model's own.

This has serious implications for writers, editors, and those who teach them.

It also has implications for the companies that make these AI/LLM tools.

I will discuss some of that in tomorrow’s post.

Writing and AI’s Uncanny Valley, Part One

TLDR: A reader can tell when an AI rewrites your work, and that shift in the text will give your readers pause in the same way generated images do, eroding their trust in what they are reading.

A more academic version of this research will be submitted for peer review but I wanted to talk about it here as well.

First, a disclaimer: I am hopeful for our AI-supported future. I recognize that it will come with major disruptions to our way of life and the knowledge industry will be impacted in the same way that manufacturing was disrupted by the introduction of robots in the 1980s.

The changes this will create won't be simple or straightforward.

Those considerations are for a different post and time.

Right now, I want us to look at where we are.

During the 2023-24 academic year, I began integrating generative AI into my classes. As I am an English professor, most of this focused on how to responsibly use large language models and similar technologies to better student writing and assist students with their research.

It was a process that moved in fits and starts, as you might imagine. As the models changed (sometimes for the better, sometimes for the worse) and advanced (as, for example, ChatGPT moved from 3.0 to 3.5 to 4o), I had to adjust what I was doing.

My biggest adjustment, however, came with the final papers, when I unexpectedly found myself staring into an uncanny valley.

One of the things English professors are trained to do is hear the shifts in language as we read. The voice we hear — or at least listen for — is the unique voice of a writer. It's what makes it possible to read parts of “Ode to the West Wind” and “Ode on a Grecian Urn” and recognize which one is Shelley and which one is Keats.

We don't learn to do this as a parlor trick or to be ready for some kind of strange pub quiz. We learn to do this because shifts in tone and diction carry meaning in the same way that the tone in which something is spoken carries meaning for a listener.

Listening for a writer's voice may be the single most important skill an editor develops, as it allows them to suggest changes that improve an author's writing while remaining invisible to a reader. It's what separates good editors from great ones.

For those grading papers, this skill is what lets us know when students have begun to plagiarize, whether accidentally or intentionally.

But this time, I heard a different kind of change — one that I quickly learned wasn't associated with plagiarism.

It had been created by the AI that I had told the students to use.

After grading the semester's papers, the question I was asking shifted from if a generative AI noticeably changed a writer's voice to how significantly generative AI changed a writer's voice.

Spoilers: The changes were noticeable and significant.

How I went about determining this is the subject of tomorrow's post.

On the Need for a New Rhetoric: Part V — The Beginnings of a New Rhetoric

To recap, for those who are joining us now but not quite ready to review four blog posts of varying length: we are confronted with a sea change in writing, whether you look at it from the point of view of a practitioner or a scholar. Our means of production have changed enough to shift both composition and distribution. The old system, which involved separate, paper-based spaces for research, drafting, and production, has been replaced by digital spaces that allow all of these to take place within a single, evolving file. To use an old term, we all now possess an infinitely cleanable palimpsest, one which can incorporate audio-visual material alongside the written word and which can be instantly shared with others — including those with whom we might choose to collaborate.

This change has not only changed the way we write; it necessitates a change in the way we teach writing and approach the idea of composition.

Having raised the issue, I am obligated to provide some thoughts on the way forward. Before doing so, I wish to stress something: Although I have, like most English professors, taught composition and rhetoric courses, I am not a Rhet-Comp specialist. There are others who have studied this field much more closely than my dilettantish engagement has required. I suspect that it is from one of them that the better answers will come about the merging of aural, oral, and visual rhetorics. That said, this path forward cannot begin without us addressing the tools of the trade.

We must begin to teach the tools alongside the process of writing. 

One of the first steps for any apprentice is to learn their tools — how to care for them and how to use them. Masters pass on the obvious lessons as well as the tricks of the trade, with each lesson pitched to the level of the student and focused on the task at hand. Those who teach writing must begin to incorporate a similar process into writing instruction. Indeed, the process described in Part II of this series was a tool set that was explicitly taught to students at one point in the past.

As much as I would like to say that this should be done within K-12, so that university professors like me could abdicate any responsibility for it, the reality is that this kind of instruction must take place at all levels and be delivered by faculty in a variety of disciplines. This breadth is demanded by the reality of the tasks at hand. A third grade English teacher will be focused on a different skill set, writing style, and content-driven focus than a university-level Chemistry instructor will. They will be engaging in different kinds of writing tasks and expect different products. Each, therefore, must be ready to teach their students how to create the final product they expect, and there is no magic moment when a student will have learned how to embed spreadsheets and graphs within a document.

This is no small demand to place on our educational system — especially upon composition faculty. Keeping up with technology is not easy, and the vast majority of those teaching writing are already stretched dangerously thin by the demands of those attempting to maximize the number of students in each class to balance resources in challenging financial times. Nevertheless, the situation demands that this become a new part of business as usual for us.

We need to adapt to the tools that are here rather than attempt to force prior mental frameworks onto those tools.

Those of us who were raised in the prior system might have students try to adopt a “clean room” approach to research — keeping separate files for research notes and for the final document, for example — in order to replicate the notebook-typescript divide described before. There is a certain utility to this, of course, and there is nothing wrong with presenting it as an option to students: a low-cost, immediately accessible solution to the problems inherent in the Agricultural Model. And this system will work well for some — especially for adult learners who were taught the Architectural Model. To teach it to the exclusion of all other approaches, however, is to ignore the new tools that are available and to overlook the fact that students have their own workflows and ingrained habits they may not be interested in breaking. The options provided by Scrivener and Evernote, for example, may better provide for students’ needs. And while there is some cost associated with purchasing these tools and services, we should not let ourselves forget that notecards, highlighters, and the rest of the Architectural Model’s apparatus were not free either.

We must be more aware of what tools before us are for and apply that knowledge accordingly.

If all you have is a hammer, the saying goes, everything looks like a nail. The same metaphor applies to word processing. 

If you are word processing, the assumption is that you are using Word. For the vast majority of people, however, using a desktop version of Word is overkill. Most users do not need the majority of the tools within Word. This does not make Word a bad choice for an institution nor does it make Microsoft an inherently evil, imperialist company. Microsoft developed Word to solve a set of problems and address a set of use cases. 

Why this observation matters is conceptual. Many institutions focus on teaching students how to use the basic functions of Word because it is a standard. Because the accounting and finance areas want and need to use Excel, it makes sense for the majority of companies to purchase licenses of the Microsoft Office suite. As a result, most working within a corporate environment — regardless of operating system platform — will find Word on their computing device for word processing.

If all these users are likely to do, however, is change a font or typeface, apply a style, and control some basic layout (e.g., add columns and page or column breaks), there is no need for an instructor to focus on teaching Word. They can focus instead on the task and the concerns that the faculty member is addressing (e.g., the appropriate format for the title of a book).

Yes, it will be easier to standardize on a device platform for instruction — especially since, as Fraser Speirs and others have pointed out, faculty need to have a common set of expectations for what can be expected of students and often end up serving as front-line technical support.

That said, institutions should consider their needs carefully when it comes to purchasing decisions. For the vast majority of students at most educational levels, there is no difference between what they will do in Apple’s Pages, Google’s Docs, Microsoft’s Word, or any of the open source or Markdown-based options, like Ulysses. The choice should be made based on the utility provided rather than a perceived industry standard. For long-form publishing, Word may be the best answer. If students are going to do layout incorporating images, Pages will be the stronger choice.

For some, these three points will feel sufficiently obvious as to make them wonder what we have been doing all these years. The simple enough answer is that we have been doing the best we can with the limited time we have. These recommendations are, after all, additions to an already overfull schedule. They are also changes in orientation. A focus on the tools of writing, rather than on the writing process alone, will be a change. For the reasons outlined in this series, however, I would argue that these changes are critical ones.


Dr. Matthew M. DeForrest is a Professor of English and the Mott University Professor at Johnson C. Smith University. The observations and opinions he expresses here are his own. You are very welcome to follow him on Twitter and can find his academic profile at Academia.edu.

 

On the Need for a New Rhetoric: Part II — The Architectural Model of Writing

In my last post, I offered an assertion without exposition: that writing on a computer or mobile device screen has significantly changed the model that we use for creating arguments and composing the form they take, because writers have moved from an architectural model of production to an agricultural one. In this post, I will explain what I mean by an architectural model of composition.

Readers of a certain age will remember research in a time before the ubiquity of the internet. In such days of yore, the well-equipped researcher went to a library armed with pencils and pens of varying color, at least one notebook, and a stack of 3” x 5” cards gathered together into multiple stacks held together by rubber bands.[1]

For those of you too young to have ever seen such a thing, or too old to remember the system’s details[2], here is how all of these pieces worked together.

To keep things organized, you started with a research outline — one that roughly laid out what you were looking for. This was as much a plan of action as it was an organizational system. It had a hypothesis rather than a thesis — the idea or argument you were testing in your research.

Once in the library, you went to a card catalog — a series of cabinets holding small drawers that contained cards recording bibliographic information. One set of cabinets was alphabetized by author. Another set of cabinets held similar cards but they were organized by subject. Each card also recorded the Library of Congress or Dewey Decimal number that corresponded to the shelf location of the book in question.[3]

If you were looking for more current material, you consulted a Periodical Index of Literature, which was published annually and contained entries for articles published in magazines. With that information, you could request from the reference librarian a copy of the bound volume of the periodical, or the microfilm or microfiche to place into the readers.

For each source you referenced, you carefully recorded the full bibliographic information onto one note card and added it to your growing stack of bibliographic cards — which, of course, you kept in alphabetical order by author. Each card was numbered sequentially as you examined the source. 

These were the days before cell phone cameras and inexpensive photocopiers. You took handwritten notes in a notebook and/or on index cards. For each note you took, you noted the number of the source’s bibliographic note card in one corner[4] and its place in your organizational outline in another corner. To keep things as neat as possible, each card contained a single quotation or single idea. Following the quotation or note, you listed the page number. Finally, you would write a summary of the note along the top of the card to make it easier to quickly find the information when flipping through your cards.

You did this for every note and every quotation.

At the end of the day of research, you bundled up your bibliography cards in one stack and your notes in a second stack — usually in research outline order though some preferred source order.

When your research was complete, you created your thesis, which was a revision of your hypothesis based on what you had learned in your research. You then created an outline for your paper.[5] Once the outline was ready, you went back through your notecards and recorded the paper outline in a third corner of the card — usually the upper right-hand corner. (For those looking to handle revisions to structures or make certain pieces of information stand out, a separate color could be used.) You then stacked the cards in the order of your outline and proceeded to write. As you came to each point you wished to make, you hand wrote (you would not have typed a first draft) the information or quotation, noting the source where and when appropriate.

Then you revised and edited until you were ready to type the paper. If you were among the fortunate, you had a typewriter with a correction ribbon or had access to correction strips. If not, you got used to waiting for White Out to dry, lest you be forced to retype the entire page.

From this description, I hope you can see why I refer to this system as an architectural model. You gather raw material, shape the raw material into usable units of standardized sizes, then assemble them according to a kind of blueprint.

I suspect you can also see the sources of many of our current digital methods. To put it in the language of contemporary computing, you created an analog database of information that you had tagged with your own metadata by searching through sources that were tagged and sorted by generic metadata. The only difference is that the database of information was stored on 3” x 5” cards rather than within spreadsheet cells, for example.
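That analogy can be made concrete: a notecard is a record whose corners are fields, and stacking the cards in outline order is a sort. The field names and sample cards below are illustrative inventions, not a standard schema.

```python
# Sketch: the notecard apparatus expressed as the analog database it was.
# The field names are illustrative, mapping the corners of a 3" x 5" card
# to record fields; they are not a standard schema.
from dataclasses import dataclass

@dataclass
class Notecard:
    source_id: int      # number of the bibliographic card (one corner)
    research_slot: str  # place in the research outline (another corner)
    paper_slot: str     # place in the paper outline (added after the thesis)
    summary: str        # one-line heading written across the top of the card
    content: str        # the single quotation or idea
    page: int           # page number following the quotation

cards = [
    Notecard(1, "II.A", "III.B", "Caxton's press",
             "Leaves could be printed in multiple copies", 42),
    Notecard(2, "II.B", "I.A", "Toulmin's method",
             "Claims rest on evidence and warrants", 7),
]

# "Stacking the cards in the order of your outline" is a sort on the
# paper-outline key (a naive string sort, fine for this toy example);
# flipping through the cards' summaries is a scan of a single field.
in_writing_order = sorted(cards, key=lambda c: c.paper_slot)
print([c.summary for c in in_writing_order])
```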

So long as computers were fixed items — desktops located in labs or, for the well-to-do, on desks in offices or dorm rooms — this model persisted. With the coming of the portable computer, however, a change began to occur, and writers shifted from this architectural model to an agricultural one without changing many of the underlying assumptions about how research and writing worked.


 

  1. Those exceptionally well prepared carried their index cards in small boxes that contained dividers with alphabetical tabs.
  2. I hasten to note that this is what we were taught to do. Not everyone did this, of course.
  3. What happens next depended on whether you were in a library with open stacks or closed stacks. In open stack libraries, you are able to go and get the book on your own. In closed stack libraries, you fill out a request slip, noting your table location, and then wait while the librarian retrieves the work in question. The closed stack model is, of course, still the norm in libraries’ Special Collections section.
  4. Some preferred to include the name of the author and title of the work. This could, however, become cramped if it bumped into the heading of the note if you placed it in one of the upper corners. For this reason, most people suggested placing this information on one of the lower corners of the card. I seem to recall using the lower right corner when I did this and placed the note’s location within the organizational outline in the lower left corner.
  5. Some continued to use notecards in this step. Each outline section was written on a card, which allowed them to be shuffled and moved around before they were cast in stone. 


On the Need for a New Rhetoric: Part I — Setting the Stage

In 1958, Stephen Toulmin published The Uses of Argument, which espouses a method of argumentation that is one of the current foundations for teaching rhetoric and composition at many universities. In brief, it is a system for building and refuting arguments that takes into account the claim, the evidence required to support it, and the assumptions that claim is based upon. Its versatility and effectiveness make it a natural choice for teaching first-year composition students how to make the jump from high school to university-level writing.

As good as this system is, it does not address the largest change in writing in English since 1476 — the year William Caxton introduced England to his printing press and publishing house. Although it is almost impossible for us to conceive in 2017, his editions of Geoffrey Chaucer’s Canterbury Tales and Sir Thomas Malory’s Le Morte d’Arthur were fast-paced, next-generation texts that relied on an advanced and inherently democratizing technology. No longer was one limited to the speed of a scribe. Leaves could be printed in multiple copies rather than laboriously copied by hand.

Over the following five or so centuries, this change in the means of production has had an undeniable impact on the act of composition. Dialect[1], punctuation[2], and spelling[3] standardized. Writers began to write for the eye as well as — and then instead of — the ear as reading moved from a shared experience to a private one.

The world that the printed text created was the world within which Toulmin formulated his method of argumentation. While it drew on the work of Aristotle and Cicero, it no longer situated itself within an oral and aural world. A world where a reader can stop and reread an argument comes with a different set of requirements than one where an orator must employ repetition to make certain that an audience grasps the point. By way of example, the change in tone associated with Mark Antony’s pronouncement that “Brutus is an honorable man” is more powerful when heard from the stage (or screen) than seen on the page (or screen) of Act 3, Scene 2 of Julius Caesar.[4]

There have been several innovations and improvements along the way, but the addition of pictures for readers and typewriters for writers did not inherently change things. Yes, the material produced by a writer would look like its final form sooner in the process, but the process itself did not change. The limits of experimentation were bound by the limits of the codex[5] format.

With the development of the computer and modern word-processing, however, we are experiencing a change every bit as significant as the one wrought by Caxton — one that will once more change the way we compose. Indeed, it already has changed the way we compose even though we do not all recognize it. What is currently lagging is a new methodology for composition. 

Before I tell you what to expect in the next post, let me tell you what not to expect. I will not be discussing the shortening of attention spans or the evils that screen time has wrought upon our eyes and minds. In truth, many of those arguments have been made before. Novels, in particular, were once seen as social ills that harbored the potential to weaken readers’ understanding. Indeed, the tradition of railing against new methods dates back to the invention of writing itself. When Thoth came to the Egyptian gods to tell them of his revolutionary idea — an idea that would free humans to transmit their thoughts from one to another across space and time — the other gods objected, noting that writing would come to destroy human memory, much in the way people now mourn their inability to remember phone numbers because our smartphones remember them for us.

What I will be positing is a materialist argument: that writing on a computer or mobile device screen has significantly changed the model we use for creating arguments and composing the form they take, because writers have moved from an architectural model of production to an agricultural model of production.


  1. For those non-English majors reading, the Middle English spoken by Chaucer, a Londoner, was noticeably different from the Middle English of the English Midlands spoken by the Pearl Poet, the unnamed author of Pearl and Gawain and the Green Knight.
  2. The manuscript of Beowulf, for example, is one long string of unpunctuated text. It was assumed that a highly educated reader would know where one word ended and the next began.
  3. William Shakespeare, famously, signed his name with different spellings at different times. This was not a sign of a poor education. It is a sign of a period before any dictionary recorded standard English spelling.
  4. It is worth noting that this example carries within it another instance of the change being discussed. William Shakespeare did not write for a reading audience. He wrote to be heard, and changes in pronunciation have since obscured jokes and doubled meanings.
  5. Given what will come later, it is worth beginning to use the technical term for the physical form that books take: a codex, leaves of paper bound within two protective covers.