Desktop Computing

"What's a computer?"

Apple caused a lot of consternation amongst the tech world's equivalent of the chattering classes (On a good day, I include myself in that group.) with the question asked at the end of this ad. For many, the question was one that challenged the preconceived notion of the form factor: Could something without a fixed and attached keyboard and a lot of I/O options really be a computer?

Some probably typed that objection on their lovingly crafted mechanical keyboards while their laptops were docked in clamshell mode, not giving it a second thought.

For others, the question was one of specs: Could anything with so little RAM or storage or CPU/GPU power really be a computer?

They probably didn't stop to consider that their cellphone's specs are superior to what NASA used to travel to the moon and that Voyager has less computational power than their car's key fob.

For still others, it’s about the software capabilities. Can anything that is incapable of running intensive desktop computing software really be considered a computer?

They probably didn't stop to ask if the real computer they used three to five years before could comfortably run the latest version of the program they are using as a benchmark.

I'm not making these observations to poke fun at the nameless, faceless strawmen I have set up to point at derisively. Rather, I think their objections are critical for understanding what I am beginning to explore in response to that ad's question.

“What’s a computer?”

The ad’s answer, according to the graphics that appear on the screen, includes an iPad Pro running iOS 11 — a device and operating system that those of us living the iPadOS lifestyle might now consider limited in the same ways my strawmen attribute to our iPads.

But the point of the ad, along with ads like “Homework” (an ad that haunts me because it highlights what I fear are my own pedagogical shortcomings), is that the computer must serve a purpose if it’s going to be real.

I suspect many consumers generally “get it” — whether they are buying an iPad, a Mac, or a Windows machine. It's about what the device can (or can't) do for them and what they're comfortable with.

The iPad Mini I am writing this on right now is, for my immediate need, more powerful than the most impressively tricked-out Mac Pro because the Mac Pro doesn't support Scribble or the Apple Pencil. And while there's no question that the new M4 iPad Pro outperforms my Mini, the Mini's form factor still delights me more than the Pro's does.

So what's a computer? It's a tool — a tool that can only be measured by its utility to the user and not an abstract set of specs and form factors.

My preference for the Mini comes with clear trade-offs. The smaller size that, for reasons I cannot explain, I prefer can feel cramped at times and is less forgiving during online meetings. And I will need to post this via my iPad Pro because Squarespace doesn’t trust the Mini with hyperlinks. But I get more worthwhile (Your opinion may differ.) writing done on it with my Apple Pencil than I do on my iPad Pro with its excellent Magic Keyboard. And I have noticed I actively dislike the thought of using a traditional computer of any manufacture.

The question the girl asks in the ad is one every user looking at a new device should ask. What, for them, is a computer? And how open are they to change?

On the Need for a New Rhetoric: Part II — The Architectural Model of Writing

In my last post, I offered an assertion without exposition: That writing on a computer/mobile device screen has significantly changed the model that we use for creating arguments and composing the form they take because writers have moved from an architectural model of production to an agricultural form of production. In this post, I will explain what I mean by an architectural model of composition. 

Readers of a certain age will remember research in a time before the ubiquity of the internet. In such days of yore, the well-equipped researcher went to a library armed with pencils and pens of varying colors, at least one notebook, and a supply of 3” x 5” cards gathered into multiple stacks held together by rubber bands.[1]

For those of you too young to have ever seen such a thing, or too old to remember the system’s details[2], here is how all of these pieces worked together.

To keep things organized, you started with a research outline — one that roughly laid out what you were looking for. This was as much a plan of action as it was an organizational system. It had a hypothesis rather than a thesis — the idea or argument you were testing in your research.

Once in the library, you went to a card catalog — a series of cabinets holding small drawers that contained cards recording bibliographic information. One set of cabinets was alphabetized by author. Another set of cabinets held similar cards but they were organized by subject. Each card also recorded the Library of Congress or Dewey Decimal number that corresponded to the shelf location of the book in question.[3]

If you were looking for more current material, you consulted a Periodical Index of Literature, which was published annually and contained entries for articles published in magazines. With that information, you could ask the reference librarian for the bound volume of the periodical or for the microfilm or microfiche to load into a reader.

For each source you referenced, you carefully recorded the full bibliographic information onto one note card and added it to your growing stack of bibliographic cards — which, of course, you kept in alphabetical order by author. Each card was numbered sequentially as you examined the source. 

These were the days before cell phone cameras and inexpensive photocopiers. You took handwritten notes in a notebook and/or on index cards. For each note you took, you noted the number of the source’s bibliographic note card in one corner[4] and its place in your organizational outline in another corner. To keep things as neat as possible, each card contained a single quotation or single idea. Following the quotation or note, you listed the page number. Finally, you would write a summary of the note along the top of the card to make it easier to quickly find the information when flipping through your cards.

You did this for every note and every quotation.

At the end of the day of research, you bundled up your bibliography cards in one stack and your notes in a second stack — usually in research outline order though some preferred source order.

When your research was complete, you created your thesis, which was a revision of your hypothesis based on what you had learned in your research. You then created an outline for your paper.[5] Once the outline was ready, you went back through your notecards and recorded each card's place in the paper outline in a third corner of the card — usually the upper right hand corner. (A separate color of ink could be used to mark revisions to the structure or to make certain pieces of information stand out.) You then stacked the cards in the order of your outline and proceeded to write. As you came to each point you wished to make, you hand wrote (You would not have typed a first draft.) the information or quotation, noting the source where and when appropriate.

Then you revised and edited until you were ready to type the paper. If you were among the fortunate, you had a typewriter with a correction ribbon or had access to correction strips. If not, you got used to waiting for White Out to dry, lest you be forced to retype the entire page.

From this description, I hope you can see why I refer to this system as an architectural model. You gather raw material, shape the raw material into usable units of standardized sizes, then assemble them according to a kind of blueprint.

I suspect you can also see the sources of many of our current digital methods. To put it in the language of contemporary computing, you created an analog database of information that you had tagged with your own metadata by searching through sources that were tagged and sorted by generic metadata. The only real difference is that the database is stored on 3” x 5” cards rather than within, for example, spreadsheet cells.
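To make that analogy concrete, here is a minimal sketch of one of those note cards recast as a modern record type, with a query over its metadata standing in for flipping through the stack. The field names (summary, source_number, outline_location, and so on) are my own, chosen purely for illustration, and the sample card's contents are likewise invented; the card system itself never dictated a fixed vocabulary, only which corner held which piece of information.

```python
from dataclasses import dataclass

@dataclass
class NoteCard:
    """One 3" x 5" card: a single quotation or idea plus its metadata."""
    summary: str              # heading written along the top of the card
    body: str                 # the quotation or paraphrased idea itself
    page: int                 # page number in the source
    source_number: int        # matches the numbered bibliography card
    outline_location: str     # place in the research outline, e.g. "I.A"
    paper_outline: str = ""   # filled in later, once the paper outline exists

# A (tiny) day's worth of research, recorded card by card.
cards = [
    NoteCard(
        summary="Caxton prints Chaucer",
        body="Caxton's 1476 press ended the scribe's monopoly on copying.",
        page=42,
        source_number=3,
        outline_location="I.A",
    ),
]

# Flipping through the stack for everything filed under section I of the
# research outline is simply a filter on the metadata.
section_one = [card for card in cards if card.outline_location.startswith("I.")]
```

The translation requires remarkably little work, which is the point: every corner of the card is already a field waiting for a name.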

So long as computers were fixed items (desktops located in labs or, for the well-to-do, on desks in offices or dorm rooms), this model persisted. With the coming of the portable computer, however, a change began to occur, and writers shifted from this architectural model to an agricultural one without changing many of the underlying assumptions about how research and writing worked.


 

  1. Those exceptionally well prepared carried their index cards in small boxes that contained dividers with alphabetical tabs.
  2. I hasten to note that this is what we were taught to do. Not everyone did this, of course.
  3. What happened next depended on whether you were in a library with open stacks or closed stacks. In open stack libraries, you were able to go and get the book on your own. In closed stack libraries, you filled out a request slip, noting your table location, and then waited while the librarian retrieved the work in question. The closed stack model is, of course, still the norm in libraries’ Special Collections sections.
  4. Some preferred to include the name of the author and title of the work. This could, however, become cramped if it bumped into the note's heading when placed in one of the upper corners. For this reason, most people suggested placing this information in one of the lower corners of the card. I seem to recall using the lower right corner when I did this and placing the note’s location within the organizational outline in the lower left corner.
  5. Some continued to use notecards in this step. Each outline section was written on a card, which allowed them to be shuffled and moved around before they were cast in stone. 

Dr. Matthew M. DeForrest is a Professor of English and the Mott University Professor at Johnson C. Smith University. The observations and opinions he expresses here are his own. You are very welcome to follow him on Twitter and can find his academic profile at Academia.edu.

On the Need for a New Rhetoric: Part I — Setting the Stage

In 1958, Stephen Toulmin published The Uses of Argument, which espouses a method of argumentation that is one of the current foundations for teaching rhetoric and composition at many universities. In brief, it is a system for building and refuting arguments that takes into account the claim, the evidence required to support it, and the assumptions that claim is based upon. Its versatility and effectiveness make it a natural choice for teaching first-year composition students how to make the jump from high school to university-level writing.

As good as this system is, it does not address the largest change in writing in English since 1476 — the year William Caxton introduced England to his printing press and publishing house. Although it is almost impossible for us to conceive in 2017, his editions of Geoffrey Chaucer’s Canterbury Tales and Sir Thomas Malory’s Le Morte d’Arthur were fast-paced, next-generation texts that relied on an advanced and inherently democratizing technology. No more was one limited to the speed of a scribe. Leaves could be printed off in multiple copies rather than laboriously copied by hand.

Over the following five or so centuries, this change in the means of production has had an undeniable impact on the act of composition. Dialect[1], punctuation[2], and spelling[3] standardized. Writers began to write for the eye as well as — and then instead of — the ear as reading moved from a shared experience to a private one.

The world that the printed text created was the world within which Toulmin formulated his method of argumentation. While it drew on the work of Aristotle and Cicero, it no longer situated itself within an oral and aural world. A world where a reader can stop and reread an argument comes with a different set of requirements than one where an orator must employ repetition to make certain that an audience grasps the point. By way of example, the change in tone associated with Mark Antony’s pronouncement that “Brutus is an honorable man” is more powerful when heard from the stage (or screen) than seen on the page (or screen) of Act 3, Scene 2 of Julius Caesar.[4]

There have been several innovations and improvements along the way. But the addition of pictures for readers and typewriters for writers did not inherently change things. Yes, the material produced by a writer would look like its final form sooner in the process, but the process itself did not change. The limits of experimentation were bound by the limits of the codex[5] format.

With the development of the computer and modern word-processing, however, we are experiencing a change every bit as significant as the one wrought by Caxton — one that will once more change the way we compose. Indeed, it already has changed the way we compose even though we do not all recognize it. What is currently lagging is a new methodology for composition. 

Before I tell you what to expect in the next post, let me tell you what not to expect. I will not be discussing the shortening of attention spans or the evils that screen time has wrought upon our eyes and minds. In truth, many of those arguments have been made before. Novels, in particular, were once seen as social ills that harbored the potential for weakening understanding. In fact, the tradition of railing against new methods dates back to the invention of writing. When Thoth came to the Egyptian gods to tell them of his revolutionary idea (an idea that would free humans to transmit their thoughts to one another across space and time), the other gods objected, noting that writing would come to destroy human memory, much in the way people now mourn their inability to remember phone numbers now that their smartphones remember the numbers for them.

What I will be positing is a materialist argument: That writing on a computer/mobile device screen has significantly changed the model that we use for creating arguments and composing the form they take because writers have moved from an architectural model of production to an agricultural form of production.


  1. For those non-English majors reading, the Middle English spoken by Chaucer, a Londoner, was noticeably different from the Middle English of the English Midlands spoken by the Pearl Poet, the unnamed author of Pearl and Gawain and the Green Knight.
  2. The manuscript of Beowulf, for example, is one long string of unpunctuated text. It was assumed a highly educated reader would know where one word ended and the next began.
  3. William Shakespeare, famously, signed his name with different spellings at different times. This was not a sign of a poor education. It is a sign of a period before any dictionary recorded standard English spelling.
  4. It is worth noting that this example carries within it another example of the change being discussed. William Shakespeare did not write for a reading audience. He wrote to be heard, and changes in pronunciation have since obscured jokes and doubled meanings.
  5. Given what will come later, it is worth beginning to use the technical term for the physical form that books take: the codex, leaves of paper bound within two protective covers.

Dr. Matthew M. DeForrest is a Professor of English and the Mott University Professor at Johnson C. Smith University. The observations and opinions he expresses here are his own. You are very welcome to follow him on Twitter and can find his academic profile at Academia.edu.

An Observation about My Glasses

So, a couple months ago, I had to replace my glasses (The prior pair's arm had broken in a manner that could not be repaired.). Having reached a certain age, I needed progressive lenses (I had them in the last pair, which means I reached that certain age a while back.).

Now, as this (intermittent) blog indicates, I went all in on iOS a long time ago. That said, I have a Mac Mini at home to serve as a base of operations and a Mid-2011 iMac on my desk here at work. One of the things that has troubled me about this pair of glasses is that the progression is off. In my last pair of glasses, I would look at the iMac's screen and all would be clear. Now, it is fuzzy and I have to tip my head back.

Just moments ago, I finally figured out why. The progression is not set for people who work on a desktop. It is set up for people who work on a laptop or a mobile device, like an iPad or an iPhone.

A number of the podcasts I listen to are populated with people I respect bemoaning the death of the Mac in general and desktops in particular. Apparently, it is not just computer companies that have begun to adjust to a post-PC-of-their-memory/imagination world.

There is more potential angst to this than that supplied by aging tech pundits. This kind of change should be noted by the academy, which is a much more conservative and stodgy group than usually imagined. Perhaps we should begin to consider changing some of our ways if and when our glasses are letting us know that the world has moved on.


Dr. Matthew M. DeForrest is a Professor of English and the Mott University Professor at Johnson C. Smith University. The observations and opinions he expresses here are his own. You are very welcome to follow him on Twitter and can find his academic profile at Academia.edu.

What is So Different about the Three Current Categories?

There are currently three broad categories of devices that users can choose from, each of which can be subdivided:

Mobile: This category includes both handhelds (mobile phones and devices like the iPod Touch) and tablets. While these each solve a specific use case within the category, they all have mobility and a touch-first interface as their two primary features. They also provide their users with a task-based focus through the use of apps instead of applications.

Strength: Portability of the device and the focus of the apps.

Needed Leverage: An always (or almost always) available internet connection for extensive file storage.

Weakness: Not as powerful in terms of computing power and application flexibility.

Laptops: This category includes all devices that are portable but require a keyboard. Yes, they may have touchscreen capability, but the primary input is designed to be through a keyboard. They share with their desktop-bound counterparts an approach to tasks that uses applications, rather than apps. This represents not only a difference in focus but also in the number of tasks that a user can pretend they are doing at the same time. While the latest iPads can run two apps at once, laptops and desktops can have multiple windows open at the same time.

Strength: Balances the power of a desktop with the portability of a mobile device.

Needed Leverage: Consistent access to power to recharge its battery.

Weakness: Heavier than a mobile device.

Desktops: These devices are designed to stay in one place and provide significant computing power. Quite frankly, they provide far more computing power than most users need. More important for most users, these devices provide a significant amount of on-device storage for large libraries of files.

Strength: Raw computing power and the capacity for a lot of local file storage.

Needed Leverage: An internet connection for off-site remote access when you are away from your desk, and a second device to access things while out of the office.

Weakness: Immobile and, for some, too many windows.

 

Seen in this light, the laptop, which was once the way to get work done on the go, is now a compromise device, a role usually given to mobile devices (It should be noted that this role is usually assigned by people who use laptops.).

It is also worth noting that there is no longer a primary, baseline device for people to use. All three categories are viable computing choices for users. The question, then, is what the individual user needs at the time (given the constraints of their purchasing power). This is something that strikes home every time I come to Asia. You do not see people using laptops as a primary computing device. You see them using a larger mobile phone. It is a trend, incidentally, that educators should keep in mind: students everywhere appear to be shifting toward this model, and it represents a shift as significant as the move from punch cards to keyboards or from text-based to GUI-based operating systems.

Use case is a critical consideration in education, which has three primary user groups: students, faculty, and administration/staff. Of the three, the individuals most likely to be tied to a desktop are the administration and staff, who work at their desks (when they are not in a meeting). Professors are semi-mobile users, as they move from their offices to a classroom. Students, however, are clearly mobile users. They move from place to place throughout the day as they go from class to class.

It is worth keeping this in mind when making decisions about the technology that students will be issued or be asked to purchase on their own. Mobility and portability, although students may not always recognize it, are important factors in their interaction with technology.


Dr. Matthew M. DeForrest is a Professor of English and the Mott University Professor at Johnson C. Smith University. The observations and opinions he expresses here are his own. You are very welcome to follow him on Twitter and can find his academic profile at Academia.edu.