And the Pursuit of Happiness

There are currently two significant, measurable side effects of my recent trip to Venice, Italy[1]: a pursuit of preparing caffè/espresso here that tastes like what I made in our Airbnb[2] and an exploration of why it was easier to be happy in Venice than it is here in North Carolina. And while Venice is a beautiful city that promotes feelings of well-being — its nickname La Serenissima was coined in an era before PR firms generated such titles ad nauseam — happiness is not a place-based phenomenon. Happy people can be happy anywhere, and happiness is partially driven by choice.[3]

I was partially predisposed towards this reflection after reading Federico Viticci’s “Second Life: Rethinking Myself Through Exercise, Mindfulness, and Gratitude” in MacStories[4], which came out while I was in Venice. I was struck by the parallels to the malaise I recognized in some parts of my life and by how his thoughtful approach to technology — which mixed stepping back from some parts of it[5] and embracing others, like activity monitoring[6] — was making a difference for him.

This morning, one of the articles recommended by Apple News was Adam Sternbergh’s “Here is Your Cheat Sheet to Happiness” in New York Magazine/The Cut, which detailed Yale Professor Laurie Santos’ class on happiness and well-being. One of the big takeaways from the article is that we choose our state of happiness and that we can make better choices.

I am no paragon of virtue in this arena. The work of both Santos and Viticci points out many things that I am clearly and demonstrably doing wrong.[7] I won’t bore you with those details here. Suffice it to say that, like many Americans, I have become addicted to the perceived prestige that being busy confers and that I need to reassess how I approach this part of my life.

What I do want to consider here, however, is that these articles have implications for how we value one another in the workplace. As the current Chair of the Faculty Handbook Committee at Johnson C. Smith University, one of my jobs is to shepherd faculty evaluation proposals through a part of the adoption process. It strikes me that one of the engines of our need to appear busy is that evaluation policies put a premium on being busy by requiring us to document our work. There is some truth to the statement that you can only assess what you can measure, but that statement taken alone ignores the costs of such a worldview. What you assess is driven by a value judgement. We assess things we consider important so we can improve them.

One of the truisms of university faculty life, however, is that morale is in need of improvement. Dr. Jerry McGee, then President of Wingate University, once joked in a Faculty Meeting that faculty morale was always at one of two levels: Bad and the Worst it has ever been. Articles in The Chronicle of Higher Education and Inside Higher Ed on this topic appear regularly, alternating between hand-wringing over the problem and offering examples of how one campus or another has tried to tackle the issue.

What I don’t recall in any of those articles is the explicit statement that our systems for evaluating faculty might be the things that are manufacturing poor morale.

Of course, this issue is not unique to higher education. All recent discussions of the national problem of K-12 teachers leaving their classrooms in droves indicate that such systems, imposed by state legislatures, are the leading cause of this wave of departures.[8] There are indications that it is also true in other fields, although I do not follow those closely.

I suspect that one of my self-imposed jobs over the coming year or two will be to look at how our evaluation system is actively manufacturing unhappiness and to figure out how to change that. It is true that we have been working (painfully) to revise our system over the last few years, but that attempt has focused on maximizing individual faculty productivity and potential by allowing them to specialize in areas of interest and talent. My areas of greatest strength, for example, are not in the classroom. I am not a bad classroom teacher, but my greatest strengths lie in other parts of what it means to be a professor. We have been working on systems that would allow me to focus my time and evaluation more on those areas than on others.

Our work has treated the happiness/morale question as an effect of our systems. That puts it in a secondary role, which means it will not be the primary thing assessed. That, in turn, means faculty morale will always be a secondary issue — one less likely to be addressed than how easy it is for a given member of the faculty to produce a peer-reviewed article or serve on a committee.

But with better morale, faculty teach better, write better articles, and are more likely to be productive in meetings and elsewhere. That suggests the morale question should be in the causal role rather than being considered as an effect of other causes.

This requires us to rethink the way we assess and value the time being spent by faculty. I would love to tell you that I have it figured out, but these are early days in my thinking about this. I do know that simplistic responses like “Tech is bad, wastes time, and produces poor results” will have to be eschewed for more nuanced responses, like the one detailed by Viticci in his article, because technology can save us time — the most valuable of commodities — and that the nuance must be applied across the board. This means that the numbers generated in our assessments must become secondary to the non-reductive analysis of those numbers.


[1] Don’t hate me because my wife worked hard to design and provide for this trip. Yoda’s advice that Hate leads to the Dark Side applies strongly to this post. If you give in to hate, you are reinforcing your own unhappiness.

[2] So far, I haven’t managed it. In Italy, I was using a gas stovetop, which easily produces the correct level of heat for a stovetop Moka by matching the size of the flame to the bottom of the Moka. I suspect the electric stovetop I have here produces too much heat, leading to a different flavor — an almost burnt taste. Experimentation continues.

[3] Michael Crichton writes about this in his autobiographical work Travels.

[4] This may be behind a paywall. I’m a MacStories member. Apologies to those who cannot access it.

[5] Controlling social media, rather than letting social media control you, is a big theme here. It reminded me that I need to invest some time with Twitter’s lists feature to set some filters to help sort through the kind of thing I am looking for at times.

[6] I have been doing some of this and have noticed over the past year that I am happier on the days I consistently complete my Move rings than on the days I do not.

[7] The good news is that both point to ways I can fix that and that those decisions are completely under my control.

[8] Despite the requests for respect, legislators — like many trustees and administrators — interpret these concerns and complaints exclusively in terms of pay. Yes, pay can be improved but reading the statements of teachers clearly indicates that the primary issue isn’t the pay. It is the burden of an evaluation system that does not value them.


Dr. Matthew M. DeForrest is a Professor of English and the Mott University Professor at Johnson C. Smith University. The observations and opinions he expresses here are his own. You are very welcome to follow him on Twitter and can find his academic profile at Academia.edu.

A Quick Note of Praise for Apple Maps

For quite some time, it has been fashionable to poke fun at Apple Maps.[1] In fairness, several have pointed out that Google Maps has its limitations as well, so the issue of trying to provide locations and directions is not limited to Cupertino’s offering. Nevertheless, when someone asks why anyone would use “that mapping application,” it is usually Apple Maps they are asking about.

For the past few days, I have been in Venice, Italy.[2] Venice is a city notoriously difficult to navigate on foot or by gondola — even for those who have a good sense of direction. I am happy to report that Apple Maps does a very good job of providing walking directions here. The path it laid out for us to walk from our Airbnb to La Fenice[3] was quick and easy.

This is not to say that the potential for getting lost was gone. It is easy to get turned around in a Campo[4] with five or more Calle[5] leading out of it. It’s at moments like this that the arrow pointing in the direction you are facing[6] becomes really important.

This is an older feature but one that mattered a lot to me when I was trying to find the way to the traghetto[7] in a part of the city I wasn’t as familiar with so we could get my hungry daughter to lunch.[8]

I point this out because, sometimes, the killer feature of an app is one that isn’t focused on by commentators or in one-on-one demos but provides utility when you absolutely need it. Could the databases for locations used by our apps be better? Of course. Could they do a better job of directing us to the correct side of a building? Yes. But those are things I can work around. Not being sure of which direction I am facing on a cloudy day in Venice, where there aren’t a lot of trees with directional moss, isn’t something I can work around.

This is the kind of thing that Rene Ritchie refers to when he talks about Apple’s ability to produce a minimum delightful product. In this case, this delight was all about the fundamentals. And, just like in sports, getting the fundamentals right can take you far.


[1] My favorite moment of levity at the expense of Apple Maps was a joke told to me by my goddaughter soon after Apple Maps was initially released: “Apple Maps walked into a bar. Or a church. Or a store. Or a tattoo parlor.”

[2] Don’t hate me because I am lucky enough to have a wife who planned this trip.

[3] We took in a performance of Verdi’s La Traviata. It was every bit as good and as powerful as you would expect and was a wonderful reminder of the power and virtue of art.

[4] The large and small squares of the city.

[5] The lanes/walkways/roads that wind through the city.

[6] Google Maps has a blue fan that mimics the look of a flashlight, I believe, which serves the same purpose.

[7] The traghetto is the water bus system of Venice.

[8] If you are looking for something a touch more casual, check out Taverna San Trovaso.


Dr. Matthew M. DeForrest is a Professor of English and the Mott University Professor at Johnson C. Smith University. The observations and opinions he expresses here are his own. You are very welcome to follow him on Twitter and can find his academic profile at Academia.edu.

The Problem with “Pro”

Apple and its associated hardware and software developers have a problem with Pro machines, whether they are the forthcoming Mac Pro or the current iMac Pro, MacBook Pro, and/or iPad Pro, or any of the apps advertising a Pro tier. This problem, incidentally, is not unique to Apple and its ecosystem. It is a problem that bedevils the entire tech industry.

Pro means different things to different people.

I recognize that, in the aftermath of the MacBook Pro v. iPad Pro controversies, this statement is almost cliché. But one of the issues that I am beginning to recognize is that even those who look at these problems most broadly remain trapped by the choice of the abbreviated adjective “Pro”.

Does Pro stand for professional or does Pro stand for productivity?

While grammatical terminology may elicit from you, gentle readers, eye-rolls and a desire to click away from this article as soon as possible, there is more here than an English Professor’s occupational bias to focus on words. Most of the commentary on Pro machines has focused on the meaning of the adjective: “Who is a Pro?” I haven’t heard as much about the ambiguity of the abbreviation — although it immediately enters into the conversation. The absence of this acknowledgement, more often than not, results in people beginning to talk past one another.

It is also worth remembering that the equation Pro = Professional will always result in compromises because the machine is not the professional. The user is the professional and various users have different needs. Claiming that the MacBook Pro is a failed machine because it does not have a lot of ports, for example, requires the assumption that a professional needs a lot of ports to plug in a lot of peripherals. Those of us who don’t need to do that are going to respond negatively to the claim because accepting it requires us to deny that we are professionals. And while I don’t need a lot of peripherals[1], I deny anyone the right to claim I am not a professional.

Likewise, Pro = Productive highlights a series of compromises because what it takes for me to be productive is much different from what it takes for a computer scientist to be productive. I can be as productive on an iPad Pro as I can on a MacBook Pro. Indeed, the ability to scan documents and take quick pictures that I can incorporate into note taking apps like GoodNotes while I am doing research in an archive allows me to be more productive with an iPad Pro. While these compromises are similar to those under the Pro = Professional formulation, there are subtle differences, in terms of technological and production requirements.[2]

The most important distinction, however, is the implied hierarchy. There is an ego issue that has attached itself to the adjective Pro. Several years ago, for example, a colleague claimed that only the needs of computer scientists should be considered when selecting devices to deploy across our campus because the rest of us could get by without them. I hasten to note that, in his extended commentary, there was a good bit of forward thinking about the way we interact with computing devices — especially his observation that we could all receive and respond to email and similar communications on our phones (an observation made before the power of the smartphone was clear to all). But it is illustrative of the kind of hubris that can be attached to self-identifying as a Pro user — the assumption that our use case is more complex and power-intensive than those of users whose workflows we imagine but don’t actually know. While I recognize, for example, that computer programming requires specific, high-end hardware, it is equally true that certain desktop publishing applications require similar performance levels[3] for their hardware.

It’s for this reason that I prefer to imagine that we are talking about machines designed for certain kinds of productivity rather than for professionals. Most of us only have the vaguest of ideas about what the professionals in our own work spaces require to be productive in their jobs. Shifting the discussion away from the inherently dismissive designation (I’m a pro user of tech but you are not.) to one that might let us figure out good ways forward for everyone (She needs this heavy workhorse device to be productive at her desk while he needs this lighter, mobile device since he is on the road.) would let people embrace their roles a little better without dismissing others.


[1] What I do need are a variety of the much-derided dongles. A single port — in my case, the Lightning port of my iPad Pro — is all I need for daily use. I plug it in for power at home and, when I enter a classroom or lecture hall, I plug it in to either a VGA or HDMI cable to share my screen, depending on what kind of projector or television monitor is in the room. What I really want to see is something that straddles the line between a cable and a dongle — a retractable cable that has Lightning on one side and an adapter on the other with a reel that can lock the length once I have it plugged in. If someone is going to be very clever, I would ask them to figure out a way for the non-Lightning end to serve male and female connections alike.

[2] This is the reason that, even when I get exasperated as Andy Ihnatko goes off on the current Apple keyboard design during a podcast, I still respect his position. While I am perfectly happy with the on-screen keyboard or the Smart Keyboard of my iPad Pro, he wants/needs a different kind of keyboard to be productive. It isn’t because I am any less of a professional writer (My job requires me to research and write — although it is a different kind of writing than he engages in.). It is a question of how productive we feel we can be.

And, in cases like this, how we feel about the interface matters. It is why I still carry a fountain pen along with my Apple Pencil. It feels better to write with it and it produces a more pleasing line. The comfort and pleasure keep me working. I have no doubt Ihnatko could bang out as many words on the current MacBook Pro with some practice. But the frustration in that learning curve would hamper his productivity as much as re-learning how to touch type on a slightly different keyboard.

[3] I wish to stress levels here. Both of these applications require high end machines but the specifics of those machines’ configurations are likely to be different. For those scratching their heads over this distinction, I would refer them to the distinction between optimizing for single core v. multi-core but I am not sure I understand that well enough to suggest a good place to read about it. Suffice it to say that different power-intensive applications lend themselves to different computing solutions.

Dr. Matthew M. DeForrest is a Professor of English and the Mott University Professor at Johnson C. Smith University. The observations and opinions he expresses here are his own. You are very welcome to follow him on Twitter and can find his academic profile at Academia.edu.

The Problem in the Paradigm of Social and Chat Clients

During the first of the JCSU New Faculty Development Summer Institutes, funded by a generous grant from the Andrew W. Mellon Foundation, one of the participants made a request in a conversation about what we wanted to see in the future. They wanted the digital equivalent of a coffee house — a place where they could meet with colleagues, have informal conversations with students, and remain connected. At first blush, there appear to be a few options available for such a thing: Slack and various Google products can provide institutional conversation spaces. Facebook and Twitter, as social networks, provide social spaces for interaction. Messages and SMS allow for direct, instant communication.

None of these, however, fits the bill.

I would argue that the primary reason all of these services fail in this sphere is not their feature sets, which are all robust, or their ubiquity. Instead, I would focus on two things that are preventing them from achieving this desirable goal.

The first is a legacy assumption. With the exception of Messages, SMS, and similar direct messaging services, these apps have an interface that assumes you are sitting at a desk. Yes, they have been adapted to smaller screens, but they are not mobile-first designs. The paradigm is one of working on a task and receiving a stream of contact in a separate window. This framework is different from the metaphorical space around the water cooler or coffee machine in the break room. As such, it does not fulfill the need for the coffee shop space as described above.

Lurking behind this paradigm, however, is a more powerful one that will prevent these apps from ever serving the function of a coffee house — a paradigm most clearly seen in the Mute feature. Now, I am not saying that the Mute feature is a bad idea. Sometimes, you need to close your office door to signal to your colleagues that you need to get something done and that now is a bad time for them to stick their head in the door and ask a question or chat about last night’s game. In addition, social networks need the ability to mute the more toxic voices of the internet. But the fact that those toxic voices are more prevalent online than they are offline is a signal that there is something critically different about the virtual spaces these apps create.

Muting signals that these apps are built based on a consumption paradigm — not a conversational one. It’s the kind of thing you do to a television program rather than an interlocutor.  

All of these apps are imagined in terms of consumption — not conversation. So long as that remains the case, they will not break through into a space of true conversation, rather than two or more people consuming communication from each other (much as you are consuming this blog post but are able to respond to it). They will not break through the hard ceiling of their utility and operate in the same conversational manner that messaging apps do.

In pointing this out, I want to stress that this is something users should be as aware of as developers. If we are using these virtual spaces in a manner they are not designed for, we should not be surprised at their limitations. Developers, meanwhile, should note that their apps and services may not be offering what their users are truly looking for.


Dr. Matthew M. DeForrest is a Professor of English and the Mott University Professor at Johnson C. Smith University. The observations and opinions he expresses here are his own. You are very welcome to follow him on Twitter and can find his academic profile at Academia.edu.

On the Reports of Apple’s Doom in the Educational Arena

There are any number of think-pieces on the problems facing Apple in Education. One of my favorites, as I mentioned in an earlier post, was written by Bradley Chambers. And as I said in that post, I agree with everything he has said about what Apple can do to make life easier on the overworked cadre of educational IT support staff out there.

That said, I have finally put my finger on what has been bothering me about the growing groupthink that has set in in the aftermath of Apple’s education event. First, it is worth remembering that we have been here before. There is a parallel between what we are hearing now and what was said back in the early 2010s about the iPhone in the enterprise:

Back in those days, IT kept tight control over the enterprise, issuing equipment like BlackBerries and ThinkPads (and you could have any color you wanted — as long as it was black). Jobs, who passed away in 2011, didn’t live long enough to see the “Bring Your Own Device” (BYOD) and ‘Consumerization of IT,’ two trends that were just hovering on the corporate horizon at the time of his death.[1]

While there are important differences between the corporate market and the education market, I think it is worth remembering that Apple’s shortcomings in device management have been invoked by those in IT to foretell the ultimate failure of its initiatives before. They were proven wrong because customers (In this case, the people in the enterprise sphere they supported.) demanded that the iPhone be let in and because that market grew to be so significant that Microsoft believed supporting the iPhone would be to its benefit.

Despite the differences, Apple appears to be using a similar playbook here. They are not pitching their product to IT. They are pitching it to the teachers and parents who will request and then demand that iPads are considered for their schools. And as Fraser Speirs pointed out in a recent episode of the Canvas podcast[2], the wealthier, developed nations of the world can afford to deploy iPads (and/or Chromebooks) for all of their students.

Second, the focus on the current state of identity “ownership” by companies like Facebook and Google is, perhaps, less of a threat to Apple than it is an opportunity. My bet is that this is a space that is ripe for the kind of disruption Apple specializes in.

The current model for online identity focuses on a company knowing everything about you and using the information that is surrendered by the user for some purpose. In the case of Google, it is to create a user profile. In the case of Microsoft, it is to keep major companies attached to their services.

Apple is not interested in that game. They are interested in maintaining user privacy — a stance that has real value when providing services for children. So, what they would want and need to do is develop a system that creates some kind of anonymized token that confirms the user should be allowed access to a secured system.

They have this, of course, in Apple Pay.

What Apple now needs to do is figure out some way to have a secured token function within a shared-device environment. That is, I suspect, not trivial if they wish to keep TouchID and FaceID exclusively on device. A potential solution would be an education-model Apple Watch (or the equivalent of the iPod Touch in relation to the iPhone) that could match a student identity.

Again, there are a host of technical issues that Apple would have to resolve for a system like that to work. It would, however, be a much more Apple-like approach to securing identity than mimicking what Microsoft and Google do.


[1] This stroll down memory lane is from Ron Miller’s 20 January 2018 TechCrunch article “Apple’s Enterprise Evolution”.

[2] It is worth noting that Chambers and Speirs’ podcast series Out of School may have come to an end but it is still one of the best places to go to get a grip on the details of education deployments.


Dr. Matthew M. DeForrest is a Professor of English and the Mott University Professor at Johnson C. Smith University. The observations and opinions he expresses here are his own. You are very welcome to follow him on Twitter and can find his academic profile at Academia.edu.

Photo Libraries in the Abstract

I took a little time over the past few weeks to get my Photos library in order. This was a long-term project for a few reasons. First, I did not have a solid day to devote to it, so I engaged in the work here and there. Nor was there any rush, so I could attend to it a little at a time during lunch or in the evenings, as time permitted, without the pressure of a deadline.

Second, I needed to wrap my head around the idiosyncrasies of Apple’s Photos app — as one must do with any program. In most blog posts that address photo management, this would be the paragraph where I would discuss the app’s shortcomings. But, as Rene Ritchie reminds us, every computer and every app requires a series of compromises between promise, practice, and the possible. So, while I would like some more granular control over the facial recognition scanning (I would especially like the option to identify a person in a photo rather than just say that it is not a particular person when the scan misidentifies someone during the confirm-new-photos process), I accept it as one of those compromises. Yes, I recognize that, between Google and Facebook, there are plenty of images of me and my family out there. That doesn’t mean I want to add to them. Nor does it mean I think others are foolish for taking advantage of Google’s storage and computing power — so long as they understand the exchange they are making.

Second and a half, I spent a good bit of time in some obscure parts of the application and its library because I needed to convert a whole bunch of older videos. The Photos app does not play these older AVI and MOV files well (or at all), and I wanted them in my library rather than sending them off to VLC to play after I made it to a desktop or laptop computer. After some experimentation, I decided to convert them using Handbrake (on my Mac Mini) and then import the converted files and manually edit the date and location metadata.
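For anyone who wants to script that conversion step rather than feed Handbrake one file at a time, here is a minimal sketch of the kind of batch job I mean. It is an illustration rather than a record of my actual setup: it assumes HandBrakeCLI is installed and on your PATH, and the folder names and preset are placeholders.

import subprocess
from pathlib import Path

# Hypothetical folders -- adjust to wherever the legacy clips actually live.
SOURCE_DIR = Path("~/Movies/legacy-clips").expanduser()
OUTPUT_DIR = SOURCE_DIR / "converted"
OUTPUT_DIR.mkdir(exist_ok=True)

# Convert each AVI/MOV file into an MP4 that Photos can import.
for clip in sorted(SOURCE_DIR.glob("*.avi")) + sorted(SOURCE_DIR.glob("*.mov")):
    target = OUTPUT_DIR / (clip.stem + ".mp4")
    if target.exists():
        continue  # skip clips converted on an earlier run
    subprocess.run(
        ["HandBrakeCLI", "-i", str(clip), "-o", str(target),
         "--preset", "Fast 1080p30"],  # preset name varies by HandBrake version
        check=True,
    )

The date and location metadata still has to be fixed by hand after import, since the conversion produces fresh files.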

And third, I spent some time considering the philosophy underpinning my personal photo library.

For an English Professor, thinking about the philosophy of a library is something of an occupational hazard. A portion of my research involves considering primary texts and unpublished materials — ephemera, marginalia, letters, and the notes and notebooks of W. B. Yeats and his wife, George.[1] And while I doubt that the scholars of the future will be sifting through my digital life in the hope of developing a deeper understanding of my thought, there is a decent chance that descendants might be looking through these pictures to learn about their family’s past when I am no longer there to explain what they are pictures of, who is in the pictures, and why they are there.

What came as a conceptual surprise were the pictures that I remembered but were not there — not because of a corrupted file but because I remembered the image from social media rather than from my library. I started to download some of these from friends’ Facebook pages and bumped up against two problems. First, the resolution was less than impressive. What looked perfectly fine on a phone did not scale well when appearing on my television screen.[2]

The second was the philosophical question. The pictures may be of me and of events that took place at my home, but were they mine? I don’t mean in the sense of copyright. My friends shared these images publicly. I do mean that placing them in my digital library carries implications of a sort. A picture of friends at my house implies that I took the photo in a way that placing a printed photo in a physical album does not, because the digital file serves as both print and negative.

These are the kinds of questions asked by those who try to figure out the significance of a piece of paper in a folder in a Special Collections library: What does this letter tell us? What is this note written on the back? How does it situate the document in the context of my research question?

Many of you will likely find it a silly question. After all, pictures can be seen exclusively as personal mementos — images to invoke memories we might otherwise leave buried. And it is difficult to argue on behalf of some genealogically-minded descendent four generations in the future. But what we choose to put into our own collection matters and the act of collecting is driven, in part, by why we did or did not put them there.

In addition, my philosophizing has applicability beyond the data on my hard drive and floating in redundant cloud storage. My decisions about what is appropriate for my own library are the same kind of decisions I should be making about the files on social media. Those photos — some, but not all, posted by me — are part of someone else’s public library. Privacy controls let me control some of this, but not all. In essence, photos of me taken by others are only as private as the most permissive settings chosen by my friends. That shifts the boundaries of where public and private memory begin and end.

It also means that Apple, Facebook/Instagram/WhatsApp, Google, Twitter, and WeChat (to name only five) have become the librarians of my life and are handing out free library cards to those who wish to read the rough draft of the story of my life.

And it is a surprisingly detailed story. The pictures I was saving were from about a decade ago. The question “Who do those pictures belong to?” can only be answered after you decide why you are asking. The Terms of Service we agree to before we can post anything answer the legal questions the companies want to ask. They don’t answer the secondary questions, like whether or not you retain some kind of right to your images should someone try to resell them. And courts of law are singularly uninterested in my philosophical considerations, as the Terms of Service speak (appropriately) to needs rather than concepts. If we come to grips with this philosophy, however, then we will have a better sense of the story we will tell and the reason we want to tell it.


[1] If you want to know the specifics, click on the academia.edu link below and take a look at my scholarship.

[2] I have a Mac Mini that uses my television as a monitor. My initial use case for the Mini was a combination workhorse computer, for those times when my iPad was insufficient or inappropriate for a task, and as a media player. As the iPad and Apple TV have increased in their capability, it has increasingly become primarily a media storage device — the primary repository for documents, pictures, and the like — and backup hub.


Dr. Matthew M. DeForrest is a Professor of English and the Mott University Professor at Johnson C. Smith University. The observations and opinions he expresses here are his own. You are very welcome to follow him on Twitter and can find his academic profile at Academia.edu.

Still Looking through a Glass Darkly: Thoughts on Apple’s Education 2018 Event

Let me begin with an unequivocal statement: Anyone wishing to get a sense of the challenges before Apple in the education arena need look no further than Bradley Chambers’ well-reasoned and well-written response on 9to5Mac to the 2018 Apple Education Event. In his article, he clearly lays out the challenges facing Apple, as a hardware and service provider, and teachers as they try to implement solutions offered by Apple and others.[1]

And while I would not change a word, I would add one word to the title (which Chambers may or may not have written). I would argue that “Making the Grade: Why Apple’s Education Strategy is not Based on Reality” should read “Making the Grade: Why Apple’s Education Strategy is not Based on Today’s Reality”.[2]

Let me explain why.

As I wrote earlier, Apple included an interesting subtext in its event. It challenged the hegemony of the keyboard as the primary computing input device. In fact, there are no keyboards used in the entirety of the “Homework” video they produced to showcase the iPad in an educational setting — although the Pencil, I would note, appears on several occasions.

I don’t think this is Apple trying to hard sell the Pencil for the purpose of profit. If that were the case, we would not have seen the less expensive Logitech Crayon. Nor do I think it is an attempt to employ their famed Reality Distortion Field to deny the need for keyboards. Otherwise, we wouldn’t have seen the Logitech Rugged Combo 2 education-only keyboard.

What I do think is that Apple is trying to get the education market to rethink education’s relationship to technology.

Education, almost always, comes to technology as a tool to solve a known problem: How do we assess more efficiently? How do we maintain records? How do we process students in our systems? How do we crunch data? How do we produce a standard and secure testing environment? How do we make submitting assignments and grading assignments more efficient? How can we afford to deploy enough devices to make a difference?

That we ask these questions is no surprise. These are important questions — critically important questions. If we don’t get answers to them, the educational enterprise begins to unravel. And because of that, it is more than understandable that they form the backbone of Bradley Chambers’ article and the majority of the commentary in most of the responses I have read or listened to. They are the questions that made Leo Laporte keep coming back to his wish that Apple had somehow done more in Chicago when the event was being discussed on MacBreak Weekly.

What they are not, however, is the list of questions Apple was positioning itself to answer. As Rene Ritchie pointed out in his response to the event, Apple is focusing on creativity — not tech specs. And from what I have seen from a number of Learning Management Systems and other education technological products, it is an area that is very much underserved and undersupported by ed-tech providers.

Apple is trying to answer the questions: How do you get students to be engaged with the material they are learning? How do you get them to think critically? How do you get them to be creative and see the world in a new way?

Alex Lindsay made a similar point in the above-mentioned MacBreak Weekly episode when he said that he was interested in his children (and, by extension, all students) learning as efficiently as possible in school. To do that, students have to be engaged and challenged to do something more than the obvious provided in lowest-common-denominator solutions. Their future will also require them to do more than answer fill-in-the-blank and multiple choice questions on a test. They need to produce the kinds of projects that Apple put on display in Chicago.

Apple is offering the tools to do that.

I don’t think this is an idealized or theoretical response. If Apple wasn’t aware that these things were a challenge, they would not have made the teacher in the “Homework” video a harried individual trying to (barely) keep the attention of a room filled with too many students. Apple has hired too many teachers and gone into too many schools to not know what teachers are facing.

I would also point out that there is something to Apple’s answer. My daughter was in the room with me when I was watching the keynote. Her immediate response was that she wanted her homework to be like what she saw rather than what she did.[3]

Her school, I would point out here, uses Chromebooks. That she would jump that quickly at the chance to change should give anyone considering a Chromebook solution pause and make them look carefully at why they are making the choice they are.[4]

Nevertheless, Apple’s challenge is that it still has to address the questions Bradley Chambers and others have raised or their answers will only be partial solutions for educators.

Because Apple needs to answer these questions, I am very interested in the details of the Schoolwork app once it is released — even if it appears to be targeted at K-12 and not higher education.

I do think that we in education need to listen carefully to Apple’s answer, though. Our questions may be mission critical but they may not be the most important questions to answer. After all, if we are not first and foremost trying to answer “How do we get our students engaged?”, we have ceased to be engaged in education. And while I have a great deal of sympathy for my friends and colleagues in IT (and am grateful for their ongoing support at JCSU), they are there to support my students’ and my work — not the other way around. And every time we take a shortcut to make IT’s job easier,[5] as we have done too often when trying to answer how to assess student learning outcomes, we are decreasing our students’ chances for success.

For those placing long-term bets, however, I would point out one thing: Apple’s positioning itself as the source for solutions for generating curiosity and creativity is a better solution for education than Google’s positioning itself as the solution for how to create a new batch of emails for the next year’s worth of students.


[1] The most important section of the article, incidentally, is this section:

One of the things I’ve become concerned about is the number of items we tend to keep adding to a teacher’s plate. They have to manage a classroom of 15–30 kids, understand all of the material they teach, learn all of the systems their school uses, handle discipline issues, grade papers, and help students learn.

When do we start to take things off of a teacher’s plates? When do we give them more hours in the day? Whatever Apple envisioned in 2012, it’s clear that did not play out.

[2] I wouldn’t run the word today in bold and italics, of course. I am using them here so you can easily find the word.

[3] Or thought she did. When I asked her what stopped her from doing her homework in that manner, she thought and said she didn’t know how she would get it to her teacher. I told her that I could help her with that.

[4] It still might be the best choice, of course. These decisions are a series of trade-offs. But I would point out that if she begins to use an iPad at home to do things her classmates cannot with their Chromebooks and gains a superior education because of her engagement with the material as a result, the argument for deploying Chromebooks is significantly weakened.

[5] Making IT’s job easier, I would stress, is significantly different from asking if what is being proposed is technically and practically possible.


Dr. Matthew M. DeForrest is a Professor of English and the Mott University Professor at Johnson C. Smith University. The observations and opinions he expresses here are his own. You are very welcome to follow him on Twitter and can find his academic profile at Academia.edu.

Tip: Presenters Rejoice — A New Pages Feature for Faculty and Students

Back on October 18, 2017, I offered a tip on presenting with the iPad — creating a reading version of a speech/presentation in Pages that was formatted with a large enough font size to be easily read at a podium. I didn’t think it was rocket science then and don’t now.

With the latest release of Pages, however, the need to create a second copy is gone. Apple has programmed in Presenter Mode, which automatically resizes the font as I had described.


In addition, it switches (by default) to a dark mode, providing a high-contrast screen and reducing light for dimly lit rooms. It also has an autoscroll feature (with a modifiable scroll speed). The autoscroll starts and stops with a tap of the screen.


This is a really nice feature — one that will quietly make presenting much easier for iPad users (Thus far, I have not seen a parallel option appear in the MacOS version of Pages.). It also points to Apple’s method, as posited by Steve Jobs in an often quoted part of Walter Isaacson’s biography of him: “Some people say, ‘Give the customers what they want.’ But that's not my approach. Our job is to figure out what they're going to want before they do. I think Henry Ford once said, ‘If I'd asked customers what they wanted, they would have told me, “A faster horse!”' People don't know what they want until you show it to them. That's why I never rely on market research. Our task is to read things that are not yet on the page.”[1]


[1] This idea is going to be central to my upcoming reaction to Apple’s Education event. If you want some homework in advance of that post, you should take a look at Bradley Chambers’ well-reasoned and well-written response on 9to5Mac.


Dr. Matthew M. DeForrest is a Professor of English and the Mott University Professor at Johnson C. Smith University. The observations and opinions he expresses here are his own. You are very welcome to follow him on Twitter and can find his academic profile at Academia.edu.

Quick Thoughts: Today’s Apple Education Event

As I write, I am in the twilight zone between having read iMore’s live blog of today’s Apple Education event in Chicago and getting to watch it via the Apple Events app on the Apple TV. That is a strange place to write from, but one thing is clear enough to comment on:

Apple is challenging the keyboard’s hegemony.

To listen to most people in tech (and to see me using the Smart Keyboard now), the keyboard is the best and only way to interact with computing devices. With Apple pushing the Apple Pencil across more of the iPad line and, critically, into the updates of the iWork apps, this paradigm is being challenged on two fronts. Annotating works with a keyboard has always been a less than ideal experience. The Apple Pencil (and other styli) is a superior approach. With Siri, voice is another front.

I don’t think Apple is out to deprecate the keyboard entirely, but I do think that these two other options signal a real differentiation between Apple and others — Google especially. It feels much more like Apple is offering options to users rather than choosing one for us. In education, this is especially important. I don’t want a keyboard when marking up a paper. I want a Pencil and the best tools to annotate a document and direct a student. When I am walking between meetings, I want to ask Siri to remind me to do something — not stop and type it into my phone.

The real question is whether the accreditation industry will be ready to quickly accept that the era of keyboard-only input needs to shift to accommodate the best method for the moment rather than what is simplest for them.


Dr. Matthew M. DeForrest is a Professor of English and the Mott University Professor at Johnson C. Smith University. The observations and opinions he expresses here are his own. You are very welcome to follow him on Twitter and can find his academic profile at Academia.edu.

Peering into the Black Mirror: Tomorrow’s Apple Education Event

Recently, Rene Ritchie asked his Twitter followers what they wanted to see happen at Apple’s upcoming Education Event in Chicago. In response, I quipped that I wanted iTunesU to be TouchID enabled. (Fraser Speirs, who has rightly lamented iTunesU’s molasses-slow development, warned me away from asking for too fancy an update.)

I mention the exchange not so much to name drop but to calibrate the importance of the event for Apple. And if I can see it, I suspect Apple can as well.

Much of the commentary in the tech media has focused on the possibility of a No. 2 Apple Pencil and a semi-Pro iPad priced for the education market, as well as the need for Apple to produce management tools that would make it competitive with Google’s offerings.

I want to offer another possibility. If I were to say that it has been on my radar screen for about a year, it would imply that I had a clearer view of it than I do. I’d go with the metaphorical crystal ball but the iPad’s black glass slate seems to invoke images of Dr. John Dee’s Spirit Mirror, so I will go with that instead.

It was actually Fraser Speirs who, during a break in the Mellon Summer Institute on Technology and New Media, demonstrated the increasing capabilities available to those who wanted to create their own Swift Playground. As he showed me what was possible with some of the mapping features, I couldn’t help but notice how similar it felt to iBooks Author — Apple’s underutilized eBook authoring tool.

Perhaps it won’t be tomorrow, but I can’t help but think that Swift Playground development and iBooks Author are on a path to merge — perhaps bringing iTunesU and Apple Classroom along with them — into a new, more modern and more powerful platform. Such a move would possibly explain why Apple appears to be moving more slowly in this sector than they should.

Apple’s successes, I would argue, are based in looking carefully at the first causes of problems and developing well-grounded responses to them that leapfrog entire industries and paradigms rather than doing a quick patch that makes them appear up to date in the current news cycle. My bet is on them doing something along those lines — whether it is tomorrow or next year — in education.


Dr. Matthew M. DeForrest is a Professor of English and the Mott University Professor at Johnson C. Smith University. The observations and opinions he expresses here are his own. You are very welcome to follow him on Twitter and can find his academic profile at Academia.edu.

Get Comfortable: An Older Long Form Piece on Next Generation Texts

For those of you who missed it, Apple has purchased Texture. As Alex Lindsay observed on MacBreak Weekly, and as I have long suspected, Apple is positioning iBooks (reportedly to be renamed Books) to create the next generation of texts — texts sufficiently different from what we have now that we don’t have a name for them.

What follows is a longer piece that I wrote (and presented) in 2013. While some of the examples are no longer current (or no longer available — an issue that highlights a problem inherent in digital media), the overarching argument is, I think, still current and still points towards where we will eventually go.

I resisted the urge to update and significantly edit the text, other than adding some links.  

 

Ghosts in the Machines: The Haunting of Next Generation Texts

There are spirits, if not ghosts, in these new machines. That is what has made e-books so troubling to those of us in the literati and chattering classes. They foreshadow unavoidable change.

While early adopters and technophiles have debated utility, screen resolution, and processing power, even everyday users have found themselves confronted by an issue that used to only bother a subset of Humanities scholars: What is the nature of a book? As scholars, we don't usually use the term book, of course – partially as a way of avoiding the problems raised by such a question. "The text" usefully covers a wider range of material: short stories, novellas, novels, poems, collections of poems, plays, films, essays, epistles, audiobooks, and songs. They are all texts. We enter into conversations with them. We ask our students to engage with them. Among ourselves, we agree they are elusive but, in order to get on with the business at hand, we tend to set aside the complexities unless we are trying to be clever. They are the unknown but hinted at things we, Jacob-like, wrestle with.

Such grappling has usually taken place well out of the public eye and average readers, unless they have a troublesome relative who inconveniently holds forth on such topics over Thanksgiving dinner, are quite content to get on with it and read their books.

E-books, however, are beginning to make manifest the debate. The general reading public knows what a book is and what an e-book is and recognizes that they are subtly different. If they weren't, bibliophiles would not protest that they like the feel and smell of the book as they turn its pages when explaining why they don't want a Kindle.

But there is more to this visceral reaction than just the love of cardboard, pulp, ink, and glue.

Behind the text is the thing everyone is truly after: the story. It's the Muse-spun Platonic idea grasped at by the author and glimpsed in the imagination of the reader. The text invokes it. With care, it is woven together and bound by specific words into the thing we read that transmits the story to our mind, as Stephen King described in On Writing, via a kind of telepathy that ignores distance as well as time. (103-7)

Texts, then, transmit stories and the act of reading allows the mind of the readers to take them into their imaginations and there be re-visioned.

That such a process exists is something we all sense. The BBC series Sherlock receives high praise because it invokes and evokes what is essential in the original stories and recasts them in a new form and time. Sometimes the parallels are exact – the use of poison pills to murder in "A Study in Scarlet" and "A Study in Pink" – but they are played with in a manner that leaves the Sherlockian viewer of the series guessing – as in the case of the reversal of Rachel and rache as the correct interpretation of a fingernail-scratched message. Something, we sense, is importantly the same in Sherlock in a way it is not in other versions of Conan Doyle's detective – even those that are "correct" in period and dress.

At its core, this difference is the thing the general public wrestles with when they encounter e-books and will increasingly wrestle with as the thing, as yet unnamed, that will replace the book comes into being. These works make manifest old problems that have haunted books and the scholarship about them – and, perhaps, will begin to solve them. Obviously, the play's the thing in Shakespeare. What, however, is the play? The text on the page? The text when read aloud? The text that is performed? The performance itself? It is the problem Benjamin Bagby speaks of when discussing the difference between performing and reading Beowulf aloud, which feels "unnatural" to him:

 [Beowulf] has become for me an aural experience.... All of those things [The techniques of performance, including music and timing.] have nothing to do with printed word. And actually, when I actually go and read it from the printed page, I am deprived of all of my tools.... That whole feeling of being trapped in the book comes back to me. And What I have found over the years being chained to the printed word. That, for me, is the crux of the matter. ("Round Table Discussion")

Bagby's role is that of a contemporary scop – a shaper of words. Much like a blacksmith is sensitive to the choices he is making about the physical properties of metal as he hammers it into a set shape, Bagby is sensitive to the limitations the printed word places on a story. In some near future, however, next generation texts will allow performance to synchronize with the printed word. The performance – or more than one performance – will be available, perhaps filmed in the round as on the Condition One app, while the text helpfully scrolls along. Commentary, definitions, or analysis to aid a reader will be a tap away.

Such texts have already begun to appear: Al Gore's Our Choice; T. S. Eliot's The Waste Land app for the iPad; Bram Stoker's Dracula HD; and The Beatles' The Yellow Submarine e-book for iBooks, to name but a few. Indeed, it is important to note that these titles include both the chronologically new (Gore) and the less new (Stoker). The possibilities of next generation texts will more fully realize the ambitions of long-dead authors.

Take, for example, Stoker's Dracula. As traditionally published, it is standard text on a standard page. Yet as anyone who has read it closely sees, that is not what is invoked by Stoker. As is made clear in the dedication, Dracula is to be imagined as a scrapbook – a confederation of texts that build a single, meta-story out of their individual narratives:

How these papers have been placed in sequence will be made manifest in the reading of them. ... [A]ll the records chosen are exactly contemporary, given from the standpoints and within the range of knowledge of those who made them. (xxiv)

While the text that follows is normal typeset material, he notes that each of his characters produces their textual contributions differently. For example, Harker keeps a stenographic journal in shorthand (1), as does Mina Harker, née Murray – who also types out much of the final "text." (72) Dr. Seward records his medical journal on a phonograph. (80) Additional material includes handwritten letters (81), telegrams (82), and newspaper cuttings. (101)

While the citations here may seem overcrowded, the proximity of the referenced page numbers serves to demonstrate how rapidly Stoker has the material imaginatively gathered shift within his text. While Stoker required the imagination of his reader to change the forms, the Dracula HD iPad app re-visioning of the novel makes these shifts manifest.

A screen shot of the now-unavailable Dracula HD app.

It is equally true that James Joyce's Ulysses, with its elusive and allusive multimedia structure, evokes similar shifts in presentation – ones that played in Joyce's imagination but are potentially kept from the reader by the limitations of the page. His successor, Samuel Beckett, likewise plays with the confluences and dissonances of multimedia presentation in works like Krapp's Last Tape – a written play that is performed as a combination of live action and prerecording. Contemporary playwright Michael Harding adds video to recorded audio in his play Misogynist. And all of these artists produced their work long after William Blake published a series of multimedia tours de force that were so far ahead of their time that it took generations for them to receive wide-scale recognition. Even Gerald of Wales wanted his History and Topography of Ireland to be illustrated in order to help clarify his points.

While it is impossible to know if Gerald of Wales, Blake, Stoker, Joyce, Beckett, or Harding would have crafted their works differently if our technology were then available, it is clear that contemporary content creators have begun to do so. The Fantastic Flying Books of Morris Lessmore, a children's text originally crafted for the iPad, is most accurately seen as a next generation confederated text. Unlike Dracula, which presents itself exclusively through the printed word, The Fantastic Flying Books of Morris Lessmore is an interactive e-book, a short film, and a hardcover book that interacts with an app – each a different approach to William Joyce's imaginative children's story of a man and his love for books and the written word – rather than a single, discrete text.

William Joyce's creation is not the first next generation confederated text, of course. There have been others. The entire story of The Blair Witch Project, for example, was only partially revealed as a film. While the "missing students" marketing campaign is the best-known segment of the larger confederated text, the film's website offered information that changed the meaning of what reader-viewers perceived. When the filmmaker Heather Donahue records an apology to her and her fellow filmmakers' parents, saying "It's all my fault," viewers have no way of knowing that she is a practicing neo-pagan who has been sending psychic/magical energy to the Blair Witch for years in the hopes of contacting her. For Donahue, then, her guilt is based not only on insisting that they make the film and go into the woods but possibly on literally (and ironically) empowering the evil witch she mistakenly believed to be a misunderstood, proto-feminist Wiccan. (“Journal” 4-6)

This prefiguration of a unified confederated text across multiple forms of media is identical to the prefiguration of film techniques in nineteenth-century literature noted by Murray in Hamlet on the Holodeck:

We can see the same continuities in the tradition that runs from nineteenth-century novels to contemporary movies. Decades before the invention of the motion picture camera, the prose fiction of the nineteenth century began to experiment with filmic techniques. We can catch glimpses of the coming cinema in Emily Brontë's complex use of flashback, in Dickens' crosscuts between intersecting stories, and in Tolstoy's battlefield panoramas that dissolve into close-up vignettes of a single soldier. Though still bound to the printed page, storytellers were already striving towards juxtapositions that were easier to manage with images than with words. (29)

Of course, these techniques can also be found in The Odyssey and Beowulf. Nevertheless, Murray's point is unassailably correct. The imagination of the creator and, by extension, the reader or viewer, is primary and will always outstrip the technology available. The revolution, then, is not that confederated texts and the thing that will replace the book are coming but that they are becoming mainstream because they can, as a practical matter, appear on a single, portable device (an iPad) and in a unified delivery mechanism (an app or an e-book) instead of having to be viewed across multiple, non-portable devices (a movie screen, a VHS tape played on a television, a computer screen, and a book).

The collapsing of multimedia, confederated texts into a single reader experience is a revolution that will be as transformative as the one kicked off by Gutenberg's press. That revolution was not just about an increased availability of texts. What was more important than availability, although it is less often spoken of – assuming you discount the voices of Church historians who speak of the mass availability of the Bible and how it changed books from relics found chained in a church to something anyone could read and consider – is the change in people's relationship to the text. The increasing presence of books changed them from valued symbols of status to democratizing commodities – things that could be purchased by the yard, if necessary, and that anyone could use to change and elevate themselves and the world around them.

This changing relationship, I suspect, is what lies behind people's anxieties about the loss of the experience of the book. The book – especially an old, rare, valued book like The Book of Kells – is certain, as the Latin Vulgate was certain and some now say the King James Version is certain. Even five hundred years after Gutenberg inadvertently brought forth his revolution, Christians – Fundamentalist or not – get very uncomfortable when you talk about uncertainty of meaning in the Bible because the text, due to issues of translation, context, and time, cannot be fixed.

Despite their appearances, all books, or at least what is bound between their covers, are uncertain. The words can and do change from edition to edition. Sometimes, these changes are due to the equivalent of scribal error, as famously happened with The Vinegar Bible. Sometimes, the changes occur due to authorial intent, as happened with the riddle game played by Bilbo Baggins and Gollum in The Hobbit after the nature of the One Ring changed as Tolkien began to write The Lord of the Rings, explaining away the change in the back story provided in the trilogy's Prologue. (22) These changes, however, happen as you move from one edition to the next. Bits and bytes can change, be changed, or disappear even after the text is fixed by "publication" – as happened, in an extreme and ironic case, to one Kindle edition of 1984. (Stone)

That former certainty of a printed, fixed object was comforting and comfortable. You can go to a particular page in a particular book and see what's there. The e-book is more fluid – pages vanish with scalable text, returning us to the days of scrolls – prone to updates and capable of existing within a hyperspace full of links. It offends our sense of the text as a three-dimensional object – something it has, in fact, never been and never will be. Whether we foreground them or not, a web of hyperlinks exists for every text.

Texts themselves are, at minimum, four-dimensional objects. It's something scholars tacitly admit when they write about the events of a story in a perpetual present tense. They exist simultaneously within time – the part of the continuum where we interact with them – and outside of it – where the stories await readers to read, listen to, and think about them.

That fluidity will go further – stretching the idea of the future text beyond the limits we currently imagine to be imposed upon it. The Monty Python: The Holy Book of Days app, which records the filming of Monty Python and the Holy Grail, will interact with a network-connected Blu-ray disc – jumping to scenes selected on the iPad. In such a case, which is the primary text: the film on disc or the app that records its making and that is controlling the scene being watched? Indeed, the technical possibilities of e-books and confederated texts make the complex interplay between texts explicit rather than implicit. Tap a word, and its dictionary definition pops up. References to other texts can be tapped and the associated or alluded text is revealed, as is currently done to handle cross references in Olive Tree Software's Bibles by opening a new pane. Shifting between texts, which would benefit from a clear visual break, may eventually be marked by animation ("See: Tap here and the book rotates down – just the way it does when the map app shifts to 3D view – and the book that it alludes to appears above it! The other texts alluded to appear on the shelf behind them. Just tap the one you want to examine!"). While such a vision of the future may sound like so much eye candy, consider the benefits to scholarship and teaching of having the conversations between texts become more accessible.
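
For readers who want to picture how such tap-to-reveal cross references might be wired together, here is a minimal sketch in Python. It is purely illustrative: the class and field names are my own invention, not the actual code of Olive Tree's Bibles, The Waste Land app, or any other product mentioned here.

```python
# A purely illustrative sketch of a confederated text's cross references.
# The class and field names below are hypothetical, not any real e-reader's API.

from dataclasses import dataclass, field


@dataclass
class CrossReference:
    span: str            # the tappable words in the reading text
    target_title: str    # the text being alluded to
    target_locator: str  # where in that text the allusion points (lines, chapter, etc.)


@dataclass
class Passage:
    text: str
    references: list[CrossReference] = field(default_factory=list)

    def reveal(self, tapped_span: str) -> str:
        """Return what a reader might see in a new pane after tapping a linked span."""
        for ref in self.references:
            if ref.span == tapped_span:
                return f"{ref.target_title}, {ref.target_locator}"
        return "No linked text for this span."


passage = Passage(
    text="A passage whose opening line echoes an earlier poem.",
    references=[
        CrossReference("opening line", "The Waste Land", "lines 1-7"),
    ],
)

# Tapping the linked span surfaces the alluded text instead of leaving the allusion implicit.
print(passage.reveal("opening line"))  # -> "The Waste Land, lines 1-7"
```

The point of the sketch is simply that, in a confederated text, the allusion becomes data a reader can act upon rather than knowledge the reader must already possess.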

Because they could be made photorealistic (facsimile editions made for the masses, much as pulp paperbacks offered classics – alongside the penny dreadfuls – to everyone), critical and variorum editions could bring a greater sense of the texts being compared than our current method of notation does. Indeed, as The Waste Land app shows, such scholarly apparatus can include the author's manuscripts as well as the canonical text. These can be produced for significantly less than print editions. The published manuscript facsimile copy of The Waste Land is listed at $20 (almost $400 for hardcover – a bargain compared to the facsimile Book of Kells). The app, however, is $14 and includes commentary and audio and visual material – readings by Eliot and others. The Biblion series, by the New York Public Library, is even more ambitious – especially the volume focusing on Frankenstein (although a clearly identifiable copy of the novel itself is conspicuous in its absence) – and is a model for what a critical e-edition of the future might look like.

These technological flourishes and innovations will be increasingly pushed not just by designers and developers – although forthcoming flexible screen technology holds the promise of devices that could be issued, with relative safety, to schoolchildren. Changes in the market – our market – will begin to drive them. Already, iTunes U integrates with iBooks. As that platform, shared instruction via online courses, and Massive Open Online Courses (MOOCs) begin to grow and push on the academy, innovations in pedagogy will join accessibility, design, and "value added" as factors going into the creation of next-generation texts, like the adaptive learning advertised as a part of McGraw Hill's forthcoming SmartBook offerings.

This isn't a postmodernist denial of a fixed definition of anything. Nor is it an embrace of the Multiform Story posited by Murray in Hamlet on the Holodeck as the way of the future. The story, the text, the book, and the "one more thing" that is slouching towards Cupertino to be born are all defined, concrete things. While the narratives created by a game may vary in detail (Do you turn to the Dark Side or not in the latest Star Wars video game?), the narrative skeleton – the conflict that drives the character to and through the choice towards whichever resolution – remains constant. Without such a frame, the Kingian telepathy that produces narrative and the ability of those participating in that narrative to have a shared experience with others is impossible and will remain impossible until artificial intelligence advances far enough for computers (or their descendants) to share in our storytelling. The issue arises from our desire to conflate them into a single thing. For centuries, they could be conveniently spoken of in one breath. With the e-books that will be, they can no longer be conflated with the same ease.

Nor is this merely prognosticating a future that is already here. Criticism necessarily follows created content – wherever that content might lead. While paper may have cachet for some time to come, it is inherently limiting. This article, for example, incorporates a still image. An iBook version might include moving images, sound, and hyperlinks to other texts, additional material in an iTunes U class, and other apps. In fairness to paper, such an edition would also be harder to read in brightly lit settings and impossible to read after the battery ran down.

The core reason that, amidst these changes, things will remain the same is that – with the possible exception of the postmodernists – what motivates us to explore the issues inherent in the texts before us is the stories they convey. Whatever the medium, be it papyri, pulp, or LED, the story rivets us and invites us to immerse ourselves in it – perhaps to the point of trying to learn the mechanics behind the curtain that keep us spellbound.

What we as scholars do, then, will have to adjust. Our mainstream critical apparatus, and the frame of our discipline, is inadequate for the coming task. Dracula HD may provide a greater sense of verisimilitude than a traditional novel, but this push for verisimilitude has meant liberties were taken with the text – small additions and deletions that make it slightly different from the canonical novel. The text of an iBook edition of Dracula may be canonical but it's also scalable, making page references meaningless.

We will also have to learn how to talk about confederated texts. Some techniques will come from what is still relatively new – the language of film criticism, for example. Other moves will come from reincorporating into mainstream criticism what everyone once knew – the language of the Medievalists who have to discuss manuscripts and of the art historians who still work with the way the image influences the viewer.

And then there are the things we do not yet see, which will require entirely new modes of thought and reflection. We study narrative and storytelling as a part of our discipline. With the increase of computing power, the old "Choose Your Own Adventure" book format is growing up quickly, forming a new genre of literature, or something like literature – a concept posited in Murray's Hamlet on the Holodeck over a decade ago. Will we be the ones to address how narrative flows in the twilight world between books and games? What will we have to do differently when we are dealing with stories whose outcomes become different with different readers not because of their responses to a fixed text – the reactions Byron feared when he sent his works out into the world – but because they cooperate with the characters and creators in fashioning the story itself? Or should we, as Murray posits in "Inventing the Medium," surrender these creations to a new field – New Media Studies in a Department of Digital Media – and go the way of the Classics Departments that many of us have watched be shuttered or absorbed into our own departments?

And if we exclude such forms of storytelling, how are we then choosing to self-define our profession? Are we content to surrender the keys to our garden to the publishing houses of the world – whether they be great or small? Is it the static nature of the text – the printed word bound between two covers – that we claim to value? If so, how do we continue to justify researching the manuscripts of writers? Whom do we ask for the canonical version of any work that saw editorial revision over its life – whether those changes were overseen by editors, literary executors, or the artists themselves?

The interactivity made possible by the iPad not only challenges the definition of the objects we study, it challenges our assumptions about where the borders of our form of study lie. We no longer exclusively "cough in ink" as we imagine our Catullus. (Yeats 154, 7)

As first steps to this re-imagining of our discipline, we should consider the nuts and bolts of how we talk rather than of what we talk about. How, for example, do you properly cite a passage in an e-book in a manner that does not lead to confusion? Do you make it up as you go along, as I did for the sake of an example, with my reference to Yeats' "The Scholars" (poem number, line number)? Should we list only the year of release for iPad apps – essentially treating them as books – or, given the ability to update them, should we list the month and day as well? Will we need to do the same with books, given that these, too, can now be updated?

While search tools and hyperlinks (or their descendants) may render some concerns moot, they will not fully resolve the issues until our journals become e-books or confederate themselves with the texts they examine. Even in these cases, however, they may not eliminate all of them. "Which 'Not at all' in The Waste Land," a future reader might find himself asking a scholar, "did you want me to weigh again?" And while that scholar has the ability to reference line numbers, those working with fiction and many kinds of drama will not. Such a process should not, however, be seen as an exercise in pedantry. How we choose to record our sources and cite them is not just a roadmap for those who follow what we write. It marks what we consider essential information – what we value in our sources. It will help us to get a greater sense of what we wish to preserve and enhance in next generation texts, much as Andrew Piper's Book Was There attempts to assay what it is we value in the book through an almost free-associative exploration of the words and metaphors that surround and support the book.

An even more challenging shift will be in our most fundamental relationship to the text, which we currently imagine as a private experience. While we may hold conversations with the text within our minds, the text itself remains a static thing – a fixed object that we react to. In essence, the conversation is one way. The adaptive textbooks being developed by the major textbook publishers will make that interaction two-way. In short, the book will read us as we read it. While this may be a boon for learning, it is not an unalloyed one. Because they reside on a device that is always connected, these books can communicate what they learn about us to their publishers and marketing partners. Or a government. While such arrangements can bring us greater convenience and security, they do so at a cost. And while I acknowledge that there is a loss of a kind of privacy, we should not forget that targeted advertising is nothing new. Remington Arms is not going to place its advertisements in Bon Appétit. Likewise, Amazon's recommendations are not so far removed from the book catalogues found in the back of Victorian novels. Indeed, William Caxton, the first English printer, made sure to mention his backer's high opinions in his prefaces – as he did in his second printing of The Canterbury Tales – and advertised his publications in hopes of driving sales. So the practice of using ads and reader reviews of one book that you like to try to sell the next has been with us for a very long time.

And yet, Big Data gives the appearance of a violation of privacy. While we may like the efficiency offered by such techniques (e.g., getting coupons for the things we want rather than things we don't), we prefer not to think about the mountain of data being compiled about each and every one of us every day of our lives. Nor do we like to think of our books coming to us as a part of a business, although the major publishing houses are nothing if not businesses – businesses desperate to know what we want to read so that they can sell us more of the same. That they can now use our books to mine for information feels like it is crossing a new line – even if it is not. After all, how many of us have willingly participated in an offer that gave us access to coupons based on the number of books we bought at a store – one that assigned us a traceable number? Bought a book online from Amazon or Barnes and Noble? Or became a regular enough customer that a small, independent bookstore owner or friendly staffer could recommend books that we might like? Because the last of these involves actual human contact, it is more intimately associated with us as individuals than the algorithm-generated, aggregation-based results of an Amazon. But we are not yet ready to trust the machine and whatever sinister, calculating, faceless figure we imagine to be controlling it.

In short, we may want texts that in some way touch us. At the same time, we want to read the text, but we do not want it to read us. We want to find books but not let those same books be used to locate us. We wish to classify a text by genre but not let it place us into a category. We are willing to give ourselves to a book – to lose ourselves so deeply in it that we cease to be – but we do not want it to give us away.

We want all the advantages of our love affair with reading to remain without giving the innermost, anonymous part of ourselves away.

We don't want books to betray the secrets we offer them.

And given the company these next generation texts keep, perhaps there is cause for fearing such betrayal. Goodreads is now owned by Amazon and spends too much time talking to Facebook – and it deals primarily with traditional texts. But, ultimately, we will control how much we let these texts tell others about us. It will be up to us to check our applications' settings.

These next generation texts will not change the core of what we study – although they may challenge many of the assumptions underlying the critical approaches we use when coming to a text. They will also ask us to consider revising what we consider a "legitimate" text and "legitimate" means of publication – a distinction those approaching tenure in a shrinking publication market must face with a certain anxiety. Yet stories have always resisted such categories when they are applied too narrowly or exclusively. As such, our frustrations and fears may tell us more about ourselves than about the stories we purport to be concerned with. In that regard, next generation texts may be the best thing that has happened to our profession in some time. They will force us to confront what our purpose is by making us figure out exactly what it is that we are studying and why we choose to study it.

References

"A Study in Pink," Sherlock. Dir. Paul McGuigan. Perf. Benedict Cumberbatch and Martin Freeman. Hartswood Films/BBC Wales/Masterpiece 2010.

"Apology" The Blair Witch Project. Dir. Daniel Myrick and Eduardo Sánchez. Perf. Heather Donahue, Joshua Leonard and Michael Williams. Lion's Gate 1999. http://www.youtube.com/watch?v=2m_lqGnLtWA&feature=youtube_gdata_player 22 November 2012.

Azevedo, Alisha. "10 Highly Selective Colleges Form Consortium to Offer Online Courses," The Chronicle of Higher Education. 15 November 2012. http://chronicle.com/blogs/wiredcampus/10-colleges-will-offer-online-courses-for-participants-in-study-abroad-programs/41070?cid=at&utm_source=at&utm_medium=en 25 November 2012.

Beatles, The. The Yellow Submarine. Subafilms, Ltd. 2011. iPad App.

Beckett, Samuel. The Complete Dramatic Works of Samuel Beckett. New York: Faber and Faber 2006.

Bible+. Olive Tree Bible Software 2012. iPad App.

Biblion: Frankenstein: The Afterlife of Shelley's Circle. The New York Public Library 2012. iPad App.

Blake, William. The Complete Poetry & Prose of William Blake. New York: Anchor Books 1998.

Bonnington, Christina. "Flexible Displays Landing in 2012, But Not in Apple Gear," Wired. 16 May 2012. http://www.wired.com/gadgetlab/2012/05/apple-flexible-displays/. 25 November 2012.

The Book of Kells. X Communications. 2013. iPad App.

Condition One. Condition One, LLC. 2012. iPad App.

Doyle, Arthur Conan. "A Study in Scarlet," The Complete Sherlock Holmes. New York: Bantam 1986.

Duhigg, Charles. "How Companies Learn Your Secrets," The New York Times Magazine. 16 February 2012. http://www.nytimes.com/2012/02/19/magazine/shopping-habits.html?pagewanted=1&_r=1&hp. 14 May 2013.

Eliot, T. S. The Waste Land. Touch Press, LLP and Faber and Faber 2011. iPad App.

Fain, Paul. "Establishment Opens Door for MOOCs," Inside Higher Ed. 14 November 2012. http://www.insidehighered.com/news/2012/11/14/gates-foundation-and-ace-go-big-mooc-related-grants 25 November 2012.

"Gerald of Wales," Melvin Bragg. In Our Time. BBC 4. 4 October 2012. http://www.bbc.co.uk/programmes/b01n1rbn

Gore, Al. Our Choice. Push Pop Press 2011. iPad App.

Harding, Michael. The Misogynist in A Crack in the Emerald: New Irish Plays. Ed. David Grant. London: Nick Hern Books 1995.

"Journal" Blair Witch Wikihttp://blairwitch.wikia.com/wiki/Heather_Donahue%27s_Journal 22 November 2012.

Joyce, James. Ulysses. New York: Vintage 1990.

Joyce, William. The Fantastic Flying Books of Morris Lessmore. Moonbot Interactive 2011. iPad App.

King, Stephen. On Writing: A Memoir of the Craft. New York: Pocket Books 2000.

Kolowich, Steve. "Elite Online Courses for Cash and Credit," Inside Higher Ed. 16 November 2012. http://www.insidehighered.com/news/2012/11/16/top-tier-universities-band-together-offer-credit-bearing-fully-online-courses 29 November 2012.

Monty Python and the Holy Grail. Dir. Terry Gilliam and Terry Jones. Perf. Graham Chapman, John Cleese, Eric Idle, Terry Gilliam, Terry Jones. Sony Pictures 2001. DVD.

The Monty Python: The Holy Book of Days. Melcher Media 2012. iPad App.

Murray, Janet. Hamlet on the Holodeck. Cambridge, MA: MIT Press 1998.

——, "Inventing the Medium," The New Media Reader. Cambridge, MA: MIT Press 2003.

Piper, Andrew. Book Was There. Chicago: University of Chicago Press 2012.

"Round Table Discussion," Beowulf. Dir. Stellan Olsson, Perf. Benjamin Bagby. Koch Vision 2007. DVD.

Stoker, Bram. Dracula HD: Original Papers Edition. Intelligenti, Ltd. 2010. iPad App.

——. The Essential Dracula. Ed. Leonard Wolf. New York: Plume 1993.

Stone, Brad. "Amazon Erases Orwell Books From Kindle," The New York Times. 17 July 2009. http://www.nytimes.com/2009/07/18/technology/companies/18amazon.html 22 November 2012.

Tolkien, J. R. R. The Hobbit. Boston: Houghton Mifflin Co. 1997.

——. The Lord of the Rings. Boston: Houghton Mifflin Co. 1987.

Yeats, W. B. "The Scholars," The Collected Poems of W. B. Yeats: Revised Second Edition. Ed. Richard J. Finneran. Scribner 1996. iBook.


Dr. Matthew M. DeForrest is a Professor of English and the Mott University Professor at Johnson C. Smith University. The observations and opinions he expresses here are his own. You are very welcome to follow him on Twitter and can find his academic profile at Academia.edu.

Animoji in the Classroom

At the end of February, Samsung joined Apple in offering its customers Animoji to play with on its Samsung Galaxy S9. I am far less concerned with the question of who got here first and who does it better (Both had to have been working on it for some time and are offering very different experiences.) than I am interested in the fact that both are now offering it in spite of the tech media’s dismissal of Apple’s offering as a gimmick used to show off its facial scanning technology — the kind of thing you play with the first couple of days then never use again.

Now, I am not saying that I have used Animoji after the first couple of days (I do have plans, however, to send more messages to my daughter once the dragon Animoji arrives.). I will say that I think it is too early to count this technology out — especially in the classroom.

Instead of thinking of Animoji as a fully developed feature, it would be better for us to consider it a proof of concept. 

Our smart phones are now capable of reasonably effective motion capture of the kind that used to require a Hollywood studio. No, our students will not be handing us the kind of things we have seen from Andy Serkis or Benedict Cumberbatch on the big screen any time soon. But if you look at the video clip of Serkis I have linked to here, you may notice that Apple’s Animoji are more finished than the first-pass motion capture shown of Gollum. That means the iPhone X can do more than the animation studios of Weta, circa 2012, could.

That is the level of technology now in our students’ pockets.

I could make some of the usual predictions about how students will use this: adding animated animals and speakers to their presentations; impersonating their friends and members of the faculty and administration; the usual sets of things. But that is seldom the way technology leaps forward. PowerPoint, for example, was initially developed to replace slides for business presentations, not for (sometimes badly designed) classroom lectures. Now, students arrive at university with a working knowledge of how to use PowerPoint to do a class presentation.

The students who will surprise us with how they can use Animoji are probably in middle school now. And before this sounds too far-fetched, consider that my daughter, who starts middle school next year, does not have a favorite television show she comes home to. She does, however, follow several Minecraft shows. Her current favorites include Aphmau and Ryguyrocky and his Daycare series. When Markus Persson created Minecraft, I would guess that building in options for people to make animated shows available via streaming was not one of the items on his to-do list.

What is predictable, however, is that the potential inherent in Animoji underlines the importance of wrapping our heads around how we approach multimodal communication. If we limit ourselves to the obvious use case of an Animoji-based presentation (say, the Panda and Dragon informing viewers about Chinese ecology), we are looking at helping students learn how to write copy, capture video from appropriate angles, and present verbally and non-verbally. Currently, those are skills taught in different disciplines (Composition — English, Videography — Film, Public Speaking — Communications and/or Acting — Performing Arts) housed in multiple departments. Beginning to work out the practicalities of these kinds of collaborations (Where will the course be housed? Can we afford to have a course like this team taught by three faculty? If not, where will we be able to find someone who can be credentialed to teach it?) now, rather than when the multimodal presentations start arriving and we are already too late, will offer a competitive advantage both to those graduating with the skills and to the schools that can offer training in those skills.


Dr. Matthew M. DeForrest is a Professor of English and the Mott University Professor at Johnson C. Smith University. The observations and opinions he expresses here are his own. You are very welcome to follow him on Twitter and can find his academic profile at Academia.edu.

Apple’s HomePod Short

Apple has released a video featuring its HomePod speaker. I hesitate to call it an advertisement, because it isn’t quite an ad. It is something closer to a short film or music video. If you haven’t clicked the link to watch it (And, if you haven’t, why haven’t you?), it is a performance directed by Spike Jonze — FKA twigs dancing to “’Til It’s Over” by Anderson .Paak. In the performance, she expands her narrow apartment at the end of a dispiriting day through her dance.

First things first: I enjoyed the art — both the music and the dance.

I did want to point out a not-so-subtle subtext to the video. Her narrow apartment expands through the music and her dance — an obvious nod to Apple’s description of the HomePod as producing room-filling sound, the kind of audio reproduction that makes you get up and move.

I think there is something else here, though — something near and dear to this English Professor’s heart. Apple has, on more than one occasion, explicitly stated that it tries to exist at the intersection of technology and the Liberal Arts and that technology alone is not enough. Recall Steve Jobs’ assertion, during the iPad 2 announcement, that those who just look at the “speeds and feeds” miss something important about a post-PC device. Currently, a lot of tech journalists are critiquing the HomePod because Siri doesn’t do as well as they want.

That is, ultimately, a “speeds and feeds” critique. 

Apple was not trying to manufacture the Star Trek computer with a HomePod. It was trying to manufacture a device that would make you want to get up and dance because the music was good enough to transport you.

While I have not been watching out for the reviews of the Amazon Echo or Google Home, I don’t recall tech journalists asking if these speakers were producing an experience that made you want to get up and dance after spending a long day in places that feel oppressive and confining — as the world around FKA twigs is made to appear. But if the HomePod offers people in those worlds that kind of experience, it will be far more valuable than a device that can more conveniently set two timers or tell you what the GDP of Brazil was in 2010.


Dr. Matthew M. DeForrest is a Professor of English and the Mott University Professor at Johnson C. Smith University. The observations and opinions he expresses here are his own. You are very welcome to follow him on Twitter and can find his academic profile at Academia.edu.

Quick Thoughts: Black Panther and Brigadoon

I should really know better than to do this. One should not comment on works they have not seen or reviewed. But what is a blog for if not for occasionally indulging in partially formed thoughts and tossing out ideas for others to follow up on?

Critical and popular response to Black Panther is overwhelmingly positive, and I do intend to watch the film — as soon as I find a moment and the right place for it in my triage list of backlogged books, films, and television shows. Indeed, I am really looking forward to it.

But even before doing so, I have noticed something about the way Wakanda is being positioned in popular culture. It is the place that should have been — an idealized African nation where native culture could develop without the oppression inherent in a century or more of European colonization.[1] The film then engages this Afrofuturist place with the problem of the African American context through the character of Erik Killmonger.

As I have not seen the film and am not a longtime reader of the comic book adventures of the Black Panther, I have no intention of commenting on the specifics of the confrontation. Nor am I familiar enough with Afrofuturism to do more than invoke the name of the genre. I have been struck, however, by the strange contrast this forward-looking vision creates with the tradition of looking back in time and across the sea (with all the remembrances different cultures have of their immigration, whether it be forced, unavoidable, or seen as some kind of new start in the land of opportunity).

In Black Panther, we are shown an idealized culture that could have been. Strangely, this places it in the same tradition as the idealized nostalgia films of other hybridized American identities, whether it be The Quiet Man, Darby O’Gill and the Little People, or Brigadoon. But Black Panther sits uneasily in this tradition, at best — not because of the skin color of its actors, its geographical location, or its artistic quality (No one is going to claim Darby O’Gill is high art — even in jest.) but because it is looking forward while the above-mentioned films associated with Ireland and Scotland look back into an idealized past of the kind Eamon de Valera invoked in his 1943 address “The Ireland that We Dreamed Of”.

This is not an easy comparison/contrast to tease out. The three films that look to the Celtic nations of the British Isles were made in a much different era than Black Panther. And while the Atlantic Slave Trade, the Irish Potato Famine/An Gorta Mór, and the Highland Clearances were all tragedies, there is limited benefit in trying to directly compare them. Indeed, discussing them together only serves to alienate the American descendants of these tragedies from one another rather than building any kind of sympathy or understanding for what each went through — an alienation significant enough that I hesitated to write this post at all.

But I do think that there is an interesting question here — one that someone should tease out because the stories we tell to ourselves about ourselves matter. What is it that caused one group to ground its nostalgia in an idealized past and the other in an idealized future? What does each tell us about the way we imagine ourselves when we self-identify with these communities?


[1] It is not the first such imagining, of course. It is the same thought experiment that produced this map. It also sets itself against the Victorian and Edwardian imaginings of authors like H. R. Haggard and newer forays into imagined Africa like Michael Crichton’s Congo


Dr. Matthew M. DeForrest is a Professor of English and the Mott University Professor at Johnson C. Smith University. The observations and opinions he expresses here are his own. You are very welcome to follow him on Twitter and can find his academic profile at Academia.edu.

Quick Thoughts: Broken Links in Beowulf

In the first of his three Signum Symposia on J. R. R. Tolkien’s relationship to Beowulf, Professor Tom Shippey discussed the lists of people who are referenced in passing at a variety of points in the epic. In his discussion, he comes down on the side of those who argue that many of these references are allusions to now-missing stories. Unferth, for example, remains honored at Herot even though there is a reference to his having been involved in the death of his brothers — an incident that should have marked him indelibly with dishonor.[1] We don’t get the story, but the text seems to expect us to already know it.

While I was listening to the Symposium again on the way to work the other day, two metaphors for this loss came to mind. The first has a direct application to this blog: These stories are broken hyperlinks. As we drift towards next generation texts, allusions will increasingly appear in this technological form — links to click or words that, when tapped, will produce a box summarizing the connection.

To understand this change, however, we have to stop thinking about high literature as we think of it today. Yes, the Modernists alluded to other works all the time, as anyone who has looked at T. S. Eliot’s The Waste Land can tell you. But even though Eliot wants you to remember the Grail stories in general and Jessie Weston’s From Ritual to Romance in particular, this act of allusion is different from the kind of nod that the Beowulf poet, Chrétien de Troyes, and Tolkien engage in. Their allusions are less scholarly exercises and more the calling up of the kind of fan knowledge possessed by those who can tell you about the history of their favorite super hero or the long history of ships named Enterprise. It is the difference between connecting single stories to other ones and seeing the whole of a Matter, in the way we talk about Arthurian legend being the Matter of Britain and the tales of Charlemagne and his paladins being the Matter of France.

Beowulf can thus be imagined as our reading a partial comic book run.

This difference might help us with our students, who are more likely to possess the latter kind of knowledge about something (e.g., their favorite TV show or sports team) than the former. We might also benefit from spending some time considering whether the allusions within high literature, as it is imagined by the inheritors of the Modernist enterprise, aren't just a dressed-up form of what scholars sometimes dismissively call trivia.


1. I would mention to those not as familiar with Beowulf that kinslaying is at the center of the story. Grendel, for example, is a descendant of Cain. The Finnsburg episode vibrates with the issue. Beowulf himself silently refuses to walk down the road that might lead to such a possibility when he supports his nephew for the Geatish throne rather than challenge him.


Dr. Matthew M. DeForrest is a Professor of English and the Mott University Professor at Johnson C. Smith University. The observations and opinions he expresses here are his own. You are very welcome to follow him on Twitter and can find his academic profile at Academia.edu.

 

On the Need for a New Rhetoric: Part V — The Beginnings of a New Rhetoric

To recap, for those who are joining us now but not quite ready to review four blog posts of varying length: we are confronted with a sea change in writing — whether you look at it from the point of view of a practitioner or a scholar. Our means of production have changed enough to shift both composition and distribution. The old system, which involved separate, paper-based spaces for research, drafting, and production, has been replaced by digital spaces that allow all of these to take place within a single, evolving file. To use an old term, we all now possess an infinitely cleanable palimpsest, one that can incorporate audio-visual material alongside the written word and that can be instantly shared with others — including those with whom we might choose to collaborate.

This shift has not only changed the way we write; it necessitates a change in the way we teach writing and approach the idea of composition.

Having raised the issue, I am obligated to provide some thoughts on the way forward. Before doing so, I wish to stress something: Although I have, like most English professors, taught composition and rhetoric courses, I am not a Rhet-Comp specialist. There are others who have studied this field much more closely than my dilettantish engagement has required. I suspect that the better answers about the merging of aural, oral, and visual rhetorics will come from one of them. That said, this path forward cannot begin without us addressing the tools of the trade.

We must begin to teach the tools alongside the process of writing. 

One of the first steps for any apprentice is to learn their tools — how to care for them and how to use them. Masters pass on the obvious lessons as well as the tricks of the trade, with each lesson pitched to the level of the student and focused on the task at hand. Those who teach writing must begin to incorporate a similar process into writing instruction. Indeed, the process described in Part II of this series relied on a tool set that was explicitly taught to students at one point in the past.

As much as I would like to say that this should be done within K-12, so that university professors like me could abdicate any responsibility for it, the reality is that this kind of instruction must take place at all levels and be delivered by faculty in a variety of disciplines. This breadth is demanded by the reality of the tasks at hand. A third grade English teacher will focus on a different skill set, writing style, and content than a university-level Chemistry instructor will. They will be engaging in different kinds of writing tasks and expect different products. Each, therefore, must be ready to teach their students how to create the final product they expect, and there is no magic moment when a student will have learned how to embed spreadsheets and graphs within a document.

This is no small demand to place on our educational system — especially upon composition faculty. Keeping up with technology is not easy, and the vast majority of those teaching writing are already stretched dangerously thin by the demands of those attempting to maximize the number of students in each class to balance resources in challenging financial times. Nevertheless, the situation demands that this become a new part of business as usual for us.

We need to adapt to the tools that are here rather than attempt to force prior mental frameworks onto them.

Those of us who were raised in the prior system might have students try to adopt a "clean room" approach to research — keeping separate files for research notes and the final document, for example — in order to replicate the notebook-typescript divide described before. There is a certain utility to this, of course, and there is nothing wrong with presenting it to students as a low-cost, immediately accessible solution to the problems inherent in the Agricultural Model. And this system will work well for some — especially for adult learners who were taught the Architectural Model. To do so to the exclusion of all other approaches, however, is to ignore the new tools that are available and to overlook the fact that students have their own workflows and ingrained habits they may not be interested in breaking. The options provided by Scrivener and Evernote, for example, may serve students' needs better. And while there is some cost associated with purchasing these tools and services, we should not let ourselves forget that notecards, highlighters, and the rest of the Architectural Model's apparatus were not free either.

We must be more aware of what the tools before us are for and apply that knowledge accordingly.

If all you have is a hammer, the saying goes, everything looks like a nail. The same metaphor applies to word processing. 

If you are word processing, the assumption is that you are using Word. For the vast majority of people, however, using a desktop version of Word is overkill. Most users do not need the majority of the tools within Word. This does not make Word a bad choice for an institution nor does it make Microsoft an inherently evil, imperialist company. Microsoft developed Word to solve a set of problems and address a set of use cases. 

The reason this observation matters is conceptual. Many institutions focus on teaching students how to use the basic functions of Word because it is a standard. Because the accounting and finance areas want and need to use Excel, it makes sense for the majority of companies to purchase licenses for the Microsoft Office suite. As a result, most people working within a corporate environment — regardless of operating system platform — will find Word on their computing device for word processing.

If all these users are likely to need, however, is the ability to change a font or typeface, apply a style, and control some basic layout (e.g., add columns and page or column breaks), there is no need for an instructor to focus on teaching Word. Instructors can focus instead on the task and the concerns that the faculty member is addressing (e.g., the appropriate format for the title of a book).

Yes, it will be easier to standardize on a device platform for instruction — especially since, as Fraser Speirs and others have pointed out, faculty need to have a common set of expectations for what can be expected of students and will often end up serving as front-line technical support.

That said, institutions should consider their needs carefully when it comes to purchasing decisions. For the vast majority of students at most educational levels, there is no difference between what they will do in Apple's Pages, Google's Docs, Microsoft's Word, or any of the open source or Markdown-based options, like Ulysses. The choice should be made based on the utility provided rather than a perceived industry standard. For long-form publishing, Word may be the best answer. If students are going to do layout incorporating images, Pages will be the stronger choice.

For some, these three points will feel sufficiently obvious as to make them wonder what we have been doing all these years. The simple enough answer is that we have been doing the best we can with the limited time we have. These recommendations are, after all, additions to an already overfull schedule. They are also changes in orientation: a focus on the tools of writing, rather than on the writing process alone, will be a change. For the reasons outlined in this series, however, I would argue that they are critical ones.


Dr. Matthew M. DeForrest is a Professor of English and the Mott University Professor at Johnson C. Smith University. The observations and opinions he expresses here are his own. You are very welcome to follow him on Twitter and can find his academic profile at Academia.edu.

 

On the Need for a New Rhetoric: Part IV — The Changing Text

In my last post, I considered the change from an architectural model of composition to an agricultural model of composition. If we were facing just this change, it would be sufficient reason to change our approach to writing. This shift, however, is not the only transformation occurring. The capabilities of the texts we are creating are changing as well. 

Back when research was done on 3” x 5” notecards, the medium of final production was paper — whether the final product was hand-written using a pencil or pen, typed, or printed using a black ink dot-matrix printer. Now, digital-first documents are printed — if they are printed — on color laser or ink-jet printers.

The key word in that sentence is if.

Whether it is the “papers” uploaded to class portals or the emails that have replaced interoffice memos, much of what now comes across our desktops is digital-only. A growing number of these documents include more than just text. They include images, video and audio, and hyperlinks that extend the text beyond the borders of the file. These next-generation texts offer those composing them the ability to embed source material rather than summarize it. A discussion of how multiple meanings in Hamlet lead to multiple interpretations on the stage, for example, could include clips from different performances in order to demonstrate a point.

This is not the time to go over all of the implications of multimodal, next-generation texts.[1] It is enough for us to recognize that digital-only documents exist and require us to take them on board as we develop a new rhetoric — one that considers visual and auditory rhetoric and layout in addition to the written word.


1. The ability to hyperlink to sources and, at times, specific places within a source should force a reconsideration of citational methodology, for example. Current style guides assume a paper world rather than a digital one. 


Dr. Matthew M. DeForrest is a Professor of English and the Mott University Professor at Johnson C. Smith University. The observations and opinions he expresses here are his own. You are very welcome to follow him on Twitter and can find his academic profile at Academia.edu.

 

On the Need for a New Rhetoric: Part III — The Agricultural Model of Writing

In my last entry on this topic, I took us back to yesteryear and described to those younger than the 5 1/4-inch floppy disk how research was once taught. Whether students did all of these things or not, the overarching system for organizing research was propagated and taken up into the imagination of students as they left high school and went off to college and university and then on to grad school before returning to the classroom to teach.

Slowly, however, the tools changed. First, photocopying became available to anyone with enough quarters, and highlighters of different colors were grafted onto the architectural method. Computers became available and the internet provided more sources — sources that could be printed out — requiring more highlighters. As with the printing press, accurate reproduction of information became trivially easy, and index cards were replaced by three-ring binders with, for the more obsessively organized, dividers. Others gathered their printouts into loose piles of paper that joined the books stacked near the writer’s computer station as they worked.

Then computers became portable.

This slow change may seem like a small thing in this progression but it is, I would argue, a critical one. When a computer can be carried to the place of research, there is no need for a photocopy or printout. All that is required is to take the information and type[1] it into a file that is saved to memory — whether that memory is an 8”, 5 1/4”, or 3 1/2” disk; a spinning-platter hard drive; a USB thumb drive; a solid-state hard drive; or a cloud storage service, like Dropbox or iCloud.

As anyone who has taken notes in a word processor can tell you, there is a huge temptation to begin evolving the notes into a draft rather than creating a new draft document. Indeed, it is logical to do so. All of the research is there, ready to be re-ordered through the magic of cut-and-paste and then written about while the referenced material — whether it be quotations or notes — is onscreen awaiting response. This approach keeps the material fresh in the mind of the writer while enabling them to take advantage of the benefits of digital composition.

I suspect that will sound familiar to many reading this. I also suspect it is the method most of us now use when composing — whether we were trained in an architectural model of research or not. 

This approach to composition can be seen as an agricultural model of production — one where ideas and information are seeded into a document and then organically grown as the work-in-progress develops throughout the research and writing process.

For all of its advantages, and those advantages[2] are significant, there are major limitations to this approach. Skipping the step of transcribing information from one document (say, a notecard) to another promotes accidental plagiarism by increasing the chance that a note will inadvertently become separated from its source. It also makes it less clear to the writer who crafted a particular turn of phrase as notes are transformed into the draft. In addition to the problem of plagiarism, growing a paper (rather than building it) trades the organizational system that is created when a writer has to formulate multiple outlines to order their research and writing for the less rigorous world of headings scattered through a draft. It also skips the step where the organization of a writer’s ideas is initially tested before the first word of the first draft is written.

These limitations are less a function of the tools at our disposal than of the absence of a method that embraces those tools. To push the metaphor, we are at the hunter-gatherer stage of the agricultural model, where means of storage have been developed but we have not yet fully developed a system for cultivation.

Our means of production has changed. Our pedagogy has not.

This is the core reason that we need to create a new rhetoric — one that accounts for the new method of textual creation that digital composition allows and that embraces the ability to incorporate media into what was once a static document.


1. Typing, of course, is no longer the only way notes are taken. As anyone who has seen lines forming at the whiteboard at the end of a lecture or meeting, or watched people hold their cell phones and tablets up to get a quick shot of a presentation slide, can tell you, photography has become as important for information capture as note taking.

2. To name a few of the advantages: portability of the research once it has been done, the ubiquity of high quality electronic resources, and a superior means of production. No matter what the hipsters and typewriter aficionados tell you, word processing is superior to typing for the vast majority of users most of the time. This does not mean they are wrong about what works for them — there are times when I feel compelled to write out ideas or to-do lists using a fountain pen. But I am under no illusion that there is great benefit in putting that to-do list into Reminders, Todoist, or OmniFocus. I just wish I could decide which of these digital tools works best for me.


Dr. Matthew M. DeForrest is a Professor of English and the Mott University Professor at Johnson C. Smith University. The observations and opinions he expresses here are his own. You are very welcome to follow him on Twitter and can find his academic profile at Academia.edu.

The Algorithmic Vulnerability of Google and Facebook

New York Times reporter Rachel Abrams wrote this week about her recent attempts to convince Google that she is, in fact, still alive. It is a damning article for a company that has, at its core, an information retrieval mechanism driving its advertising revenue stream. If, after all, users cannot trust the information a Google search provides, they will begin to go elsewhere, and the user databases those searches generate will degrade and lose value for advertisers looking to target an audience.

Abrams’ article should not only be a wake-up call for Google. It exposes a key vulnerability for Facebook and other algorithm-based companies.[1]

While computing power has increased and artificial intelligence has improved dramatically, we have not outstripped the need for human curation.

John Adams’ assertion to the jury weighing the guilt of the British soldiers involved in the Boston Massacre that “Facts are stubborn things” is no less true today. And despite the protests of those who don’t like to have their own biases and world views challenged, there is a difference between reputable and disreputable sources of information. When a company takes upon itself the role of an information aggregator, as Google has, or stakes out a position as a new public square, as Facebook has, it has an ethical and moral obligation to do so in good faith — even in the absence of a legal requirement to do so. Yes, reasonable people can interpret facts differently. Unreasonable people — and opportunists — embrace the factually wrong.

More importantly, however, self-interest should drive them to act in good faith. Stories like Abrams’ highlight a credibility gap — one that competitors will exploit. Google was once an upstart that succeeded because it outperformed AltaVista and eliminated the need for search aggregators like Dogpile. Google, too, can be supplanted if its core offering comes to be seen as second best because its search results can no longer be trusted.

Abrams’ story points to a need for Google and others to rethink their curation strategies and base them on something other than short-term Return on Investment. There are indications that this is beginning to happen, but Google’s tendency to rely on temporary workers is, ultimately, a losing strategy — one that doubles down on the primacy of the algorithm rather than accepting the need for humans trained in information literacy and able to discern between the correct and the incorrect. These curators must have the authority and ability to make corrections to algorithmically generated databases before those databases become useless to users — whether those users are looking for a holiday recipe or looking to sell ingredients to cooks.


1. That the two obvious companies to comment on here are focused on advertising may hold a hint to an underlying issue — the warrant of the unspoken argument. The focus on generating information for advertisers has distracted these companies from the need to provide quality information for their users. The need to generate revenue is clear and understandable. They are not running charities, and protesting their profit motive sounds as strange to my ear as the dismay of those who discovered that Academia.edu might be trying to make money. The calculation with all such services must be the value proposition (is the user being provided with a service worth the cost?), and the service provider must make sure it does not lose sight of its users as it focuses on its profit source. The moment services like Google and Facebook become more about advertisers than end users, they open themselves up to competitors with better mousetraps — ones that will provide more value to the advertisers.


Dr. Matthew M. DeForrest is a Professor of English and the Mott University Professor at Johnson C. Smith University. The observations and opinions he expresses here are his own. You are very welcome to follow him on Twitter and can find his academic profile at Academia.edu.

 

On the Need for a New Rhetoric: Part II — The Architectural Model of Writing

In my last post, I offered an assertion without exposition: that writing on a computer or mobile device screen has significantly changed the model we use for creating arguments and composing the form they take because writers have moved from an architectural model of production to an agricultural one. In this post, I will explain what I mean by an architectural model of composition.

Readers of a certain age will remember research in a time before the ubiquity of the internet. In such days of yore, the well-equipped researcher went to a library armed with pencils and pens of varying colors, at least one notebook, and 3” x 5” index cards sorted into multiple stacks held together by rubber bands.[1]

For those of you too young to have ever seen such a thing, or too old to remember the system’s details[2], here is how all of these pieces worked together.

To keep things organized, you started with a research outline — one that roughly laid out what you were looking for. This was as much a plan of action as it was an organizational system. It had a hypothesis rather than a thesis — the idea or argument you were testing in your research.

Once in the library, you went to a card catalog — a series of cabinets holding small drawers that contained cards recording bibliographic information. One set of cabinets was alphabetized by author. Another set of cabinets held similar cards but they were organized by subject. Each card also recorded the Library of Congress or Dewey Decimal number that corresponded to the shelf location of the book in question.[3]

If you were looking for more current material, you consulted a periodical index, such as the Readers’ Guide to Periodical Literature, which was published annually and listed the articles published in magazines. With that information, you could request from the reference librarian the bound volume of the periodical, or the microfilm or microfiche to load into the readers.

For each source you referenced, you carefully recorded the full bibliographic information onto one note card and added it to your growing stack of bibliographic cards — which, of course, you kept in alphabetical order by author. Each card was also numbered sequentially, in the order in which you examined the sources.

These were the days before cell phone cameras and inexpensive photocopiers. You took handwritten notes in a notebook and/or on index cards. For each note you took, you noted the number of the source’s bibliographic card in one corner[4] and the note’s place in your organizational outline in another corner. To keep things as neat as possible, each card contained a single quotation or single idea. Following the quotation or note, you listed the page number. Finally, you would write a summary of the note along the top of the card to make it easier to find the information quickly when flipping through your cards.

You did this for every note and every quotation.

At the end of the day of research, you bundled up your bibliography cards in one stack and your notes in a second stack — usually in research outline order though some preferred source order.

When your research was complete, you created your thesis, which was a revision of your hypothesis based on what you had learned in your research. You then created an outline for your paper.[5] Once the outline was ready, you went back through your notecards and recorded each card’s place in the paper outline in a third corner — usually the upper right hand corner. (For those looking to track revisions to the structure or make certain pieces of information stand out, a separate ink color could be used.) You then stacked the cards in the order of your outline and proceeded to write. As you came to each point you wished to make, you wrote out the information or quotation by hand (you would not have typed a first draft), noting the source where and when appropriate.

Then you revised and edited until you were ready to type the paper. If you were among the fortunate, you had a typewriter with a correction ribbon or had access to correction strips. If not, you got used to waiting for White Out to dry, lest you be forced to retype the entire page.

From this description, I hope you can see why I refer to this system as an architectural model. You gather raw material, shape it into usable units of standardized sizes, and then assemble those units according to a kind of blueprint.

I suspect you can also see the sources of many of our current digital methods. To put it in the language of contemporary computing, you created an analog database of information that you had tagged with your own metadata by searching through sources that were tagged and sorted by generic metadata. The main difference is that the database of information was stored on 3” x 5” cards rather than within, for example, spreadsheet cells.
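
To make the analogy concrete, here is a minimal sketch in Python of a note card treated as a database record. Everything in it is my own illustration rather than anything the card system itself prescribed: the class names, field names, and sample values are hypothetical stand-ins for the corner codes, summary heading, quotation, and page number described above.

```python
# A sketch of the note-card system as a tiny in-memory database.
# All names here are illustrative; nothing below refers to a real tool.
from dataclasses import dataclass

@dataclass
class BibCard:
    number: int         # assigned sequentially as each source was examined
    author: str
    title: str

@dataclass
class NoteCard:
    bib_number: int     # corner one: points back to the bibliography card
    research_code: str  # corner two: place in the research outline
    paper_code: str     # corner three: place in the paper outline (added later)
    summary: str        # the heading written along the top of the card
    quotation: str      # the single quotation or idea
    page: int

bibliography = [BibCard(number=1, author="Author, A.", title="An Example Source")]
notes = [
    NoteCard(bib_number=1, research_code="II.A", paper_code="I.B",
             summary="Sample heading", quotation="A single quotation or idea.",
             page=1),
]

# "Stacking the cards in outline order" becomes a sort on the paper-outline code.
# (A plain string sort is enough for this sketch; real outline numbering would
# need a smarter comparator.)
drafting_order = sorted(notes, key=lambda card: card.paper_code)

# Flipping through the stack by summary heading becomes a filter.
for card in (c for c in drafting_order if "heading" in c.summary.lower()):
    source = next(b for b in bibliography if b.number == card.bib_number)
    print(f'{card.paper_code}: "{card.quotation}" ({source.author}, p. {card.page})')
```

The point of the sketch is only that the corner codes, the summary headings, and the sequential bibliography numbers were doing the work that keys, tags, and cross-references do in a digital system; the cards were the cells.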

So long as computers were fixed items (desktops located in labs or, for the well-to-do, on desks in offices or dorm rooms), this model persisted. With the coming of the portable computer, however, a change began to occur, and writers shifted from this architectural model to an agricultural one without changing many of the underlying assumptions about how research and writing worked.


 

  1. Those exceptionally well prepared carried their index cards in small boxes that contained dividers with alphabetical tabs.
  2. I hasten to note that this is what we were taught to do. Not everyone did this, of course.
  3. What happened next depended on whether you were in a library with open stacks or closed stacks. In open stack libraries, you were able to go and get the book on your own. In closed stack libraries, you filled out a request slip, noting your table location, and then waited while the librarian retrieved the work in question. The closed stack model is, of course, still the norm in libraries’ Special Collections sections.
  4. Some preferred to include the name of the author and title of the work. This could, however, become cramped and bump into the heading of the note if it was placed in one of the upper corners. For this reason, most people suggested placing this information in one of the lower corners of the card. I seem to recall using the lower right corner when I did this and placing the note’s location within the organizational outline in the lower left corner.
  5. Some continued to use notecards in this step. Each outline section was written on a card, which allowed them to be shuffled and moved around before they were cast in stone. 

Dr. Matthew M. DeForrest is a Professor of English and the Mott University Professor at Johnson C. Smith University. The observations and opinions he expresses here are his own. You are very welcome to follow him on Twitter and can find his academic profile at Academia.edu.