The Fallacy Inherent in Chasing AI Plagiarism

Many years ago, Richard Pipes, a librarian at Wingate University, revealed his secret to successfully tracking down the sources used by students who plagiarize. He went to the most obvious source, he explained, because if a student were willing to put in the effort to find an obscure source, they would be willing to do the hard work of writing a paper.

That was a different age, of course. The hard copy books and periodicals on the library shelves were still as accessible to students as — if not more accessible than — those found on the internet.

Nevertheless, his logic still holds true. A student who is actively trying to plagiarize their way out of an assignment is different from one who does not understand how and when to cite a source -- whether that confusion arises from poor preparation in their prior education or is, in part, culturally determined.

Right now, I want to set aside those students who want to do it correctly (or are at least willing to do it correctly).

Right now, I want us to consider those who are plagiarizing intentionally with malice aforethought.

Catching these students has always been and will always be a cat-and-mouse game. It is only when the problem is confronted in this light that practical approaches can be considered. For several years now, plagiarism detection tools have made the task of documenting their efforts easier.

But plagiarism detection services have always been hit-or-miss at best and actively problematic at worst, and things have not improved with the arrival of large language models.

For those of you hoping Turnitin will save you from the threat of an AI-generated paper, please know that it will almost certainly be one generation behind. At the time of writing, this means Turnitin believes it can identify work generated by ChatGPT 3.5 but is less certain it can detect work generated by ChatGPT 4.0.

It is possible for Turnitin to catch cases where ChatGPT 4.0 has been used, but it comes at a cost: it increases the odds of generating false positives.

Turnitin makes a point of talking about this risk on their pages devoted to AI and what they have written there is worth reading for those trying to wrap their heads around our new normal.

I would stress one point in closing, though -- something for those of you who are looking to the hills, waiting for Turnitin or something similar to arrive and solve your problems.

You are choosing to trust an AI with your work instead of engaging in the hard work of adjusting your pedagogy.

That formulation should give you some pause.

The Sky is Still Falling (Long Term)

Before returning to some of the technical and pedagogical issues involved with AI in the classroom, it is worth understanding some of the personal and personnel aspects of all this. Without understanding these concerns, a full appreciation of the existential threat AI presents to the academy in general and the professoriate in particular can get lost in the shuffle while people focus on academic dishonesty and the comedy that can ensue when ChatGPT gets something wrong.

A few data points:

It has not been long since a student at Concordia University in Montreal discovered the professor teaching his online Art History class had been dead for two years.

Not only are Deep Fakes trivially easy to create, but 3D capture tools are making it easy for anyone to make full-body models of subjects.

You can now synthesize a copy of your own voice on a cell phone.

We can digitally clone ourselves.

You can guess where this is going.

Many years ago (2014, for those recording box scores), I told a group of faculty that the development of good online teaching carried with it an inherent risk -- the risk of all of us becoming TAs to rock star teachers. When I explained this, I told my audience that, while I considered myself a good teacher, I had (as a chair) observed and (as a student) learned from great teachers.

I asked then and sometimes ask myself now: What benefit could I, and JCSU, offer to students signing up for my class that outweighed the benefit of taking an online class with that kind of academic rock star?

I still don't feel I have a compelling answer for that question.

Now, in addition to competing with the rock stars of the academy, there is a new threat. It is now simple enough to create an avatar -- perhaps one of a beloved professor or revered figure (say, Albert Einstein or Walter Cronkite) -- and link it to a version of ChatGPT or Google Bard that has been taught by a master teacher how to lead a class — a scenario discussed in a recent Future Trends Forum on “Ethics, AI, and the Academy".

How long until an Arizona State reveals a plan for working it into their Study Hall offering?

AI may not be ready for prime time because it can still get things wrong.

But, then again, so do I.

The pieces necessary to do that kind of thing have been lying around since 2011. Now, even the slow-moving academy is beginning to pivot in that direction.

ChatGPT: Fear and Loathing

I wanted to spend some time thinking through the fear and loathing ChatGPT generates in the academy and what lies behind it. As such, this post is less a well-written essay than it is a cocktail party of ideas and observations waiting for a thesis statement to arrive.

Rational Concerns

As I have already mentioned (and will mention again below), the academy tends to be a conservative place. We change slowly because the approach we take has worked for a long time.

A very long time.

When I say a long time, consider that some of the works by Aristotle that are studied in Philosophy classes are his lecture notes. I would also note that we insist on dressing like it is winter in Europe several hundred years ago — even when commencement is taking place in the summer heat of the American South.

While faculty have complained about prior technological advances (as well as how hot it gets in our robes), large language models are different. Prior advances -- say, the calculator/abacus or spell check -- have focused on automating mechanical parts of a process. While spell check can tell you how to spell something, you have to be able to approximate the word you want for the machine to be able to help you.

ChatGPT not only spells the words. It can provide them.

In brief, it threatens to do the thinking portion for its user.

Now, in truth, it is not doing the thinking. It is replicating prior thought by predicting the next word based on what people have written in the past. This threatens to replace the hard part of writing -- the generation of original thought -- with its simulacrum.

Thinking is hard. It's tiring. It requires practice.

Writing is one of the places where it can be practiced.

The disturbing thing about pointing out this generation of simulacra by students, however, is that too many of our assignments ask them to do exactly that. Take, for example, an English professor who gives their students a list of five research topics to choose from.

Whatever the pedagogical advantages and benefits of such an approach, it is difficult to argue that such an assignment is not asking the student to create a simulacrum of what they think their professor wants rather than asking them to generate their own thoughts on a topic they are passionate about.

It is an uncomfortable question to have to answer: How is what I am asking of the students truly beneficial and what is the "value add" that the students receive from completing it instead of asking ChatGPT to complete it?

Irrational Concerns

As I have written about elsewhere, faculty will complain about anything that changes their classroom. The massive adjustments the COVID-19 pandemic forced on the academy produced much wailing and gnashing of teeth as we were dragged from the 18th Century into the 21st. Many considered retirement rather than having to learn and adjust.

Likewise, the story of the professor who comes to class with lecture notes, discolored by age and never updated since their creation, is too grounded in reality to be ignored here. (Full disclosure: I know I have canned responses, too. For each generation of students, the questions are new -- no matter how many times I have answered them before.)

Many of us simply do not wish to change.

Practical Concerns

Learning how to use ChatGPT, and thinking through its implications, takes time and resources. Faculty Development (training, for those in other fields -- although it is a little more involved than just training) is often focused on other areas -- the research that advances our reputations, rank, and career.

Asking faculty to divert their attention to ChatGPT when they have an article to finish is a tough sell. It is potentially a counter-productive activity, depending on where you are in your career.

Why Start Up Again?

One of the things that those of us who teach writing will routinely tell students, administrators, grant-making organizations, and anyone else foolish enough to accidentally ask our thoughts on the matter, is that writing is a kind of thinking.

The process of transmitting thoughts via the written word obligates a writer to recast vague thoughts into something more concrete. And the act of doing so requires us to test those thoughts and fill in the mental gaps for the sake of the reader, who cannot follow the hidden paths our thoughts take.

I am not sure about all of you, dear readers (at least I hope there is more than one of you), but I am in need of clearer, more detailed thought about technology these days.

Educators have been complaining about how technology makes our students more impoverished learners at least since Plato told the story of how the god Thoth's invention of writing would destroy memory.

Between the arrival of Large Language Model-based Artificial Intelligence and the imminent arrival of Augmented and Virtual Reality in the form of Apple Vision Pro, the volume of concern and complaint is once more on the rise.

I also have my concerns, of course. But I am also excited for the potential these technologies offer to assist students in ways that were once impossible.

For example, ask ChatGPT to explain something to you. It will try to do so but, invariably, it will be pulling from sources that assume specialized knowledge — the same specialized knowledge that makes it difficult for students to comprehend a difficult concept.

But after this explanation is given, you can enter a prompt that begins “Explain this to a…”

Fill in that blank with some aspect of your persona. A Biology major. A football player. A theater goer. A jazz aficionado.

You can even fill in types of animals, famous figures — real or fictional (I am fond of using Kermit the Frog), or other odd entities (like Martians).

In short, ChatGPT will personalize an explanation for every difficult concept for every student.
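For those who would rather script this pattern than type it into the chat window, here is a minimal sketch using the OpenAI Python client. It is an illustration only: the model name, topic, and persona below are placeholder assumptions, not recommendations, and the same two-step approach works just as well in the ChatGPT interface itself.

```python
# A minimal sketch of the "Explain this to a..." follow-up pattern described above.
# Assumes the `openai` package is installed and OPENAI_API_KEY is set in the
# environment; the model name, topic, and persona are placeholders.
from openai import OpenAI

client = OpenAI()

topic = "entropy in thermodynamics"
persona = "a jazz aficionado"  # or "a Biology major", "a football player", "Kermit the Frog"...

# Step one: ask for a plain explanation.
messages = [{"role": "user", "content": f"Explain {topic} to me."}]
first = client.chat.completions.create(model="gpt-4o-mini", messages=messages)
messages.append({"role": "assistant", "content": first.choices[0].message.content})

# Step two: the personalizing prompt, sent with the first answer still in context
# so the model rewrites its own explanation for the persona.
messages.append({"role": "user", "content": f"Explain this to {persona}."})
second = client.chat.completions.create(model="gpt-4o-mini", messages=messages)

print(second.choices[0].message.content)
```

The point of keeping both turns in the conversation is that the follow-up asks the model to recast the explanation it just gave, rather than start from scratch.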

AI and AR/VR/Spatial Computing are easy to dismiss as gimmicks, toys, and/or primarily sources of problems that committees need to address in formal institutional policies.

I am already trying to teach my students how to use ChatGPT to their benefit. There are a lot of digressions about ethics and the dangers of misuse.

But everyone agrees that these technologies will change our future. And as an English Professor, it is my job to try and prepare my students for that future as best I can.

To do that, I think I will need this space to think out loud.

Dreaming of an iPad Mini Pro

Is this thing still on?

It has been quite a while since I posted here. My lack of posts has not been due to the end of the iPad Experiment. It has continued and become my normal.

In some ways, this normalcy could be seen as the mark of the end of part one of the experiment.

Part Two began with the start of the COVID-19 Pandemic.

Much of the world was asked to rethink and re-examine their relationship with technology. What was once considered perfectly adequate (e.g., the cameras that came with their laptops and desktops) became entirely inadequate almost overnight.

The iPad fared relatively well. It was not perfect by any stretch of the imagination but, when you were overhearing the struggles of a teacher trying to adjust their teaching to fit within the confines of a Chromebook chosen a few years before based on its price rather than its performance, it looked great in comparison.

Now, however, the iPad line exists alongside devices that were redesigned with the lessons learned from the pandemic. And some of the limitation-based strengths of the iPad have begun to feel like weaknesses.

One of the things that enabled me to navigate the non-stop meetings of the pandemic (I was serving as an interim dean at the time.) was having access to an iPad Mini. This allowed me to have Zoom running on my iPad Pro's screen while I took notes or referenced documents on the smaller screen.

The need to do this was due to a strength of the iPad -- that it is primarily a unitasking environment. It demands a kind of focus that laptops and desktops seem to actively discourage in their users.

I have been thinking about that a lot of late.

One reason I have been thinking about this is the conversations in the tech media -- especially in some of the blogs and podcasts of some of the iPad's greatest champions. Whether it is the time in the wilderness that Federico Viticci felt compelled to take following his struggles with Stage Manager or Jason Snell's realization that the new M-series Macs were his better choice for one-device travel, users have started to critically examine the iPad and ask the kind of questions of it that were asked of laptops during the pandemic.

These questions all boil down to a very fair, very simple question: Is living with the collection of pain points that come with this device a good trade off for me?

For tech journalists who increasingly must engage in audio and video production, it is easy to see why the answer might be no. That there are not the kind of advanced tools (hardware and software) that they need to do their job is an understandable dealbreaker. And if, in their frustration, they sometimes state their case in a way that conflates the very specific issues they face with a larger problem with the platform, who can blame them? The issues, after all, do clearly highlight the kind of limitations their readers and listeners should know about.

That said, there are more people in the world who take notes than there are who make podcasts. And the fact that I am writing this post (at least its initial draft) on my iPad Mini in Apple Notes using an Apple Pencil via the Scribble feature tells me that there are stories about the iPad line that are still not being told.

One of them requires us to reexamine our attachment to keyboards.

The other reason I have been thinking about the limits that the iPad has been bumping into has to do with my own list of "but why doesn't Apple just... " thoughts. These are exactly the same kind of things I was referring to above. For tech journalists, the question is why Apple can't allow the M-series iPads to do more, like recording multiple audio streams.

My need/want probably involves asking Apple to break some of the laws of physics.

I want an iPad Mini Pro.

Specifically, I want an iPad Mini that can drive an external monitor using Stage Manager, as it functions in the iPadOS 17 betas.*

I want the iPad Mini to be my primary device — one that makes it easier to switch from a keyboard mindset to one that places the Apple Pencil in my hand, makes me think more about what I am writing and, perhaps, makes me consider dictation more often.

This is especially true for those times when I know that giving myself the time to think about the words going down on the digital page is an improvement over the false efficiency of typing.

While I don't know if this could be done, I do know there would be a host of trade offs. The Mini would have to be plugged in while it drove the monitor because the battery would be insufficient for all-day usage. The weird rotational issues that crop up with Zoom when it is projected to an external monitor would have to be worked around, less than ideally, by keeping that app on the smaller screen. There is even less room to move the camera to a landscape edge (although iPadOS 17 may solve that via external camera support).

And I would need a long list of potentially expensive peripherals to really make such a set-up work.

*Given the battery size of the Mini, this would almost certainly require it to be plugged in. Yes, the battery could be made a little bigger but that path towards a solution eventually leads you to using an iPad Air or Pro, defeating the purpose of the Mini.

Apple’s Values (and Value Proposition) on Display

Since Apple’s March 25th event, I have seen and heard a lot of confusion over what Apple is attempting. Writers and pundits have tried to compare Apple TV+ to offerings from Netflix and Disney, and have worried about the absence of a back catalog of content. They have wondered at the News+ value proposition for publications like the New York Times, which are concerned about losing touch with the direct feedback from their readers. They have expressed concern over how the Apple Arcade offering will not address the needs of app developers who have figured out how to make their money via in-app purchases. And, above all, they have scratched their heads over what they see as a confused message — one that doesn’t appear to express a unified, focused plan. They conclude from this that Apple is losing its way in a changing technology and media landscape.

I cannot disagree with this assessment more strongly. There is a clear and consistent message from Apple that was on display on the 25th — one that Steve Jobs famously laid out in March 2011: “It is in Apple’s DNA that technology alone is not enough—it’s technology married with liberal arts, married with the humanities, that yields us the results that make our heart sing.”

Taking this as a core value and applying it as a lens to the Spring 2019 event, Apple’s consistency becomes clear. Apple is expressing its belief in artists and creators and placing them first, rather than in the technologically driven analytics that will assemble a work that appeals but is something less than art.

  • Apple, with its Apple TV+ offerings, is turning to artists and creators and asking them to produce the content they believe in. Netflix, in contrast, turns to its algorithms to determine what people are watching and fashion programming in response.

  • Apple, with its News+ offerings, is turning to the reporters and writers and asking them to report on what they believe is important, rather than looking at the analytics of click throughs to determine what kinds of stories they should be following and tailor their next issue accordingly.

  • Apple, with its Apple Arcade offering, is turning to the app developers and asking them to create games that are worth playing rather than looking at how many times they need to create choke points in a game to get players to pony up another $0.99, $1.99, or $9.99 (or more) to advance a level or buy a nifty costume.

  • Apple, with each of these as well as its Apple Music offering, is turning to curation teams to surface quality content rather than trusting an analytics engine to present content that has been manufactured to be a hit — either through established analytics or via clickbait headlines.

Apple is putting its fortune where it has long said its values lie: in the human using the technology rather than in the technology that can parse the human. They are valuing art and artists (or creators, if you prefer that term). They are providing an opportunity for the crazy ones who want to create something new, exciting, and different rather than the safe, predictable, and manipulative.

Indeed, it is that latter item that has turned me off during so many contemporary big-budget films. All too often, I can see the image on the screen, listen to the music, and feel the film trying to make me feel something rather than having the emotional response organically develop within me, as the final moments of La Traviata did when I was fortunate enough to see it at La Fenice in Venice. That experience has stayed with me in a way that the more manipulative moments in recent blockbusters have not — even if the algorithmic calculations were able to manufacture similar responses in the moment.

This is not just an expression of Apple’s values. It is also a part of Apple’s value proposition (although I hasten to state that I suspect Apple sees this as an effect rather than a goal). Seeking a short-term success via algorithm will generate a cash return. Apple has shown an interest in playing the long game — the kind of long game a blue-chip stock plays, consistently profitable over a generation rather than just the most recent quarter — by being willing to focus on design and quality that will last longer than the immediate press cycle. That builds long-term value that is independent of a need to analyze and reduce its users to data.

This is not to say that analytics cannot surface art. It may be what is necessary to link up an artist to an opportunity. But that good fortune is not the same as a manufactured piece of entertainment that, while commercially successful, will pass the time but not last. And Apple is not interested in creating things that do not last.


Dr. Matthew M. DeForrest is a Professor of English and the Mott University Professor at Johnson C. Smith University. The observations and opinions he expresses here are his own. You are very welcome to follow him on Twitter and can find his academic profile at Academia.edu.

And the Pursuit of Happiness

There are currently two significant, measurable side effects of my recent trip to Venice, Italy[1]: a pursuit of preparing cafe/espresso here that tastes like what I made in our AirBnB[2] and an exploration of why it was easier to be happy in Venice than it is here in North Carolina. And while Venice is a beautiful city that promotes feelings of well being — its nickname La Serenissima was produced in an era before PR firms generated such titles ad nauseam — happiness is not a place-based phenomenon. Happy people can be happy anywhere, and happiness is partially driven by choice.[3]

I was partially predisposed towards this reflection after reading Federico Viticci’s “Second Life: Rethinking Myself Through Exercise, Mindfulness, and Gratitude” in MacStories[4], which came out while I was in Venice. I was struck by the parallels between the malaise I recognized in some parts of my life and how his thoughtful approach to technology, which mixed stepping back from some parts of it[5] and embracing others — like activity monitoring[6] — was making a difference for him.

This morning, one of the articles recommended by Apple News was Adam Sternbergh’s “Here is Your Cheat Sheet to Happiness” in New York Magazine/The Cut, which detailed Yale Professor Laurie Santos’ class on Happiness and Well Being. One of the big takeaways from the article is that we choose our state of happiness and that we can make better choices.

I am no paragon of virtue in this arena. The work of both Santos and Viticci points out many things that I am clearly and demonstrably doing wrong.[7] I won’t bore you with those details here. Suffice it to say that, like many Americans, I have become addicted to the perceived prestige that being busy confers and that I need to reassess how I approach this part of my life.

What I do want to consider here, however, is that these articles have implications for how we value one another in the workplace. As the current Chair of the Faculty Handbook Committee at Johnson C. Smith University, one of my jobs is to shepherd faculty evaluation proposals through a part of the adoption process. It strikes me that one of the engines of our need to appear busy is that evaluation policies put a premium on being busy by requiring us to document our work. There is some truth to the statement that you can only assess what you can measure, but that statement taken alone leaves out the costs of such a worldview. What you assess is driven by a value judgement. We assess things we consider important so we can improve them.

One of the truisms of a university faculty, however, is that morale is in need of improvement. Dr. Jerry McGee, then President of Wingate University, once joked in a Faculty Meeting that faculty morale was always at one of two levels: Bad and the Worst it has ever been. Articles in The Chronicle of Higher Education and Inside Higher Ed on this topic appear regularly, alternating between hand-wringing over the problem and offering examples of how one campus or another has tried to tackle the issue.

What I don’t recall in any of those articles is the explicit statement that our systems for evaluating faculty might be the things that are manufacturing poor morale.

Of course, this issue is not unique to higher education. All recent discussions of the national problem of K-12 teachers leaving their classrooms in droves indicate that such systems, imposed by state legislatures, are the leading cause of this wave of departures.[8] There are indications that it is also true in other fields, although I do not follow those closely.

I suspect that one of my self-imposed jobs over the coming year or two will be to look at how our evaluation system is actively manufacturing unhappiness and trying to figure out how to change that. It is true that we have been working (painfully) to revise our system over the last few years, but that attempt has focused on productivity: maximizing individual faculty potential by allowing them to specialize in areas of interest and talent. My areas of greatest strength, for example, are not in the classroom. I am not a bad classroom teacher but my greatest strengths lie in other parts of what it means to be a professor. We have been working on systems that would allow me to focus my time and evaluation more on those areas than on others.

Our work has focused on the happiness/morale question as an effect of our systems. That puts it in a secondary role, which will result in it not being the primary thing assessed. That means faculty morale will always be a secondary issue — one that will be less likely to be addressed than how easy it is for a given member of the faculty to produce a peer reviewed article or serve on a committee.

But with better morale, faculty teach better, write better articles, and are more likely to be productive in meetings and elsewhere. That suggests the morale question should be in the causal role rather than being considered as an effect of other causes.

This requires us to rethink the way we assess and value the time being spent by faculty. I would love to tell you that I have it figured out, but these are early days in my thinking about this. I do know that simplistic responses like “Tech is bad and wastes time and produces poor results” will have to be eschewed for more nuanced responses, like the one detailed by Viticci in his article, because technology can save us time — the most valuable of commodities — and that the nuance must be applied across the board. This means that the numbers generated in our assessments must become secondary to the non-reductive analysis of those numbers.


[1] Don’t hate me because my wife worked hard to design and provide for this trip. Yoda’s advice that Hate leads to the Dark Side applies strongly to this post. If you give in to hate, you are reinforcing your own unhappiness.

[2] So far, I haven’t managed it. In Italy, I was using a gas stovetop, which easily produces the correct level of heat for a stove top Moka by matching the size of the flame to the bottom of the Moka. I suspect the electric stovetop I have here produces too much heat, leading to a different flavor — an almost burnt taste. Experimentation continues.

[3] Michael Crichton writes about this in his autobiographical work Travels.

[4] This may be behind a pay wall. I’m a MacStories member. Apologies to those who cannot access it.

[5] Controlling social media, rather than letting social media control you, is a big theme here. It reminded me that I need to invest some time with Twitter’s lists feature to set some filters to help sort through the kind of thing I am looking for at times.

[6] I have been doing some of this and have noticed over the past year that I am happier on those days when I am consistently completing my Move rings than on those when I am not.

[7] The good news is that both point to ways I can fix that and that those decisions are completely under my control.

[8] Despite the requests for respect, legislators — like many trustees and administrators — interpret these concerns and complaints exclusively in terms of pay. Yes, pay can be improved but reading the statements of teachers clearly indicates that the primary issue isn’t the pay. It is the burden of an evaluation system that does not value them.


Dr. Matthew M. DeForrest is a Professor of English and the Mott University Professor at Johnson C. Smith University. The observations and opinions he expresses here are his own. You are very welcome to follow him on Twitter and can find his academic profile at Academia.edu.

A Quick Note of Praise for Apple Maps

For quite some time, it has been fashionable to poke fun at Apple Maps.[1] In fairness, several have pointed out that Google Maps has its limitations as well, so the issue of trying to provide locations and directions is not limited to Cupertino’s offering. Nevertheless, if someone is going to ask why one would use that mapping application, it is usually going to be Apple Maps that is being asked about.

For the past few days, I have been in Venice, Italy.[2] Venice is a city notoriously difficult to navigate on foot or by gondola — even for those who have a good sense of direction. I am happy to report that Apple Maps does a very good job of providing walking directions here. The path it laid out for us to walk from our AirBnB to La Fenice[3] was quick and easy.

This is not to say that the potential for getting lost was gone. It is easy to get turned around in a Campo[4] with five or more Calle[5] leading out of it. It’s at moments like this that the arrow pointing in the direction you are facing[6] becomes really important.

This is an older feature but one that mattered a lot to me when I was trying to find the way to the traghetto[7] in a part of the city I wasn’t as familiar with so we could get my hungry daughter to lunch.[8]

I point this out because, sometimes, the killer feature of an app is one that isn’t focused on by commentators or in one-on-one demos but provides utility when you absolutely need it to. Could the databases for locations used by our apps be better? Of course. Could they do a better job of directing us to the correct side of a building? Yes. But those are things I can work around. Not being sure of which direction I am facing on a cloudy day in Venice, where there aren’t a lot of trees with directional moss, isn’t something I can work around.

This is the kind of thing that Rene Ritchie refers to when he talks about Apple’s ability to produce a minimum delightful product. In this case, this delight was all about the fundamentals. And, just like in sports, getting the fundamentals right can take you far.


[1] My favorite moment of levity at the expense of Apple Maps was a joke told to me by my goddaughter soon after Apple Maps was initially released: “Apple Maps walked into a bar. Or a church. Or a store. Or a tattoo parlor.”

[2] Don’t hate me because I am lucky enough to have a wife who planned this trip.

[3] We took in a performance of Verdi’s La Traviata. It was every bit as good an as powerful as you would expect and was a wonderful reminder of the power and virtue of art.

[4] The large and small squares of the city.

[5] The lanes/walkways/roads that wind through the city.

[6] Google Maps has a blue fan that mimics the look of a flashlight, I believe, which serves the same purpose.

[7] The traghetto is the water bus system of Venice.

[8] If you are looking for something a touch more casual, check out Taverna San Trovaso.


Dr. Matthew M. DeForrest is a Professor of English and the Mott University Professor at Johnson C. Smith University. The observations and opinions he expresses here are his own. You are very welcome to follow him on Twitter and can find his academic profile at Academia.edu.

The Problem with “Pro”

Apple and its associated hardware and software developers have a problem with Pro machines, whether they are the forthcoming Mac Pro or the current iMac Pro, MacBook Pro, and/or iPad Pro, or any of the apps advertising a Pro tier. This problem, incidentally, is not unique to Apple and its ecosystem. It is a problem that bedevils the entire tech industry.

Pro means different things to different people.

I recognize that, in the aftermath of the MacBook Pro v. iPad Pro controversies, this statement is almost cliché. But one of the issues that I am beginning to recognize is that even those who look at these problems most broadly remain trapped by the choice of the abbreviated adjective “Pro”.

Does Pro stand for professional or does Pro stand for productivity?

Grammatical terminology may elicit from you, gentle readers, eye-rolls and a desire to click away from this article as soon as possible, but there is more here than an English Professor’s occupational bias to focus on words. Most of the commentary on Pro machines has focused on the meaning of the adjective: “Who is a Pro?” I haven’t heard as much about the ambiguity of the abbreviation — although it immediately enters into the conversation. The absence of this acknowledgement, more often than not, results in people beginning to talk past one another.

It is also worth remembering that the equation Pro = Professional will always result in compromises because the machine is not the professional. The user is the professional and various users have different needs. Claiming that the MacBook Pro is a failed machine because it does not have a lot of ports, for example, requires the assumption that a professional needs a lot of ports to plug in a lot of peripherals. Those of us who don’t need to do that are going to respond negatively to the claim because accepting it requires us to deny that they are professionals. And while I don’t need a lot of peripherals[1], I deny anyone the right to claim I am not a professional.

Likewise, Pro = Productive highlights a series of compromises because what it takes for me to be productive is much different from what it takes for a computer scientist to be productive. I can be as productive on an iPad Pro as I can on a MacBook Pro. Indeed, the ability to scan documents and take quick pictures that I can incorporate into note taking apps like GoodNotes while I am doing research in an archive allows me to be more productive with an iPad Pro. While these compromises are similar to those under the Pro = Professional formulation, there are subtle differences, in terms of technological and production requirements.[2]

The most important distinction, however, is the implied hierarchy. There is an ego issue that has attached itself to the adjective Pro. Several years ago, for example, a colleague claimed that only the needs of computer scientists should be considered when selecting devices to deploy across our campus because the rest of us could get by without them. I hasten to note that, in his extended commentary, there was a good bit of forward thinking about the way we interact with computing devices — especially his observation that we could all receive and respond to email and similar communications on our phones (an observation made before the power of the smartphone was clear to all). But it is illustrative of the kind of hubris that can be attached to self-identifying as a Pro user — that our use case is more complex and power-intensive than those of users whose workflows we imagine but don’t actually know. While I recognize, for example, that computer programming requires specific, high-end hardware, it is equally true that certain desktop publishing applications require similar performance levels[3] for hardware.

It’s for this reason that I prefer to imagine that we are talking about machines designed for certain kinds of productivity rather than for professionals. Most of us only have the vaguest of ideas about what the professionals in our own work spaces require to be productive in their jobs. Shifting the discussion away from the inherently dismissive designation (I’m a pro user of tech but you are not.) to one that might let us figure out good ways forward for everyone (She needs this heavy workhorse device to be productive at her desk while he needs this lighter, mobile device since he is on the road.) would let people embrace their roles a little better without dismissing others.


[1] What I do need are a variety of the much-derided dongles. A single port — in my case, the Lightning port of my iPad Pro — is all I need for daily use. I plug it in for power at home and, when I enter a classroom or lecture hall, I plug it in to either a VGA or HDMI cable to share my screen, depending on what kind of projector or television monitor is in the room. What I really want to see is something that straddles the line between a cable and a dongle — a retractable cable that has Lightning on one side and an adapter on the other with a reel that can lock the length once I have it plugged in. If someone is going to be very clever, I would ask them to figure out a way for the non-Lightning end to serve male and female connections alike.

[2] This is the reason that, even when I get exasperated as Andy Ihnatko goes off on the current Apple keyboard design during a podcast, I still respect his position. While I am perfectly happy with the on-screen keyboard or the Smart Keyboard of my iPad Pro, he wants/needs a different kind of keyboard to be productive. It isn’t because I am any less of a professional writer (My job requires me to research and write — although it is a different kind of writing than he engages in.). It is a question of how productive we feel we can be.

And, in cases like this, how we feel about the interface matters. It is why I still carry a fountain pen along with my Apple Pencil. It feels better to write with it and it produces a more pleasing line. The comfort and pleasure keeps me working. I have no doubt Ihnatko could bang out as many words on the current MacBook Pro with some practice. But the frustration in that learning curve would hamper his productivity as much as re-learning how to touch type on a slightly different keyboard.

[3] I wish to stress levels here. Both of these applications require high end machines but the specifics of those machines’ configurations are likely to be different. For those scratching their heads over this distinction, I would refer them to the distinction between optimizing for single core v. multi-core but I am not sure I understand that well enough to suggest a good place to read about it. Suffice it to say that different power-intensive applications lend themselves to different computing solutions.

Dr. Matthew M. DeForrest is a Professor of English and the Mott University Professor at Johnson C. Smith University. The observations and opinions he expresses here are his own. You are very welcome to follow him on Twitter and can find his academic profile at Academia.edu.

The Problem in the Paradigm of Social and Chat Clients

During the first of the JCSU New Faculty Development Summer Institutes, funded by a generous grant from the Andrew W. Mellon Foundation, one of the participants made a request in a conversation about what we wanted to see in the future. They wanted the digital equivalent of a coffee house — a place where they could meet with colleagues, have informal conversations with students, and remain connected. At first blush, there appear to be a few options available for such a thing: Slack and various Google products can provide institutional conversation spaces. Facebook and Twitter, as social networks, provide social spaces for interaction. Messages and SMS allow for direct, instant communication.

None of these, however, fits the bill.

I would argue the primary reason that all of these services are failing in this sphere is not due to their feature sets, which are all robust, or their ubiquity. Instead, I would focus on two things that are preventing them from achieving this desirable goal.

The first is a legacy assumption. With the exception of Messages, SMS, and similar direct messaging services, these apps have an interface that assumes you are sitting at a desk. Yes, they have been adapted to smaller screens, but they are not mobile first designs. The paradigm is one of working on a task and receiving a stream of contact in a separate window. This framework is different from the metaphorical space around the water cooler or coffee machine in the break room. As such, it does not fulfill the need for the coffee shop space as described above.

Lurking behind this paradigm, however, is a more powerful one that will prevent these apps from ever serving the function of a coffee house — a paradigm most clearly seen in the Mute feature. Now, I am not saying that the Mute feature is a bad idea. Sometimes, you need to close your office door to signal to your colleagues that you need to get something done and that now is a bad time for them to stick their head in the door and ask a question or chat about last night’s game. In addition, social networks need the ability to mute the more toxic voices of the internet. But the fact that those toxic voices are more prevalent online than they are offline is a signal that there is something critically different about the virtual spaces these apps create.

Muting signals that these apps are built based on a consumption paradigm — not a conversational one. It’s the kind of thing you do to a television program rather than an interlocutor.  

All of these apps are imagined in terms of consumption — not conversation. So long as that remains the case, they will not break through into a space where true conversation can take place, rather than two or more people consuming communication from each other (much as you are consuming this blog post but are able to respond to it). They will not break through the hard ceiling of their utility and operate in the same conversational manner that messaging apps do.

In pointing this out, I want to stress that this is something users should be as aware of as developers. If we are using these virtual spaces in a manner they are not designed for, we should not be surprised at their limitations. Developers, meanwhile, should note that their apps and services may not be offering what their users are truly looking for.


Dr. Matthew M. DeForrest is a Professor of English and the Mott University Professor at Johnson C. Smith University. The observations and opinions he expresses here are his own. You are very welcome to follow him on Twitter and can find his academic profile at Academia.edu.

On the Reports of Apple’s Doom in the Educational Arena

There are any number of think-pieces on the problems facing Apple in Education. One of my favorites, as I mentioned in an earlier post, was written by Bradley Chambers. And as I said in that post, I agree with everything he has said about what Apple can do to make life easier on the overworked cadre of educational IT support staff out there.

That said, I have finally put my finger on what has been bothering me about the growing groupthink that is setting in after Apple’s education event. First, it is worth remembering that we have been here before. There is a parallel between what we are hearing here and what was said back in the early 2010s about the iPhone in enterprise:

Back in those days, IT kept tight control over the enterprise, issuing equipment like BlackBerries and ThinkPads (and you could have any color you wanted — as long as it was black). Jobs, who passed away in 2011, didn’t live long enough to see the “Bring Your Own Device” (BYOD) and ‘Consumerization of IT,’ two trends that were just hovering on the corporate horizon at the time of his death.[1]

While there are important differences between the corporate market and the education market, I think it is worth remembering that Apple’s shortcomings in device management have been invoked by those in IT to foretell the ultimate failure of its initiatives before. They were proven wrong because customers (In this case, the people in the enterprise sphere they supported.) demanded that the iPhone be let in and because that market grew to be so significant that Microsoft believed supporting the iPhone would be to its benefit.

Despite the differences, Apple appears to be using a similar playbook here. They are not pitching their product to IT. They are pitching it to the teachers and parents who will request and then demand that iPads are considered for their schools. And as Fraser Speirs pointed out in a recent episode of the Canvas podcast[2], the wealthier, developed nations of the world can afford to deploy iPads (and/or Chromebooks) for all of their students.

Second, the focus on the current state of identity “ownership” by companies like Facebook and Google is, perhaps, less of a threat to Apple than it is an opportunity. My bet is that this is a space that is ripe for the kind of disruption Apple specializes in.

The current model for online identity focuses on a company knowing everything about you and using the information that is surrendered by the user for some purpose. In the case of Google, it is to create a user profile. In the case of Microsoft, it is to keep major companies attached to their services.

Apple is not interested in that game. They are interested in maintaining user privacy — a stance that has real value when providing services for children. So, what they would want and need to do is develop a system that creates some kind of anonymized token that confirms the user should be allowed access to a secured system.

They have this, of course, in Apple Pay.

What Apple now needs to do is figure out some way to have a secured token function within a shared device environment. That is, I suspect, not trivial if they wish to keep TouchID and FaceID exclusively on device. A potential solution would be an education-model Apple Watch (or the equivalent of the iPod Touch in relation to the iPhone) that could match a student identity.

Again, there are a host of technical issues that Apple would have to resolve for a system like that to work. It would, however, be a much more Apple-like approach to securing identity than mimicking what Microsoft and Google do.


[1] This stroll down memory lane is from Ron Miller’s 20 January 2018 TechCrunch article “Apple’s Enterprise Evolution”.

[2] It is worth noting that Chambers and Speirs’ podcast series Out of School may have come to an end but it is still one of the best places to go to get a grip on the details of education deployments.


Dr. Matthew M. DeForrest is a Professor of English and the Mott University Professor at Johnson C. Smith University. The observations and opinions he expresses here are his own. You are very welcome to follow him on Twitter and can find his academic profile at Academia.edu.

Photo Libraries in the Abstract

I took a little time over the past few weeks to get my Photos library in order. This was a long term project for a few reasons. First, I did not have a solid day to devote to it, so I engaged in the work here and there. Nor was there any rush, so I could attend to it a little at a time during lunch or in the evenings, as time permitted, without the pressure of a deadline.

Second, I needed to wrap my head around the idiosyncrasies of Apple’s Photos app — as one must do with any program. In most blog posts that address photo management, this would be the paragraph where I would discuss the app’s shortcomings. But, as Rene Ritchie reminds us, every computer and every app require a series of compromises between promise, practice, and the possible. So, while I would like some more granular control over the facial recognition scanning (I would especially like the option to identify a person in a photo rather than just say that it is not a particular person when the scan misidentifies someone during the confirm new photos process), I accept that as one of those compromises. Yes, I recognize that, between Google and Facebook, there are plenty of images of me and my family out there. That doesn’t mean I want to add to it. Nor does it mean I think others are foolish for taking advantage of Google’s storage and computing power — so long as they understand the exchange they are making.

Second and a half, I spent a good bit of time in some obscure parts of the application and its library because I needed to convert a whole bunch of older videos. The Photos app does not play well (or at all) with these older AVI and MOV files and I wanted them in my library, rather than sending them off to VLC to play after I made it to a desktop or laptop computer. After some experimentation, I decided to convert them using Handbrake (using my Mac Mini) and then import the converted files and manually edit the date and location metadata.
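For anyone facing a similar backlog of legacy video, here is a minimal sketch of the batch-conversion step. It assumes HandBrakeCLI (the command-line companion to the Handbrake app) is installed and on your PATH; the folder names and preset are placeholder assumptions, and the date and location metadata still get edited by hand in Photos after import.

```python
# A minimal sketch: convert every .avi/.mov file in a folder with HandBrakeCLI
# so the results can be imported into Photos. Assumes HandBrakeCLI is installed
# and on the PATH; the preset name and folder paths are placeholders.
import subprocess
from pathlib import Path

source = Path.home() / "OldVideos"        # folder of legacy AVI/MOV files
output = Path.home() / "ConvertedVideos"  # Photos-friendly .mp4 files land here
output.mkdir(exist_ok=True)

for clip in sorted(source.iterdir()):
    if clip.suffix.lower() not in {".avi", ".mov"}:
        continue
    target = output / (clip.stem + ".mp4")
    subprocess.run(
        ["HandBrakeCLI", "-i", str(clip), "-o", str(target),
         "--preset", "Fast 1080p30"],
        check=True,
    )
    print(f"Converted {clip.name} -> {target.name}")
```

The conversion is the only part worth automating; the interesting work, deciding which clips belong in the library at all, still has to happen by hand.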

And third, I spent some time considering the philosophy underpinning my personal photo library.

For an English Professor, thinking about the philosophy of a library is something of an occupational hazard. A portion of my research involves considering primary texts and unpublished materials — ephemera, marginalia, letters, and the notes and notebooks of W. B. Yeats and his wife, George.[1] And while I doubt that the scholars of the future will be sifting through my digital life in the hope of developing a deeper understanding of my thought, there is a decent chance that descendants might be looking through these pictures to learn about their family’s past when I am no longer there to explain what they are pictures of, who is in the pictures, and why they are there.

What came as a conceptual surprise were the pictures that I remembered but were not there — not because of a corrupted file but because I remembered the image from social media rather than from my library. I started to download some of these from friends’ Facebook pages and bumped up against two problems. First, the resolution was less than impressive. What looked perfectly fine on a phone did not scale well when appearing on my television screen.[2]

The second was the philosophical question. The pictures may be of me and of events that took place at my home, but were they mine? I don’t mean in the sense of copyright. My friends shared these images publicly. I do mean that placing them in my digital library carries implications of a sort. A picture of friends at my house implies that I took the photo in a way that placing a printed photo in a physical album does not, because the digital file serves as both print and negative.

These are the kinds of questions asked by those who try to figure out the significance of a piece of paper in a folder in a Special Collections library: What does this letter tell us? What is this note written on the back? How does it situate the document in the context of my research question?

Many of you will likely find it a silly question. After all, pictures can be seen exclusively as personal mementos — images to invoke memories we might otherwise leave buried. And it is difficult to argue on behalf of some genealogically-minded descendant four generations in the future. But what we choose to put into our own collection matters, and the act of collecting is driven, in part, by why we did or did not put things there.

In addition, my philosophizing has applicability beyond the data on my hard drive and floating in redundant cloud storage. My decisions about what is appropriate for my own library are the same kind of decisions I should be making about the files on social media. Those photos — some, but not all, posted by me — are part of someone else’s public library. Privacy controls let me control some of this, but not all. In essence, photos of me taken by others are as private as the most permissive settings chosen by my friends. That shifts the boundaries of where public and private memory begin and end.

It also means that Apple, Facebook/Instagram/WhatsApp, Google, Twitter, and WeChat (to name only five) have become the librarians of my life and they are handing out free library cards to those who wish to read the rough draft of the story of my life.

And it is a surprisingly detailed story. The pictures I was saving were from about a decade ago. The question “Who do those pictures belong to?” can only be answered after you decide why you are asking. The Terms of Service we agree to before we can post anything answer the legal questions the companies want to ask. They don’t answer the secondary questions, like whether or not you retain some kind of right to your images should someone try to resell them. And courts of law are singularly uninterested in my philosophical considerations, as the Terms of Service speak (appropriately) to needs rather than concepts. If we come to grips with this philosophy, however, then we will have a better sense of the story we will tell and the reason we want to tell it.


[1] If you want to know the specifics, click on the academia.edu link below and take a look at my scholarship.

[2] I have a Mac Mini that uses my television as a monitor. My initial use case for the Mini was a combination workhorse computer, for those times when my iPad was insufficient or inappropriate for a task, and as a media player. As the iPad and Apple TV have increased in their capability, it has increasingly become primarily a media storage device — the primary repository for documents, pictures, and the like — and backup hub.


Dr. Matthew M. DeForrest is a Professor of English and the Mott University Professor at Johnson C. Smith University. The observations and opinions he expresses here are his own. You are very welcome to follow him on Twitter and can find his academic profile at Academia.edu.

Still Looking through a Glass Darkly: Thoughts on Apple’s Education 2018 Event

Let me begin with an unequivocal statement: Anyone wishing to get a sense of the challenges before Apple in the education arena need look no further than Bradley Chambers’ well-reasoned and well-written response on 9to5Mac to the 2018 Apple Education Event. In his article, he clearly lays out the challenges facing Apple, as a hardware and service provider, and teachers as they try to implement solutions offered by Apple and others.[1]

And while I would not change a word, I would add one word to the title (which Chambers may or may not have written). I would argue that “Making the Grade: Why Apple’s Education Strategy is not Based on Reality” should read “Making the Grade: Why Apple’s Education Strategy is not Based on Today’s Reality”.[2]

Let me explain why.

As I wrote earlier, Apple included an interesting subtext in its event. It challenged the hegemony of the keyboard as the primary computing input device. In fact, there are no keyboards used in the entirety of the “Homework” video they produced to showcase the iPad in an educational setting — although the Pencil, I would note, appears on several occasions.

I don’t think this is Apple trying to hard sell the Pencil for the purpose of profit. If that were the case, we would not have seen the less expensive Logitech Crayon. Nor do I think it is an attempt to employ their famed Reality Distortion Field to deny the need for keyboards. Otherwise, we wouldn’t have seen the Logitech Rugged Combo 2 education-only keyboard.

What I do think is that Apple is trying to get the education market to rethink education’s relationship to technology.

Education, almost always, comes to technology as a tool to solve a known problem: How do we assess more efficiently? How do we maintain records? How do we process students in our systems? How do we crunch data? How do we produce a standard and secure testing environment? How do we make submitting assignments and grading assignments more efficient? How can we afford to deploy enough devices to make a difference?

That we ask these questions is no surprise. These are important questions — critically important questions. If we don’t get answers to them, the educational enterprise begins to unravel. And because of that, it is more than understandable that they form the backbone of Bradley Chambers’ article and the majority of the commentary behind most of the responses I have read or listened to. They are the questions that made Leo Laporte keep coming back to his wish that Apple had somehow done more in Chicago when the event was being discussed on MacBreak Weekly.

What they are not, however, is the list of questions Apple was positioning itself to answer. As Rene Ritchie pointed out in his response to the event, Apple is focusing on creativity — not tech specs. And from what I have seen from a number of Learning Management Systems and other education technological products, it is an area that is very much underserved and undersupported by ed-tech providers.

Apple is trying to answer the questions: How do you get students to be engaged with the material they are learning? How do I get them to think critically? How do I get them to be creative and see the world in a new way?

Alex Lindsay made a related point in the above-mentioned MacBreak Weekly episode when he said that he was interested in his children (and, by extension, all students) learning as efficiently as possible in school. To do that, students have to be engaged and challenged to do something more than the obvious tasks provided by lowest-common-denominator solutions. Their future will also need them to do more than answer fill-in-the-blank and multiple choice questions on a test. They need to produce the kinds of projects that Apple put on display in Chicago.

Apple is offering the tools to do that.

I don’t think this is an idealized or theoretical response. If Apple wasn’t aware that these things were a challenge, they would not have made the teacher in the “Homework” video a harried individual trying to (barely) keep the attention of a room filled with too many students. Apple has hired too many teachers and gone into too many schools to not know what teachers are facing.

I would also point out that there is something to Apple’s answer. My daughter was in the room with me when I was watching the keynote. Her immediate response was that she wanted her homework to be like what she saw rather than what she did.[3]

Her school, I would point out here, uses Chromebooks. That she would jump that quickly at the chance to change should give anyone considering a Chromebook solution pause and make them look carefully at why they are making the choice they are.[4]

Nevertheless, Apple’s challenge is that it still has to address the questions Bradley Chambers and others have raised or their answers will only be partial solutions for educators.

Because Apple needs to answer these questions, I am very interested in the details of the Schoolwork app once it is released — even if it appears to be targeted at K-12 and not higher education.

I do think that we in education need to listen carefully to Apple’s answer, though. Our questions may be mission critical but they may not be the most important questions to answer. After all, if we are not first and foremost trying to answer “How do we get our students engaged?”, we have ceased to be engaged in education. And while I have a great deal of sympathy for my friends and colleagues in IT (and am grateful for their ongoing support at JCSU), they are there to support my students’ and my work — not the other way around. And every time we take a shortcut to make IT’s job easier,[5] as we have done too often when trying to answer how to assess student learning outcomes, we are decreasing our students’ chances for success.

For those placing long-term bets, however, I would point out one thing: Apple’s positioning itself as the source for solutions for generating curiosity and creativity is a better solution for education than Google’s positioning itself as the solution for how to create a new batch of emails for the next year’s worth of students.


[1] The most important section of the article, incidentally, is this section:

One of the things I’ve become concerned about is the number of items we tend to keep adding to a teacher’s plate. They have to manage a classroom of 15–30 kids, understand all of the material they teach, learn all of the systems their school uses, handle discipline issues, grade papers, and help students learn.

When do we start to take things off of a teacher’s plates? When do we give them more hours in the day? Whatever Apple envisioned in 2012, it’s clear that did not play out.

[2] I wouldn’t run the word today in bold and italics, of course. I am using them here so you can easily find the word.

[3] Or thought she did. When I asked her what stopped her from doing her homework in that manner, she thought and said she didn’t know how she would get it to her teacher. I told her that I could help her with that.

[4] It still might be the best choice, of course. These decisions are a series of trade-offs. But I would point out that if she begins to use an iPad at home to do things her classmates cannot with their Chromebooks and gains a superior education because of her engagement with the material as a result, the argument for deploying Chromebooks is significantly weakened.

[5] Making IT’s job easier, I would stress, is significantly different from asking if what is being proposed is technically and practically possible.


Dr. Matthew M. DeForrest is a Professor of English and the Mott University Professor at Johnson C. Smith University. The observations and opinions he expresses here are his own. You are very welcome to follow him on Twitter and can find his academic profile at Academia.edu.

Tip: Presenters Rejoice — A New Pages Feature for Faculty and Students

Back on October 18, 2017, I offered a tip on presenting with the iPad — creating a reading version of a speech/presentation in Pages that was formatted with a large enough font size to be easily read at a podium. I didn’t think it was rocket science then and don’t now.

With the latest release of Pages, however, the need to create a second copy is gone. Apple has programmed in Presenter Mode, which automatically resizes the font as I had described.


In addition, it switches (by default) to a dark mode, providing a high-contrast screen and reducing light for dimly lit rooms. It also has an autoscroll feature (with a modifiable scroll speed). The autoscroll starts and stops with a tap of the screen.


This is a really nice feature — one that will quietly make presenting much easier for iPad users (Thus far, I have not seen a parallel option appear in the macOS version of Pages.). It also points to Apple’s method, as posited by Steve Jobs in an often-quoted part of Walter Isaacson’s biography of him: “Some people say, ‘Give the customers what they want.’ But that's not my approach. Our job is to figure out what they're going to want before they do. I think Henry Ford once said, ‘If I'd asked customers what they wanted, they would have told me, “A faster horse!”' People don't know what they want until you show it to them. That's why I never rely on market research. Our task is to read things that are not yet on the page.”[1]


[1] This idea is going to be central to my upcoming reaction to Apple’s Education event. If you want some homework in advance of that post, you should take a look at Bradley Chambers’ well-reasoned and well-written response on 9to5Mac.


Dr. Matthew M. DeForrest is a Professor of English and the Mott University Professor at Johnson C. Smith University. The observations and opinions he expresses here are his own. You are very welcome to follow him on Twitter and can find his academic profile at Academia.edu.

Quick Thoughts: Today’s Apple Education Event

As I write, I am in the twilight zone between having read iMore’s live blog of today’s Apple Education event in Chicago and getting to watch it via the Apple Events app on the Apple TV. That is a strange place to write from but one thing is clear enough to comment on:

Apple is challenging the keyboard’s hegemony.

To listen to most people in tech (and to see me using the Smart Keyboard now), the keyboard is the best and only way to interact with computing devices. With Apple pushing the Apple Pencil across more of the iPad line and, critically, into the updates of the iWork apps, this paradigm is being challenged on two fronts. Annotating works with a keyboard has always been a less than ideal experience. The Apple Pencil (and other styli) is a superior approach. With Siri, voice is another front.

I don’t think Apple is out to deprecate the keyboard entirely but I do think that these two other options signal a real differentiation between Apple and others — Google especially. It feels much more like Apple is offering options to users rather than choosing one for us. In education, this is especially important. I don’t want a keyboard when marking up a paper. I want a Pencil and the best tools to annotate a document and direct a student. When I am walking between meetings, I want to ask Siri to remind me to do something — not stop and type it into my phone.

The real question is whether the accreditation industry will be ready to quickly accept that the keyboard-only era needs to shift to accommodate the best method for the moment rather than what is simplest for them.


Dr. Matthew M. DeForrest is a Professor of English and the Mott University Professor at Johnson C. Smith University. The observations and opinions he expresses here are his own. You are very welcome to follow him on Twitter and can find his academic profile at Academia.edu.

Peering into the Black Mirror: Tomorrow’s Apple Education Event

Recently, Rene Ritchie asked his Twitter followers what they wanted to see happen at Apple’s upcoming Education Event in Chicago. In response, I quipped that I wanted iTunes U to be Touch ID enabled. (Fraser Speirs, who has rightly lamented iTunes U’s molasses-slow development, warned me away from asking for too fancy an update.)

I mention the exchange not so much to name drop but to calibrate the importance of the event for Apple. And if I can see it, I suspect Apple can as well.

Much of the commentary in the tech media has focused on the possibility of a No. 2 Apple Pencil and a semi-Pro iPad priced for the education market as well as the need for Apple to produce management tools that would make it competitive with Google’s offerings.

I want to offer another possibility. If I were to say that it has been on my radar screen for about a year, it would imply that I had a clearer view of it than I do. I’d go with the metaphorical crystal ball but the iPad’s black glass slate seems to invoke images of Dr. John Dee’s Spirit Mirror, so I will go with that instead.

It was actually Fraser Speirs who, during a break in the Mellon Summer Institute on Technology and New Media, pointed out the increasing capabilities available to those who wanted to create their own Swift Playground. As he showed me what was possible with some of the mapping features, I couldn’t help but notice how similar it felt to iBooks Author — Apple’s underutilized eBook authoring tool.

Perhaps it won’t be tomorrow, but I can’t help but think that Swift Playground development and iBooks Author are on a path to merge — perhaps bringing iTunes U and Apple Classroom along with them — into a new, more modern and more powerful platform. Such a move would possibly explain why Apple appears to be moving more slowly in this sector than they should.

Apple’s successes, I would argue, are based in looking carefully at the first causes of problems and developing well-grounded responses to them that leapfrog entire industries and paradigms rather than doing a quick patch that makes them appear up to date in the current news cycle. My bet is on them doing something along those lines — whether it is tomorrow or next year — in education.


Dr. Matthew M. DeForrest is a Professor of English and the Mott University Professor at Johnson C. Smith University. The observations and opinions he expresses here are his own. You are very welcome to follow him on Twitter and can find his academic profile at Academia.edu.

Get Comfortable: An Older Long Form Piece on Next Generation Texts

For those of you who missed it, Apple has purchased Texture. As Alex Lindsay suggested on MacBreak Weekly, and as I have long suspected, Apple is positioning iBooks (reportedly to be renamed Books) to create the next generation of texts — texts sufficiently different from what we have now that we don’t have a name for them.

What follows is a longer piece that I wrote (and presented) in 2013. While some of the examples are no longer current (or no longer available — an issue that highlights a problem inherent in digital media), the overarching argument is, I think, still current and still points towards where we will eventually go.

I resisted the urge to update and significantly edit the text, other than adding some links.  

 

Ghosts in the Machines: The Haunting of Next Generation Texts

There are spirits, if not ghosts, in these new machines. That is what has made e-books so troubling to those of us in the literati and chattering classes. They foreshadow unavoidable change.

While early adopters and technophiles have debated utility, screen resolution, and processing power, even everyday users have found themselves confronted by an issue that used to only bother a subset of Humanities scholars: What is the nature of a book? As scholars, we don't usually use the term book, of course – partially as a way of avoiding the problems raised by such a question. "The text" usefully covers a wider range of material: short stories, novellas, novels, poems, collections of poems, plays, films, essays, epistles, audiobooks, and songs. They are all texts. We enter into conversations with them. We ask our students to engage with them. Among ourselves, we agree they are elusive but, in order to get on with the business at hand, we tend to set aside the complexities unless we are trying to be clever. They are the unknown but hinted at things we, Jacob-like, wrestle with.

Such grappling has usually taken place well out of the public eye and average readers, unless they have a troublesome relative who inconveniently holds forth on such topics over Thanksgiving dinner, are quite content to get on with it and read their books.

E-books, however, are beginning to make manifest the debate. The general reading public knows what a book is and what an e-book is and recognizes that they are subtly different. If they weren't, bibliophiles would not protest that they like the feel and smell of the book as they turn its pages when explaining why they don't want a Kindle.

But there is more to this visceral reaction than just the love of cardboard, pulp, ink, and glue.

Behind the text is the thing everyone is truly after: the story. It's the Muse-spun Platonic idea grasped at by the author and glimpsed in the imagination of the reader. The text invokes it. With care, it is woven together and bound by specific words into the thing we read that transmits the story to our mind, as Stephen King described in On Writing, via a kind of telepathy that ignores distance as well as time. (103-7)

Texts, then, transmit stories and the act of reading allows the mind of the readers to take them into their imaginations and there be re-visioned.

That such a process exists is something we all sense. The BBC series Sherlock receives high praise because it invokes and evokes what is essential in the original stories and recasts them in a new form and time. Sometimes the parallels are exact – the use of poison pills to murder in "A Study in Scarlet" and "A Study in Pink" – but they are played with in a manner that leaves the Sherlockian viewer of the series guessing – as in the case of the reversal of Rachel and rache as the correct interpretation of a fingernail-scratched message. Something, we sense, is importantly the same in Sherlock in a way it is not in other versions of Conan Doyle's detective – even those that are "correct" in period and dress.

At its core, this difference is the thing the general public wrestles with when they encounter e-books and will increasingly wrestle with as the thing, as yet unnamed, that will replace the book comes into being. These works make manifest old problems that have haunted books and the scholarship about them – and, perhaps, will begin to solve them. Obviously, the play's the thing in Shakespeare. What, however, is the play? The text on the page? The text when read aloud? The text that is performed? The performance itself? It is the problem Benjamin Bagby speaks of when discussing the difference between performing and reading Beowulf aloud, which feels "unnatural" to him:

 [Beowulf] has become for me an aural experience.... All of those things [The techniques of performance, including music and timing.] have nothing to do with printed word. And actually, when I actually go and read it from the printed page, I am deprived of all of my tools.... That whole feeling of being trapped in the book comes back to me. And What I have found over the years being chained to the printed word. That, for me, is the crux of the matter. ("Round Table Discussion")

Bagby's role is that of a contemporary scop – a shaper of words. Much like a blacksmith is sensitive to the choices he is making about the physical properties of metal as he hammers it into a set shape, Bagby is sensitive to the limitations the printed word places on a story. In some near future, however, next generation texts will allow performance to synchronize with the printed word. The performance – or more than one performance – will be available, perhaps filmed in the round as on the Condition One app, while the text helpfully scrolls along. Commentary, definitions, or analysis to aid a reader will be a tap away.

Such texts have already begun to appear: Al Gore's Our Choice; T. S. Eliot's The Waste Land app for the iPad; Bram Stoker's Dracula HD; and The Beatles' The Yellow Submarine e-book for iBooks, to name but a few. Indeed, it is important to note that these titles include both the chronologically new (Gore) and the less new (Stoker). The possibilities of next generation texts will more fully realize the ambitions of long-dead authors.

Take, for example, Stoker's Dracula. As traditionally published, it is standard text on a standard page. Yet as anyone who has read it closely sees, that is not what is invoked by Stoker. As is made clear in the dedication, Dracula is to be imagined as a scrapbook – a confederation of texts that build a single, meta-story out of their individual narratives:

How these papers have been placed in sequence will be made manifest in the reading of them. ... [A]ll the records chosen are exactly contemporary, given from the standpoints and within the range of knowledge of those who made them. (xxiv)

While the text that follows is normal typeset material, he notes that each of his characters produces their textual contributions differently. For example, Harker keeps a stenographic journal in shorthand (1), as does Mina Harker, née Murray – who also types out much of the final "text." (72) Dr. Seward records his medical journal on a phonograph. (80) Additional material includes handwritten letters (81), telegrams (82), and newspaper cuttings. (101)

While the citations here may seem overcrowded, the proximity of the referenced page numbers serves to demonstrate how rapidly Stoker has the material imaginatively gathered shift within his text. While Stoker required the imagination of his reader to change the forms, the Dracula HD iPad app re-visioning of the novel makes these shifts manifest.

A screen shot of the now unavailable Dracula HD app.


It is equally true that James Joyce's Ulysses, with its elusive and allusive multimedia structure, evokes similar shifts in presentation – ones that played in Joyce's imagination but are potentially kept from the reader by the limitations of the page. His successor, Samuel Beckett, likewise plays with the confluences and dissonances of multimedia presentation in works like Krapp's Last Tape – a written play performed as a combination of live action and prerecording. Contemporary playwright Michael Harding adds video to recorded audio in his play The Misogynist. And all of these are producing their work long after William Blake published a series of multimedia tours de force that were so far ahead of their time that it took generations for them to receive wide-scale recognition. Even Gerald of Wales wanted his History and Topography of Ireland to be illustrated in order to help clarify his points.

While it is impossible to know if Gerald of Wales, Blake, Stoker, Joyce, Beckett, or Harding would have crafted their works differently if our technology were then available, it is clear that contemporary content creators have begun to do so. The Fantastic Flying Books of Morris Lessmore, a children's text originally crafted for the iPad, is most accurately seen as a next generation confederated text. Unlike Dracula, which presents itself exclusively through the printed word, The Fantastic Flying Books of Morris Lessmore is an interactive e-book, a short film, and a hardcover book that interacts with an app – each a different approach to William Joyce's imaginative children's story of a man and his love for books and the written word – rather than a single, discrete text.

William Joyce's creation is not the first next generation confederated text, of course. There have been others. The entire story of The Blair Witch Project, for example, was only partially revealed as a film. While the "missing students" marketing campaign is the best known segment of the larger confederated text, the film's website offered information that changed the meaning of what reader-viewers perceived. When the filmmaker Heather Donahue records an apology to her and her fellow filmmakers' parents, saying "It's all my fault," viewers have no way of knowing that she is a practicing neo-pagan who has been sending psychic/magical energy to the Blair Witch for years in the hopes of contacting her. For Donahue, then, her guilt is based not only in insisting that they make the film and go into the woods but possibly in literally (and ironically) empowering the evil witch she mistakenly believed to be a misunderstood, proto-feminist Wiccan. (“Journal” 4-6)

This prefiguration of a unified confederated text across multiple forms of media is identical to the prefiguration of film techniques in nineteenth-century literature noted by Murray in Hamlet on the Holodeck:

We can see the same continuities in the tradition that runs from nineteenth-century novels to contemporary movies. Decades before the invention of the motion picture camera, the prose fiction of the nineteenth century began to experiment with filmic techniques. We can catch glimpses of the coming cinema in Emily Brontë's complex use of flashback, in Dickens' crosscuts between intersecting stories, and in Tolstoy's battlefield panoramas that dissolve into close-up vignettes of a single soldier. Though still bound to the printed page, storytellers were already striving towards juxtapositions that were easier to manage with images than with words. (29)

Of course, these techniques can also be found in The Odyssey and Beowulf. Nevertheless, Murray's point is unassailably correct. The imagination of the creator and, by extension, the reader or viewer, is primary and will always outstrip the technology available. The revolution, then, is not that confederated texts and the thing that will replace the book are coming but that they are becoming mainstream because they can, as a practical matter, appear on a single, portable device (an iPad) and in a unified delivery mechanism (an app or an e-book) instead of having to be viewed across multiple, non-portable devices (a movie screen or VHS played on a television and a computer screen and a book).

The collapsing of multimedia, confederated texts into a single reader experience is a revolution that will be as transformative as the one kicked off by Gutenberg's Press. That revolution was not just about an increased availability of texts. What was more important than availability, although it is less often spoken of – assuming you discount the voices of Church historians who speak of the mass availability of the Bible and how it changed books from relics found chained in a church to something anyone could read and consider – is the change in people's relationship to the text. Their increasing presence changed them from valued symbols of status to democratizing commodities – things that could be purchased by the yard, if necessary, and that anyone could use to change and elevate themselves and the world around them.

This changing relationship, I suspect, is what lies behind people's anxieties about the loss of the experience of the book. The book – especially an old, rare, valued book like The Book of Kells – is certain, as the Latin Vulgate was certain and some now say the King James Version is certain. Even five hundred years after Gutenberg inadvertently brought forth his revolution, Christians – Fundamentalist or not – get very uncomfortable when you talk about uncertainty of meaning in the Bible because the text, due to issues of translation, context, and time, cannot be fixed.

Despite their appearances, all books, or at least that which is bound between their covers, are uncertain. The words can and do change from edition to edition. Sometimes, these changes are due to the equivalent of scribal error, as famously happened with The Vinegar Bible. Sometimes, the changes occur due to authorial intent, as happened with the riddle game played by Bilbo Baggins and Gollum in The Hobbit after the nature of the One Ring changed as Tolkien began to write The Lord of the Rings, explaining away the change in the back story provided in the trilogy's Prologue. (22) These changes, however, happen as you move from one edition to the next. Bits and bytes can change, be changed, or disappear even after the text is fixed by "publication" – as happened, in an extreme and ironic case, to one Kindle edition of 1984. (Stone)

That former certainty of a printed, fixed object was comforting and comfortable. You can go to a particular page in a particular book and see what's there. The e-book is more fluid – pages vanish with scalable text, returning us to the days of scrolls – prone to update and possessing the possibility of existing within a hyperspace full of links. It offends our sense of the text as being a three-dimensional object – something it has, in fact, never been and never will be. Whether we foreground them or not, a web of hyperlinks exists for every text.

Texts themselves are, at minimum, four-dimensional objects. It's something scholars tacitly admit when they write about the events of a story in a perpetual present tense. They exist simultaneously within time – the part of the continuum where we interact with them – and outside of it – where the stories await readers to read, listen to, and think about them.

That fluidity will go further – stretching the idea of the future text beyond the limits we currently imagine to be imposed upon it. The Monty Python: The Holy Book of Days app, which records the filming of Monty Python and the Holy Grail, will interact with a network-connected Blu-ray disc – jumping to scenes selected on the iPad. In such a case, which is the primary text: The film on DVD or the app that records its making and that is controlling the scene being watched? Indeed, the technical possibilities of e-books and confederated texts make the complex interplay between texts explicit rather than implicit. Tap a word, and its dictionary definition pops up. References to other texts can be tapped and the associated or alluded text is revealed, as is currently done to handle cross references in Olive Tree Software's Bibles by opening a new pane. Shifting between texts, which would benefit from a clear visual break, may eventually be marked by animation ("See: Tap here and the book rotates down – just the way it does when the map app shifts to 3D view – and the book that it alludes to appears above it! The other texts alluded to appear on the shelf behind them. Just tap the one you want to examine!"). While such a vision of the future may sound like so much eye candy, consider the benefits to scholarship and teaching to have the conversations between texts become more accessible.

Because they could be made photorealistic (Facsimile editions made for the masses, much as pulp paperbacks offered classics – alongside the penny dreadfuls – to everyone.), critical and variorum editions could bring a greater sense of the texts being compared than our current method of notation. Indeed, as The Waste Land app shows, such scholarly apparatus can include the author's manuscripts as well as the canonical text. These can be done for significantly less than print editions. The published manuscript facsimile copy of The Waste Land is listed at $20 (almost $400 for hardcover – a bargain compared to the facsimile Book of Kells). The app, however, is $14 and includes commentary and audio and visual material – readings by Eliot and others. Likewise, the Biblion series, by the New York Public Library, is even more ambitious – especially the one focusing on Frankenstein (although a clearly identifiable copy of the novel itself is conspicuous in its absence) – and is a model for what a critical e-edition of the future might look like.

These technological flourishes and innovations will be increasingly pushed not just by designers and developers – although forthcoming flexible screen technology holds the promise of devices that could be issued, with relative safety, to schoolchildren. Changes in the market – our market – will begin to drive them. Already, iTunes U integrates with iBooks. As that platform, shared instruction via online courses, and Massively Open Online Courses (MOOCs) begin to grow and push on the academy, innovations in pedagogy will join accessibility, design, and "value added" as factors, like the adaptive learning advertised as a part of McGraw-Hill's forthcoming SmartBook offerings, going into the creation of next-generation texts.

This isn't postmodernist denial of a fixed definition of anything. Nor is it an embrace of the Multiform Story posited by Murray in Hamlet on the Holodeck as the way of the future. The story, the text, the book, and the "one more thing" that is slouching towards Cupertino to be born are all defined, concrete things. While the narratives created by a game may vary in detail (Do you turn to the Dark Side or not in the latest Star Wars video game?), the narrative skeleton – the conflict that drives the character to and through the choice towards whichever resolution – remains constant. Without such a frame, the Kingian telepathy that produces narrative and the ability of those participating in that narrative to have a shared experience with others is impossible and will remain impossible until artificial intelligence advances far enough for computers (or their descendants) to share in our storytelling. The issue arises from our desire to conflate them into a single thing. For centuries, they could be conveniently spoken of in one breath. With the e-books that will be, they can no longer be conflated with the same ease.

Nor is this merely prognosticating a future that is already here. Criticism necessarily follows created content – wherever that content might lead. While paper may have cachet for some time to come, it is inherently limiting. This article, for example, incorporates a still image. An iBook version might include moving images, sound, and hyperlinks to other texts, additional material in an iTunes U class, and other apps. In fairness to paper, the iBook version would also be harder to read in brightly lit settings and impossible to read after the battery ran down.

The core reason, amidst these changes, that things will remain the same is that – with the possible exception of the postmodernists – what motivates us to explore the issues inherent in the texts before us are the stories they convey. Whatever the medium, be it papyri, pulp, or LED, the story rivets us and invites us to immerse ourselves in it – perhaps to the point of trying to learn the mechanics behind the curtain that keep us spellbound.

What we as scholars do, then, will have to adjust. Our mainstream critical apparatus, and the frame of our discipline, is inadequate for the coming task. Dracula HD may provide a greater sense of verisimilitude than a traditional novel, but this push for verisimilitude has meant liberties were taken with the text – small additions and deletions that make it slightly different from the canonical novel. The text of an iBook edition of Dracula may be canonical but it's also scalable, making page references meaningless.

We will also have to learn how to talk about confederated texts. Some techniques will come from what is still the relatively new – the language of film criticism, for example. Other moves will come from reincorporating into mainstream criticism what everyone once knew – the language of the Medievalists who have to discuss manuscripts and the art historians who still work with the way the image influences the viewer.

And then, there are the things we do not yet see that will require entirely new modes of thought and reflection. We study narrative and storytelling as a part of our discipline. With the increase of computing power, the old "Choose Your Own Adventure" book format is growing up quickly, forming a new genre of literature, or something like literature – a concept posited in Murray's Hamlet on the Holodeck over a decade ago. Will we be the ones to address how narrative flows in the twilight world between books and games? What will we have to do differently when we are dealing with stories whose outcomes become different with different readers not because of their responses to a fixed text – the reactions Byron feared when he sent his works out into the world – but because they cooperate with the characters and creators in fashioning the story itself? Or should we, as Murray posits in "Inventing the Medium," surrender these creations to a new field – New Media Studies in a Department of Digital Media – and go the way of the Classics Departments that many of us have watched be shuttered or absorbed into our own departments?

And if we exclude such forms of storytelling, how are we then choosing to self-define our profession? Are we content to surrender the keys to our garden to the publishing houses of the world – whether they be great or small? Is it the static nature of the text – the printed word bound between two covers – that we claim to value? If so, how do we continue to justify researching the manuscripts of writers? Whom do we ask to determine the canonical version of any work that saw editorial revision over its life – whether those changes were overseen by editors, literary executors, or the artists themselves?

The interactivity made possible by the iPad not only challenges the definition of the objects we study, it challenges our assumptions about where the borders of our form of study lie. We no longer exclusively "cough in ink" as we imagine our Catullus. (Yeats 154, 7)

As first steps to this re-imagining of our discipline, we should consider the nuts and bolts of how we talk rather than of what we talk about. How, for example, do you properly cite a passage in an e-book in a manner that does not lead to confusion? Do you make it up as you go along, as I did for the sake of an example, with my reference to Yeats' "The Scholars" (poem number, line number)? Should you list only the year of release for iPad Apps – essentially treating them as books – or, given the ability to update them, should we list the month and day as well? Will we need to do the same with books, given that these, too, can now be updated?

While search tools and hyperlinks (or their descendants) may render some concerns moot, they will not fully resolve the issues until our journals become e-books or confederate themselves with the texts they examine. Even in these cases, however, they may not eliminate all of them. "Which 'Not at all' in The Waste Land," a future reader might find himself asking a scholar, "did you want me to weigh again?" And while that scholar has the ability to reference line numbers, those working with fiction and many kinds of drama will not. Such a process should not, however, be seen as an exercise in pedantry. How we choose to record our sources and cite them is not just a roadmap for those who follow what we write. It marks what we consider essential information – what we value in our sources. It will help us to get a greater sense of what we wish to preserve and enhance in next generation texts, much as Andrew Piper's Book Was There attempts to assay what it is we value in the book through an almost free-associative exploration of the words and metaphors that surround and support the book.

An even more challenging shift will be in our most fundamental relationship to the text, which we currently imagine as a private experience. While we may currently hold conversations with the text within our minds, the text itself remains a static thing – a fixed object that we react to. In essence, the conversation is one way. The adaptive textbooks being developed by the major textbook publishers will make that interaction two-way. In short, the book will read us as we read it. While this may be a boon for learning, it is not an unalloyed boon. Because they reside on a device that is always connected, the books can communicate what they learn of us to their publishers and marketing partners. Or a government. While such arrangements can bring us greater convenience and security, they do so at a cost. And while I acknowledge that there is a loss of a kind of privacy, we should not forget that targeted advertising is nothing new. Remington Arms is not going to place its advertisements in Bon Appétit. Likewise, Amazon's recommendations are not so far removed from the book catalogues found in the back of Victorian novels. Indeed, William Caxton, the first English printer, made sure to mention his backer's high opinions in his prefaces – as he did in his second printing of The Canterbury Tales – and advertised his publications in hopes of driving sales. So the practice of using ads and reader reviews of one book that you like to try to sell the next has been with us for a very long time.

And yet, Big Data gives the appearance of a violation of privacy.  While we may like the efficiency offered by such techniques (e.g., getting coupons for the things we want rather than things we don't), we prefer not to think about the mountain of data being compiled about each and every one of us every day of our lives. Nor do we like to think of our books coming to us as a part of a business, although the major publishing houses are nothing if not businesses – businesses desperate to know what we want to read so that they can sell us more of the same. That they can now use our books to mine for information feels like it is crossing a new line – even if it is not. After all, how many of us have willingly participated in an offer that gave us access to coupons based on the number of books we bought at a store – one that assigned us a traceable number? Bought a book online from Amazon or Barnes and Noble? Or became a regular enough customer that a small, independent bookstore owner or friendly staffer could recommend books that we might like? Because the last of these involves actual human contact, it is more intimately associated with us as individuals than the algorithm-generated, aggregation-based results of an Amazon. But we are not yet ready to embrace a trust of the machine and whatever sinister, calculating faceless figure we imagine to be controlling it.

In short, we may want texts that in some way touch us. At the same time, we want to read the text but we do not want it to read us. We want to find books but not let those same books be used to locate us. We wish to classify a text by genre but not let it place us into a category. We are willing to give ourselves to a book -- to lose ourselves so deeply in it that we cease to be -- but we do not want it to give us away.

We want all the advantages of our love affair with reading to remain without giving the innermost, anonymous part of ourselves away.

We don't want books to betray the secrets we offer them.

And given the company these next generation texts keep, perhaps there is cause for fearing such betrayal. Goodreads is now owned by Amazon and spends too much time talking to Facebook -- and it deals primarily with traditional texts. But, ultimately, we will control how much we let these texts tell others about us. It will be up to us to check our applications' settings.

These next generation texts will not change the core of what we study – although they may challenge many of the assumptions underlying the critical approaches we use when coming to a text. They will also ask us to consider revising what we consider a "legitimate" text and "legitimate" means of publication – a distinction those approaching tenure in a shrinking publication market must face with a certain anxiety. Yet stories have always resisted such categories when they are applied too narrowly or exclusively. As such, our frustrations and fears may tell us more about ourselves than the stories we purport to be concerned with. In that regard, next generation texts may be the best thing that has happened to our profession in some time. They will force us to confront what our purpose is by making us figure out exactly what it is that we are studying and why we choose to study it.

References

"A Study in Pink," Sherlock. Dir. Paul McGuigan. Perf. Benedict Cumberbatch and Martin Freeman. Hartswood Films/BBC Wales/Masterpiece 2010.

"Apology" The Blair Witch Project. Dir. Daniel Myrick and Eduardo Sánchez. Perf. Heather Donahue, Joshua Leonard and Michael Williams. Lion's Gate 1999. http://www.youtube.com/watch?v=2m_lqGnLtWA&feature=youtube_gdata_player 22 November 2012.

Azevedo, Alisha. "10 Highly Selective Colleges Form Consortium to Offer Online Courses," The Chronicle of Higher Education. 15 November 2012. http://chronicle.com/blogs/wiredcampus/10-colleges-will-offer-online-courses-for-participants-in-study-abroad-programs/41070?cid=at&utm_source=at&utm_medium=en 25 November 2012.

Beatles, The. The Yellow Submarine. Subafilms, Ltd. 2011. iPad App.

Beckett, Samuel. The Complete Dramatic Works of Samuel Beckett. New York: Faber and Faber 2006.

Bible+. Olive Tree Bible Software 2012. iPad App.

Biblion: Frankenstein: The Afterlife of Shelley's Circle. The New York Public Library 2012. iPad App.

Blake, William. The Complete Poetry & Prose of William Blake. New York: Anchor Books 1998.

Bonnington, Christina. "Flexible Displays Landing in 2012, But Not in Apple Gear," Wired. 16 May 2012. http://www.wired.com/gadgetlab/2012/05/apple-flexible-displays/. 25 November 2012.

The Book of Kells. X Communications. 2013. iPad App.

Condition One. Condition One, LLC. 2012. iPad App.

Doyle, Arthur Conan. "A Study in Scarlet," The Complete Sherlock Holmes. New York: Bantam 1986.

Duhigg, Charles. "How Companies Learn Your Secrets," The New York Times Magazine. 16 February 2012. http://www.nytimes.com/2012/02/19/magazine/shopping-habits.html?pagewanted=1&_r=1&hp. 14 May 2013.

Eliot, T. S. The Waste Land. Touch Press, LLP and Faber and Faber 2011. iPad App.

Fain, Paul. "Establishment Opens Door for MOOCs," Inside Higher Ed. 14 November 2012. http://www.insidehighered.com/news/2012/11/14/gates-foundation-and-ace-go-big-mooc-related-grants 25 November 2012.

"Gerald of Wales," Melvin Bragg. In Our Time. BBC 4. 4 October 2012. http://www.bbc.co.uk/programmes/b01n1rbn

Gore, Al. Our Choice. Push Pop Press 2011. iPad App.

Harding, Michael. The Misogynist in A Crack in the Emerald: New Irish Plays. Ed. David Grant. London: Nick Hern Books 1995.

"Journal" Blair Witch Wikihttp://blairwitch.wikia.com/wiki/Heather_Donahue%27s_Journal 22 November 2012.

Joyce, James. Ulysses. New York: Vintage 1990.

Joyce, William. The Fantastic Flying Books of Morris Lessmore. Moonbot Interactive 2011. iPad App.

King, Stephen. On Writing: A Memoir of the Craft. New York: Pocket Books 2000.

Kolowich, Steve. "Elite Online Courses for Cash and Credit," Inside Higher Ed. 16 November 2012. http://www.insidehighered.com/news/2012/11/16/top-tier-universities-band-together-offer-credit-bearing-fully-online-courses 29 November 2012.

Monty Python and the Holy Grail. Dir. Terry Gilliam and Terry Jones. Perf. Graham Chapman, John Cleese, Eric Idle, Terry Gilliam, Terry Jones. Sony Pictures 2001. DVD.

The Monty Python: The Holy Book of Days. Melcher Media 2012. iPad App.

Murray, Janet. Hamlet on the Holodeck. Cambridge, MA: MIT Press 1998.

——, "Inventing the Medium," The New Media Reader. Cambridge, MA: MIT Press 2003.

Piper, Andrew. Book Was There. Chicago: University of Chicago Press 2012.

"Round Table Discussion," Beowulf. Dir. Stellan Olsson, Perf. Benjamin Bagby. Koch Vision 2007. DVD.

Stoker, Bram. Dracula HD: Original Papers Edition. Intelligenti, Ltd. 2010. iPad App.

——. The Essential Dracula. Ed. Leonard Wolf. New York: Plume 1993.

Stone, Brad. "Amazon Erases Orwell Books From Kindle" The New York Times 17 July 2009. http://www.nytimes.com/2009/07/18/technology/companies/18amazon.html 22 November 2012.

Tolkien, J. R. R. The Hobbit. Boston: Houghton Mifflin Co. 1997.

——. The Lord of the Rings. Boston: Houghton Mifflin Co. 1987.

Yeats, W. B. "The Scholars," The Collected Poems of W. B. Yeats: Revised Second Edition. Ed. Richard J. Finneran. Scribner 1996. iBook.


Dr. Matthew M. DeForrest is a Professor of English and the Mott University Professor at Johnson C. Smith University. The observations and opinions he expresses here are his own. You are very welcome to follow him on Twitter and can find his academic profile at Academia.edu.

Animoji in the Classroom

At the end of February, Samsung joined Apple in offering its customers Animoji to play with on its Samsung Galaxy S9. I am far less concerned with the question of who got here first and who does it better (Both had to have been working on it for some time and are offering very different experiences.) than I am with the fact that both are now offering it in spite of the tech media’s dismissal of Apple’s offering as a gimmick used to show off its facial scanning technology — the kind of thing you play with the first couple of days then never use again.

Now, I am not saying that I have used Animoji after the first couple of days (I do have plans, however, to send more messages to my daughter once the dragon Animoji arrives.). I will say that I think it is too early to count this technology out — especially in the classroom.

Instead of thinking of Animoji as a fully developed feature, it would be better for us to consider it a proof of concept. 

Our smart phones are now capable of reasonably effective motion capture of the kind that used to require a Hollywood studio. No, our students will not be handing us the kind of things we have seen from Andy Serkis or Benedict Cumberbatch on the big screen any time soon. But if you look at the video clip of Serkis I have linked to here, you may notice that Apple’s Animoji are more finished than the first-pass motion capture shown of Gollum. That means the iPhone X can do more than the animation studios of Weta, circa 2012, could.

That is the level of technology now in our students’ pockets.

I could make some of the usual predictions about how students will use this: Adding animated animals and speakers to their presentations, impersonating their friends and members of the faculty and administration; the usual sets of things. But that is seldom the way technology leaps forward. PowerPoint, for example, was initially developed to replace slides for business presentations, not for (sometimes badly designed) classroom lectures. Now, students arrive at university with a working knowledge of how to use PowerPoint to do a class presentation.

The students who will surprise us with how they can use Animoji are probably in middle school now. And before this sounds too far-fetched, consider that my daughter, who starts middle school next year, does not have a favorite television show she comes home to. She does, however, follow several Minecraft shows. Her current favorites include Aphmau and Ryguyrocky and his Daycare series. When Markus Persson created Minecraft, I would guess that building in options for people to make animated shows available via streaming was not one of the items on his to-do list.

What is predictable, however, is that the potential inherent in Animoji underlines the importance of wrapping our heads around how we approach multimodal communication. If we limit ourselves to the obvious use case of an Animoji-based presentation — say, the Panda and Dragon informing viewers about Chinese ecology — we are looking at helping students learn how to write copy, capture video from appropriate angles, and present verbally and non-verbally. Currently, those are skills taught in different disciplines (Composition — English, Videography — Film, Public Speaking — Communications and/or Acting — Performing Arts) housed in multiple departments. Beginning to work out the practicalities of these kinds of collaborations (Where will the course be housed? Can we afford to have a course like this team taught by three faculty? If not, where will we be able to find someone who can be credentialed to teach it?) now, rather than when the multimodal presentations start arriving and we are already too late, will offer a competitive advantage to both those graduating with the skills and the schools that can offer training in those skills.


Dr. Matthew M. DeForrest is a Professor of English and the Mott University Professor at Johnson C. Smith University. The observations and opinions he expresses here are his own. You are very welcome to follow him on Twitter and can find his academic profile at Academia.edu.

Apple’s HomePod Short

Apple has released a video featuring its HomePod speaker. I hesitate to call it an advertisement, because it isn’t quite an ad. It is something closer to a short film or music video. If you haven’t clicked the link to watch it (And, if you haven’t, why haven’t you?), it is a Spike Jonze directed performance — FKA twigs dancing to “’Til It’s Over” by Anderson .Paak. In the performance, she expands her narrow apartment at the end of a dispiriting day through her dance.

First things first: I enjoyed the art — both the music and the dance.

I did want to point out a not-so-subtle subtext to the video. Her narrow apartment expands through the music and her dance — an obvious nod to Apple’s description of the HomePod as producing room filling sound — the kind of audio reproduction that makes you get up and move.

I think there is something else here, though — something near and dear to this English Professor’s heart. Apple has, on more than one occasion, explicitly stated that it tries to exist at the intersection of technology and the Liberal Arts and that technology alone is not enough. Think of Steve Jobs’ assertion, during the iPad 2 announcement, that those who just look at the “speeds and feeds” miss something important about a post-PC device. Currently, a lot of tech journalists are critiquing the HomePod because Siri doesn’t do as well as they want.

That is, ultimately, a “speeds and feeds” critique. 

Apple was not trying to manufacture the Star Trek computer with a HomePod. It was trying to manufacture a device that would make you want to get up and dance because the music was good enough to transport you.

While I have not been watching out for the reviews of the Amazon Echo or Google Home, I don’t recall tech journalists asking if these speakers were producing an experience that made you want to get up and dance after spending a long day in places that feel oppressive and confining — as the world FKA twigs inhabits is made to appear. But if the HomePod offers people in those worlds that kind of experience, it will be far more valuable than a device that can more conveniently set two timers or tell you what the GDP of Brazil was in 2010.


Dr. Matthew M. DeForrest is a Professor of English and the Mott University Professor at Johnson C. Smith University. The observations and opinions he expresses here are his own. You are very welcome to follow him on Twitter and can find his academic profile at Academia.edu.

Quick Thoughts: Black Panther and Brigadoon

I should really know better than to do this. One should not comment on works they have not seen or reviewed. But what is a blog for if not for occasionally indulging in partially formed thoughts and tossing out ideas for others to follow up on?

Critical and popular response to Black Panther is overwhelmingly positive and I do intend to watch the film — as soon as I find a moment and find the right place for it in my triage list of backlogged books, films, and television shows. Indeed, I am really looking forward to it.

But even before doing so, I have noticed something about the way Wakanda is being positioned in popular culture. It is the place that should have been — an idealized African nation where native culture could develop without the oppression inherent in a century or more of European colonization.[1] The film then engages this Afrofuturist place with the problem of the African American context through the character of Erik Killmonger.

As I have not seen the film and am not a longtime reader of the comic book adventures of the Black Panther, I have no intention of commenting on the specifics of the confrontation. Nor am I familiar enough with Afrofuturism to do more than invoke the name of the genre. I have, however, been struck by the strange contrast that looking back in time and across the sea (with all the remembrances different cultures have of their immigration, whether it be forced, unavoidable, or seen as some kind of new start in the land of opportunity) creates.

In Black Panther, we are shown an idealized culture that could have been. Strangely, this places it in the same tradition as the idealized nostalgia films of other hybridized American identities, whether it be The Quiet Man, Darby O’Gill and the Little People, or Brigadoon. But Black Panther sits uneasily in this tradition, at best — not because of the skin color of its actors, its geographical location, or its artistic quality (No one is going to claim Darby O’Gill is high art — even in jest.) but because it is looking forward while the above-mentioned films associated with Ireland and Scotland look back into an idealized past of the kind Eamon de Valera invoked in his 1943 address “The Ireland that We Dreamed Of”.

This is not an easy comparison/contrast to tease out. The three films that look to the Celtic nations of the British Isles were made in a much different era than Black Panther. And while the Atlantic Slave Trade, the Irish Potato Famine/An Gorta Mór, and the Highland Clearances were all tragedies, there is limited benefit in trying to directly compare them. Indeed, discussing them together only serves to alienate the American descendants of these tragedies from one another rather than building any kind of sympathy or understanding for what each went through — an alienation significant enough that I hesitated to write this post at all.

But I do think that there is an interesting question here — one that someone should tease out because the stories we tell to ourselves about ourselves matter. What is it that caused one group to invoke their nostalgia in an idealized past and the other in an idealized future? What does each tell us about the way we imagine ourselves when we self-identify with these communities?


[1] It is not the first such imagining, of course. It is the same thought experiment that produced this map. It also sets itself against the Victorian and Edwardian imaginings of authors like H. Rider Haggard and newer forays into imagined Africa like Michael Crichton’s Congo.


Dr. Matthew M. DeForrest is a Professor of English and the Mott University Professor at Johnson C. Smith University. The observations and opinions he expresses here are his own. You are very welcome to follow him on Twitter and can find his academic profile at Academia.edu.