'All Watched over by Machines of Loving Grace': Care and the Cybernetic University by Audrey Watters (May 27, 2020)

Author: Audrey Watters

Watters, Audrey. “'All Watched over by Machines of Loving Grace': Care and the Cybernetic University.” Hack Education, 27 May 2020, hackeducation.com/2020/05/27/machines-of-loving-grace.

I gave this talk this morning at the Academic Technology Institute.

Everyone is in crisis. I want to recognize that at the outset. Some crises may be deeper, more long lasting; some may be hidden, unspoken, unspeakable; some might seem minor, but loom monstrously; some may be ongoing; some may be sudden. Some might seem surmountable, but roar back into renewed disaster. Few of these will be resolved anytime soon.

It is very challenging, in the midst of all this crisis — personal crisis, medical crisis, mental health crisis, financial crisis, political crisis, institutional crisis, societal crisis — to offer you a message this morning that is insightful, realistic, necessary, and hopeful, although certainly that's the task many speakers aspire to.

Like many Americans, I tuned in a week or so ago to listen to President Obama deliver his graduation speech to the nation, knowing that it wasn't just a speech for those leaving high school. Graduation speeches never are, of course, just for the graduates. They're also for the parents and grandparents and siblings and teachers. They're a way that the community marks the transition — into adulthood, in some cases, but always looking forward into "what's next."

"What's next?" Right now, we don't know. We never do, really. Right now, instability and uncertainty fuel our crises — individually and collectively. Will schools be open in the fall? Will there be face-to-face classes? Will there be adequate testing? Will there be a vaccine? Will you have a job? Can you survive? Can I? What does one say to an audience in the face of all this?

"Don't be afraid," President Obama told graduates. "Do what you think is right," he said, not what feels good. "Build community." These are fine exhortations, I suppose (although I think it's perfectly normal to be afraid).

I will never be a graduation speaker, I'm certain. It's not that I can't deliver something pithy with one or two lines that make for great tweets. It's not that I don't try to inspire my audiences to go out and make change, make the world a better place. But my talks aren't reassuring or congratulatory. And my messages about refusal, luddism, pigeons, critical theory, and historicism aren't really the sort of thing that administrators go for on ceremonial occasions. Thank you, by the way, for inviting me to speak to you today.

I'd like to imagine this talk, nonetheless, as one connected to the genre of "graduation speech." It's the end of the school year, so why the hell not.

Insofar as that genre is about culmination, about welcoming students into the world, I want to offer today a provocation about welcoming students (and staff) back into educational institutions — whether on campus or not. Because just as we are sending students out into an incredibly precarious world, we are also bringing them into an incredibly precarious higher education system. It's a system that has, perhaps more than ever, had its inequalities and injustices exposed. Indeed, we can say that for the whole damn world.

These inequalities and injustices are not new. I want to make that clear. They are exacerbated, no doubt, by the global pandemic and economic depression.

If there is one message that I want to get across to you today, it is that we must ground our efforts to plan for the fall — hell, for the future — in humanity, compassion, and care. And we cannot confuse the need to do the hard work to set institutions on a new course of greater humanity with the push for an expanded educational machinery. We have to refuse and refute those who argue that more surveillance and more automation is how we tackle this crisis, that more surveillance and AI is how we care.

We can trace the histories of our schools, our beliefs and practices about teaching and learning, our disinvestment in public institutions, our investments in technological solutions to discover how and why we got here — to this moment where everything is falling apart and the solution (from certain quarters) is software that sounds like "panopticon."

It's that last bit — the histories of our investment in technological solutions (and our faith in technological solutions even when they are obviously so utterly dystopian) — that is really the focus of my work.

In his book From Counterculture to Cyberculture, historian Fred Turner examines the influence of Stewart Brand and the Whole Earth Catalog on the emergence of Silicon Valley as the center of technological development in the US in the 1960s. Turner uses a poem by Richard Brautigan, printed and handed out on broadsheets in the Haight-Ashbury district of San Francisco in 1967, to illuminate how the counterculture came to embrace a technocratic vision of the future and how, in turn, technologists came to view computers as tools of personal liberation, how the technocrats came to believe their disruption makes them radicals, revolutionaries.

As I sat down to prepare this talk, I thought immediately of that poem by Brautigan, "All Watched Over By Machines of Loving Grace":

I like to think (and
the sooner the better!)
of a cybernetic meadow
where mammals and computers
live together in mutually
programming harmony
like pure water
touching clear sky.

I like to think
(right now, please!)
of a cybernetic forest
filled with pines and electronics
where deer stroll peacefully
past computers
as if they were flowers
with spinning blossoms.

I like to think
(it has to be!)
of a cybernetic ecology
where we are free of our labors
and joined back to nature,
returned to our mammal
brothers and sisters,
and all watched over
by machines of loving grace.

The poem imagines a world in which nature and machines have merged — cybernetic meadows, cybernetic forests, cybernetic ecology — seemingly without environmental destruction. Pure water, clear sky. It's a world in which machines have advanced enough that human labor is no longer necessary.

As we look around us today, I think that there is probably a great appeal to this vision of a "green technology." Certainly there are plenty of Covid-related reasons why we might want all humans to stay at home and let the machines work for us. This would require, of course, a complete rearrangement of our economic system — a rearrangement that many politicians are clearly unwilling to embrace: pay people to stay home.

But those last two lines — "all watched over by machines of loving grace" — always turn my stomach. And it's those last two lines that came to mind when I thought about the ways in which college administrators and professors and staff are going to be asked to handle this moment, pressed to adopt more technology, to "do more with less," to automate more tasks, to utilize more analytics. "All watched over by machines of loving grace."

Certainly the machines that we have set forth to watch over students — and even that verb hints at a terrible practice — are not filled with "loving grace." Even if we anthropomorphize the metal as mentor, these machines have no dignity, no decency. They are extractive, mining students' personal data. They are exploitative, selling and sharing this data often without students' knowledge or consent. They are only "loving" if you believe that surveillance, coercion, and discipline are the basis of love. And good grief, no one would ever describe the learning management system, the student information system, or the vast majority of education technology tools as graceful. They're clunky and unwieldy. They suck.

Nonetheless, Brautigan's 1967 poem does capture a technological utopianism that remains powerful, even pervasive — in education technology circles, in Silicon Valley, and even in society in general. That utopianism has become deeply embedded — hard-coded, if you will — in our relationship with technology over the course of the past fifty or sixty years. It's no surprise then that some would see the baby monitor, the predictive algorithms, the online proctoring software, and the like as "machines of loving grace."

There have always been critics of this machinery. Hannah Arendt. Jacques Ellul. Lewis Mumford. Their criticisms accompanied the development, following World War II, of computing, cybernetics, and artificial intelligence. They cautioned about the relationship between computing and warfare. They cautioned about the metaphors of machinery, the mechanization of society. They cautioned that unfettered optimism about these machines obscured power relations. "The myth of technological and political and social inevitability," Joseph Weizenbaum wrote in 1976, "is a powerful tranquilizer of the conscience. Its service is to remove responsibility from the shoulders of everyone who truly believes it."

There were plenty of predictions about the inevitability of artificial intelligence — then as now. Carnegie Mellon professor Herbert Simon, for example, had boasted in 1965 that "machines will be capable, within twenty years, of doing any work a man can do" (a man can do — more on that in a minute), and MIT professor Marvin Minsky had said in 1967 that "within a generation… the problem of creating 'artificial intelligence' will substantially be solved." It's Herbert Simon, incidentally, who is often invoked these days when people argue that the job of instructional designer and instructional technologist should be rebranded as "learning engineer," a phrase Simon used to describe his vision for a mechanized university administration.

I am particularly interested in the development of computing and education technologies in post-war America because they occurred in such tumultuous times on campuses — both K-12 and colleges — something that we (in education technology at least) seem to rarely consider. You can read far too many stories about the development of ed-tech that fail to mention Brown v Board of Education or the Little Rock Nine. I don't think I've seen Mario Savio's famous speech in 1964 on the steps of Sproul Hall at UC Berkeley mentioned in a history of teaching machines — well, except in the book I've just written which will be out next year:

There's a time when the operation of the machine becomes so odious, makes you so sick at heart, that you can't take part! You can't even passively take part! And you've got to put your bodies upon the gears and upon the wheels ... upon the levers, upon all the apparatus, and you've got to make it stop! And you've got to indicate to the people who run it, to the people who own it, that unless you're free, the machine will be prevented from working at all!

This machine — the university machine — is not a machine of loving grace. "Do not fold, spindle, or mutilate," student protestors exclaimed, demanding at least the same level of care afforded the IBM punch card. The draft and the student information system used the same machinery, after all.

But just as students (and others) identified and protested the burgeoning educational technocracy, recognizing how it reduced them from humans to data, many in universities were happily embracing and entrenching it, particularly those in and around the nascent fields of cybernetics and artificial intelligence who sought to systematize and to mechanize the mind.

This mechanization, I believe — metaphorically or literally, however you choose to read it — is why we must refuse to move farther along a path that equates teaching and learning with computation. More Mario Savio please. Less machine learning.

"Can a machine think?" Alan Turing famously asked in 1950. But rather than answer that question, Turing proposed something we've come to know since as the Turing Test. His original contrivance was based on a parlor game -- a gendered parlor game involving three people: a man, a woman, and an interrogator.

This imitation game is played as follows: the interrogator cannot see the man or woman but asks them questions in order to identify their sex. The man and woman respond via typewritten answers. The goal of the man is to fool the interrogator. Turing's twist: replace the man with a machine. "Will the interrogator decide wrongly as often when the game is played like this as he does when the game is played between a man and a woman?" he asked.

The question is therefore not "can a machine think?" but "can a machine fool someone into thinking it is a woman?"

What we know today as the Turing Test is not nearly as fascinating or as fraught -- I mean, what does it imply that the bar for intelligent machinery was, for Turing, to be better at pretending to be a woman than a man is? What would it mean for a machine to win that imitation game? What would it mean for a machine to fool us into believing, for example, that it could perform affective labor not just computational tasks?

Perhaps it's less that the machine can or might fool us, and more that, when we consider the question "can a machine think" today, our definition of thinking has become utterly mechanistic — and that definition has permeated our institutional beliefs and practices. It is the antithesis of what Kathleen Fitzpatrick has called for in her book Generous Thinking — what she describes as "a mode of engagement that emphasizes listening over speaking, community over individualism, collaboration over competition, and lingering with ideas that are in front of us rather than continually pressing forward to where we want to go." Generous thinking is something a machine cannot do. Indeed it is more than just an intellectual endeavor; it is a political action that might take scholarly inquiry in the opposite direction of technocratic thought and rule. "We have," as Joseph Weizenbaum wrote, "permitted technological metaphors… and technique itself to so thoroughly pervade our thought processes that we have finally abdicated to technology the very duty to formulate questions."

In 1972, MIT professor Hubert Dreyfus published a book highly critical of artificial intelligence — What Computers Can't Do — lambasting those researchers who made these grandiose promises about their work. Dreyfus argued that optimism about the capabilities of AI was unwarranted. AI researchers were working with a very limited notion of "intelligence" — the idea that the human brain is an information processor, an idea that was flawed and misleading. To be "intelligent," Dreyfus wrote, a machine "must only compete in the more objective and disembodied areas of human behavior, so as to be able to win at Turing's game."

Hubert Dreyfus's criticisms of AI were roundly dismissed by almost everyone working in the field at the time (although he has since been largely vindicated). He complained that none of his MIT colleagues would eat lunch with him — very little generosity among the MIT faculty, no surprise — except, that is, for Joseph Weizenbaum.

Weizenbaum, for his part, was a computer scientist (Dreyfus was a philosopher). He was one of the founders of the field of artificial intelligence and with the publication of Computer Power and Human Reason in 1976, one of its most vocal critics. While Dreyfus wrote about "what computers can't do," Weizenbaum was more interested instead in "what computers ought not do."

Weizenbaum had developed in the mid-1960s one of the best known chat-bots. Its name: ELIZA — yes, named after Eliza Doolittle in George Bernard Shaw's play Pygmalion, taught to speak with an upper-class accent so she can "pass."

The program ELIZA ran a script simulating a psychiatrist. A "parody," Weizenbaum wrote. "The Rogerian psychotherapist is relatively easy to imitate because much of his technique consists of drawing his patient out by reflecting the patient's statements back to him."

"Hello," you might type. "Hi," ELIZA responds. "What is your problem?" "I'm angry," you type. Or perhaps "I'm sad." "I am sorry to hear you are sad," ELIZA says. "Do you think coming here will make you happy?" "Well, I need some help," you reply. "What would it mean for you to get some help?" ELIZA asks. "Perhaps I wouldn't fight so much with my mother," you respond. "Tell me more about your family," ELIZA answers. The script always eventually asks about family, no matter what you type, not because it understood trauma but because it's been programmed to do so. That is, ELIZA was programmed to analyze the input for keywords and to respond with a number of canned phrases, that contained therapeutical language of care and support — a performance of "intelligence" or intelligent behavior, if you will, but just as importantly perhaps a performance of "care." The verbal behavior of "loving grace," perhaps.

Weizenbaum's students knew the program did not actually care. Yet they still were eager to chat with it and to divulge personal information to it. Weizenbaum became incredibly frustrated by the ease with which this simple program could deceive people — or the ease with which people were willing to go along with the deception, perhaps more accurately. When he introduced ELIZA to the non-technical staff at MIT, they treated the program as a "real" therapist. When he told a secretary that he had access to the chat logs, she was furious that Weizenbaum would violate her privacy — violate doctor-patient confidentiality — by reading them.

Weizenbaum was dismayed that so many practitioners — both of computer science and of psychology — embraced ELIZA and that they argued the program demonstrated that psychotherapy could be fully automated. There was, after all, a shortage of therapists, and automation would make the process much more efficient, as the field would no longer be limited by the one-to-one patient-therapist ratio. Weizenbaum balked at this notion. "What must a psychiatrist who makes such a suggestion think he is doing while treating a patient," he wrote, "that he can view the simplest mechanical parody of a single interviewing technique as having captured anything of the essence of a human encounter?"

Let's rewrite Weizenbaum's question for education, for the use of automation in teaching and counseling: "What must a professor or administrator who makes such a suggestion think he is doing while working with a student that he can view the simplest mechanical parody of a single pedagogical technique as having captured anything of the essence of a human encounter?"

I've written before about the legacy of ELIZA in education, about the chat-bots that have been developed for use as "pedagogical agents." These programs were often part of early intelligent tutoring systems and, like ELIZA, were designed to respond helpfully, encouragingly when a student stumbled. Machines of loving grace. The effectiveness of these chat-bots is debated in the research (what do we even mean by "effectiveness"?), and there is incomplete understanding of how students respond to these programs, particularly when it comes to vulnerability and trust, such core elements of learning.

Are chat-bots sophisticated enough to pass some sort of pedagogical Turing Test? (Is that test, like Turing's imitation game, fundamentally gendered?) Or rather is it, as I fear, that folks have decided they just don't care? They do not care that the machines do not really care for students, as long as there's an appearance of responsiveness. Indeed, our educational institutions, particularly at the university level, have never really cared about caring at all. And perhaps students do not care that the machines do not really care because they do not expect to be cared for by their teachers, by their schools. "We expect more from technology and less from each other," as Sherry Turkle has observed. Caring is a vulnerability, a political liability, a weakness. It's hard work. And in academia, it's not rewarded.

So what does it mean then if we offload caring and all the affective labor - the substantive and the performative - to technology? To machines of loving grace. It might seem counterintuitive that we'd do so. After all, we're often reassured that computers will never be able to do that sort of work. They're better at repetitive, menial tasks, we're told — at physical labor. "Any work a man can do," as Herbert Simon said.

And yet at the same time, plenty of us seem quite happy to bare our souls, trust our secrets, be vulnerable with and to and by and through our machines. What choice do we have, we're now told. So what does that mean for teaching and learning, now that we're told (and possibly resigned to the fact) that the machines of loving grace are going to be compulsory?

These are not technical questions, although there are vendors lined up with technical solutions. They are political questions. And they are moral questions. As Weizenbaum wrote, "the question is not whether such a thing can be done, but whether it is appropriate to delegate this hitherto human function to a machine."

Folks, let's be clear. It is not appropriate to delegate the human function of education to a machine. In education there are no machines of loving grace. We must rethink and reorient our institutions away from that fantasy, from the desire to build or buy them.

We must, as Weizenbaum wrote, "learn to say 'No.'" And we must, as Donna Lanclos has written, pay attention to others' refusal as well. Refusal is different than resistance, she argues. Refusal means "not participating in those systems, not accepting the authority of their underlying premises. Refusal happens among people who don't have access to structural power. Refusal is a rejection of framing premises. Recognizing refusal requires attention, and credit to tactics such as obfuscation, or deliberate misinterpretation." Refusal means undermining and unwinding decades of a cybernetic university and building something else in its place.

I don't mean here that we should refuse online education, to be clear. I would rather faculty and students and staff be online than dead. I care. But what I do mean is that we need to resist this impulse to have the machines dictate what we do, the shape and place of how we teach and trust and love. We need to do a better job caring for one another — emotionally, sure, but also politically. We need to recognize how disproportionate affective labor already is in our institutions, how disproportionate that work will be in the future. We need to agitate for space and compensation for it, not outsource care to analytics, AI, and surveillance.

We must refuse to be watched over, to have students and staff watched over by machines of purported loving grace. We must put our bodies upon the gears and upon the wheels and make the machines stop.
