
What is AI? Artificial Intelligence and the Future of Teaching and Learning (pages 11-17), U.S. Department of Education (May 2023)

Author: U.S. Department of Education

"What is AI?" Artificial Intelligence and the Future of Teaching and Learning (PDF), pp 11-17, May 2023, www2.ed.gov/documents/ai-report/ai-report.pdf.

Artificial Intelligence

and the Future of

Teaching and Learning

Insights and Recommendations

May 2023


What is AI?

Our preliminary definition of AI as automation based on associations requires elaboration. Below we address three additional perspectives on what constitutes AI. Educators will find these different perspectives arise in the marketing of AI functionality and are important to understand when evaluating edtech systems that incorporate AI. One useful glossary of AI for Education terms is the CIRCLS Glossary of Artificial Intelligence Terms for Educators.11

AI is not one thing but an umbrella term for a growing set of modeling capabilities, as visualized in Figure 3.

Figure 3: Components, types, and subfields of AI based on Regona et al. (2022).12

11 Search for “AI Glossary Educators” to find other useful definitions.

12 Regona, M., Yigitcanlar, T., Xia, B., & Li, R.Y.M. (2022). Opportunities and adoption challenges of AI in the construction industry: A PRISMA review. Journal of Open Innovation: Technology, Market and Complexity, 8(45). https://doi.org/10.3390/joitmc8010045


Perspective: Human-Like Reasoning

“The theory and development of computer systems able to perform tasks normally requiring human intelligence, such as visual perception, speech recognition, learning, decision-making, and natural language processing.” 13

Broad cultural awareness of AI may be traced to the landmark 1968 film “2001: A Space Odyssey”—in which the “Heuristically programmed ALgorithmic” computer, or “HAL,” converses with astronaut Frank. HAL helps Frank pilot the journey through space, a job that Frank could not do on his own. However, when Frank eventually goes outside the spacecraft, HAL takes over control, and this does not end well for Frank. HAL exhibits human-like behaviors, such as reasoning, talking, and acting. Like all applications of AI, HAL can help humans but also introduces unanticipated risks—especially since AI reasons in different ways and with different limitations than people do.

The idea of “human-like” is helpful because it can be a shorthand for the idea that computers now have capabilities that are very different from the capabilities of early edtech applications. Educational applications will be able to converse with students and teachers, co-pilot how activities unfold in classrooms, and take actions that impact students and teachers more broadly. There will be both opportunities to do things much better than we do today and risks that must be anticipated and addressed.

The “human-like” shorthand is not always useful, however, because AI processes information differently from how people process information. When we gloss over the differences between people and computers, we may frame policies for AI in education that miss the mark.

Perspective: An Algorithm that Pursues a Goal

“Any computational method that is made to act independently towards a goal based on inferences from theory or patterns in data.” 14

This second definition emphasizes that AI systems and tools identify patterns and choose actions to achieve a given goal. These pattern recognition capabilities and automated recommendations will be used in ways that impact the educational process, including student learning and teacher instructional decision making. For example, today’s personalized learning systems may recognize signs that a student is struggling and may recommend an alternative instructional sequence. The scope of pattern recognition and automated recommendations will expand.
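The pattern-recognition-plus-recommendation loop described above can be sketched in a few lines of code. The sketch below is a deliberately simplified, hypothetical illustration: the thresholds, field names, and recommendation strings are all invented here, and real adaptive learning systems use far richer statistical models.

```python
# Minimal sketch of "recognize a pattern, recommend an action."
# All thresholds and names are hypothetical illustrations.

def is_struggling(recent_attempts, max_errors=2, max_hints=3):
    """Flag a pattern of struggle: too many errors or hint requests."""
    errors = sum(1 for a in recent_attempts if not a["correct"])
    hints = sum(a["hints_used"] for a in recent_attempts)
    return errors > max_errors or hints > max_hints

def recommend(recent_attempts):
    """Choose a next instructional step based on the detected pattern."""
    if is_struggling(recent_attempts):
        return "review-prerequisite-skill"
    return "continue-current-sequence"

attempts = [
    {"correct": False, "hints_used": 2},
    {"correct": False, "hints_used": 1},
    {"correct": False, "hints_used": 1},
]
print(recommend(attempts))  # → review-prerequisite-skill
```

Even in this toy form, the two-step shape—detect a pattern in data, then map it to an action—is the core of the definition quoted above.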

13 IEEE-USA Board of Directors. (February 10, 2017). Artificial intelligence research, development and regulation. IEEE. http://globalpolicy.ieee.org/wp-content/uploads/2017/10/IEEE17003.pdf

14 Friedman, L., Blair Black, N., Walker, E., & Roschelle, J. (November 8, 2021). Safe AI in education needs you. Association for Computing Machinery blog. https://cacm.acm.org/blogs/blog-cacm/256657-safe-ai-in-education-needs-you/fulltext


Correspondingly, humans must determine the types and degree of responsibility we will grant to technology within educational processes, which is not a new dilemma.

For decades, the lines between the roles of teachers and computers have been discussed in education, for example, in debates using terms such as “computer-aided instruction,” “blended instruction,” and “personalized learning.” Yet, how are instructional choices made in systems that include both humans and algorithms? Today, AI systems and tools are already enabling the adaptation of instructional sequences to student needs, for example, by giving students feedback and hints during mathematics problem solving or foreign language learning. This discussion about the use of AI in classroom pedagogy and student learning will be renewed and will intensify as AI-enabled systems and tools advance in capability and become more ubiquitous.

Let’s start with another simple example. When a teacher says, “Display a map of ancient Greece on the classroom screen,” an AI system may choose among hundreds of maps by noting the lesson objectives, what has worked well in similar classrooms, or which maps have desirable features for student learning. In this case, when an AI system suggests an instructional resource or provides a choice among a few options, the instructor may save time and may focus on more important goals. However, there are also forms of AI-enabled automation that the classroom instructor may reject, for example, enabling an AI system or tool to select the most appropriate and relevant readings for students associated with a historical event. In this case, an educator may choose not to utilize AI-enabled systems or tools given the risk of AI creating false facts (“hallucinating”) or steering students toward inaccurate depictions of historical events found on the internet. Educators will be weighing benefits and risks like these daily.

Computers process theory and data differently than humans do. AI’s success depends on associations or relationships found in the data provided to an algorithm during the AI model development process. Although some associations may be useful, others may be biased or inappropriate. Bad associations in data are a major risk, possibly leading to algorithmic discrimination. Every guardian is familiar with the problem: A person or computer may say, “Our data suggests your student should be placed in this class,” and the guardian may well argue, “No, you are using the wrong data. I know my child better, and they should instead be placed in another class.” This problem is not exclusive to AI systems and tools, but the use of AI models can amplify it, because a recommendation a computer makes from data may appear more objective and authoritative than it is.

Although this perspective can be useful, it can also be misleading. A human view of agency, pursuing goals, and reasoning includes our human abilities to make sense of multiple contexts. For example, a teacher may see three students each make the same mathematical error but recognize that one student has an Individualized Education Program to address vision issues, another misunderstands a mathematical concept, and a third just experienced a frustrating interaction on the playground; the same instructional decision is therefore not appropriate for all three. However, AI systems often lack the data and judgment to appropriately include context as they detect patterns and automate decisions. Further, case studies show that technology can quickly derail from safe to unsafe or from effective to ineffective when the context shifts even slightly. For this and other reasons, people must be involved in goal setting, pattern analysis, and decision-making.15

15 Russell, S. (2019). Human compatible: Artificial intelligence and the problem of control. Viking. ISBN 978-0-525-55861-3.


Perspective: Intelligence Augmentation

“Augmented intelligence is a design pattern for a human-centered partnership model of people and artificial intelligence (AI) working together to enhance cognitive performance, including learning, decision making, and new experiences.” 16

Foundation #1 (above) keeps humans in the loop and positions AI systems and tools to support human reasoning. “Intelligence Augmentation” (IA)17 centers “intelligence” and “decision making” in humans but recognizes that people sometimes are overburdened and benefit from assistive tools. AI may help teachers make better decisions because computers notice patterns that teachers can miss. For example, when a teacher and student agree that the student needs reminders, an AI system may provide reminders in whatever form a student likes without adding to the teacher’s workload. Intelligence Augmentation uses the same basic capabilities of AI, employing associations in data to notice patterns and, through automation, taking actions based on those patterns. However, IA squarely focuses on helping people in the human activities of teaching and learning, whereas AI tends to focus attention on what computers can do.

Definition of “Model”

The above perspectives open a door to making sense of AI. Yet, to assess AI meaningfully, constituents must consider specific models and how they are developed. In everyday usage, the term “model” has multiple meanings. Below we clarify our intended meaning, which is close to that of “mathematical model.” (Note that “model” as used in “AI model” is unlike its use in “model school” or “instructional model”; an AI model is not a singular case created by experts to serve as an exemplar.)

AI models are like financial models: an approximation of reality that is useful for identifying patterns, making predictions, or analyzing alternative decisions. In a typical middle school math curriculum, students use a mathematical model to analyze which of two cell phone plans is better. Financial planners use this type of model to provide guidance on a retirement portfolio. At its heart, AI is a highly advanced mathematical toolkit for building and using models. Indeed, well-known chatbots write complex essays one word at a time: the underlying AI model is a very large statistical model that predicts which word would most likely follow the text written so far, and by repeatedly adding one likely word it produces surprisingly coherent essays.
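The one-word-at-a-time idea can be made concrete with a toy statistical model. The sketch below builds a tiny bigram table (counts of which word follows which) from a few invented sentences and then generates text by repeatedly choosing the most likely next word. The training text and function names are invented for illustration; real chatbots use neural models with billions of parameters, but the generation principle is the same.

```python
from collections import Counter, defaultdict

# Toy "model": a table counting which word follows which (bigrams).
# The training text is invented purely for this illustration.
training_text = (
    "the student reads the book "
    "the teacher reads the lesson "
    "the student reads the notes "
    "the student writes the essay"
)

next_word_counts = defaultdict(Counter)
words = training_text.split()
for current, following in zip(words, words[1:]):
    next_word_counts[current][following] += 1

def predict_next(word):
    """Return the most frequent follower of `word`, or None if unseen."""
    candidates = next_word_counts.get(word)
    if not candidates:
        return None
    return candidates.most_common(1)[0][0]

# Generate text one predicted word at a time, starting from "the".
generated = ["the"]
for _ in range(3):
    nxt = predict_next(generated[-1])
    if nxt is None:
        break
    generated.append(nxt)

print(" ".join(generated))  # → the student reads the
```

The toy model also makes the limits visible: it can only reproduce associations present in its data, which is why the representativeness and quality of training data matter so much.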

When we ask about the model at the heart of an AI system, we begin to get answers to questions such as “What aspects of reality does the model approximate well?” and “How appropriate is it to the decision to be made?”

One could similarly ask about algorithms—the specific decision-making processes that an AI model uses to go from inputs to outputs. One could also ask about the quality of the data used to build the model—for example, how representative is that data? Switching among these three terms—models, algorithms, and data—can become confusing. Because the terms are closely related, we’ve chosen to focus on the concept of AI models. We want to bring to the fore the idea that every AI model is incomplete; it is important to know how well the AI model fits the reality we care about, where the model will break down, and how.

16 Gartner. (n.d.). Gartner glossary: Augmented intelligence. Gartner. https://www.gartner.com/en/information-technology/glossary/augmented-intelligence

17 Engelbart, D.C. (October 1962). Augmenting human intellect: A conceptual framework. SRI Summary Report AFOSR-3223. https://www.dougengelbart.org/pubs/augment-3906.html

Sometimes people avoid talking about the specifics of models to create a mystique. Talking as though AI is unbounded in its potential capabilities and a nearly perfect approximation to reality can convey excitement about the possibilities of the future. The future, however, can be oversold. Similarly, sometimes people stop calling a model AI once its use becomes commonplace, yet such systems are still AI models with all of the risks discussed here. We need to know exactly when and where AI models fail to align with visions for teaching and learning.

Insight: AI Systems Enable New Forms of Interaction

AI models allow computational processes to make recommendations or plans and also enable them to support forms of interaction that are more natural, such as speaking to an assistant. AI-enabled educational systems will be desirable in part due to their ability to support more natural interactions during teaching and learning. In classic edtech platforms, the ways in which teachers and students interact with edtech are limited. Teachers and students may choose items from a menu or answer a multiple-choice question. They may type short answers. They may drag objects on the screen or use touch gestures. The computer provides outputs to students and teachers through text, graphics, and multimedia. Although these forms of inputs and outputs are versatile, no one would mistake this style of interaction for the way two people interact with one another; it is specific to human-computer interaction. With AI, interactions with computers are likely to become more like human-to-human interactions (see Figure 4). A teacher may speak to an AI assistant, and it may speak back. A student may make a drawing, and the computer may highlight a portion of the drawing. A teacher or student may start to write something, and the computer may finish their sentence—as when today’s email programs can complete thoughts faster than we can type them.

Additionally, the possibilities for automated actions that can be executed by AI tools are expanding. Current personalization tools may automatically adjust the sequence, pace, hints, or trajectory through learning experiences.18 Actions in the future might look like an AI system or tool that helps a student with homework19 or a teaching assistant that reduces a teacher’s workload by recommending lesson plans that fit a teacher’s needs and are similar to lesson plans a teacher previously liked.20 Further, an AI-enabled assistant may appear as an additional “partner” in a small group of students who are working together on a collaborative assignment.21 An AI-enabled tool may also help teachers with complex classroom routines.22 For example, a tool may help teachers with orchestrating23 the movement of students from a full class discussion into small groups and making sure each group has the materials needed to start their work.

18 Shemshack, A., & Spector, J.M. (2020). A systematic literature review of personalized learning terms. Smart Learning Environments, 7(33). https://doi.org/10.1186/s40561-020-00140-9

19 Roschelle, J., Feng, M., Murphy, R., & Mason, C.A. (2016). Online mathematics homework increases student achievement. AERA Open, 2(4), 1-12. https://doi.org/10.1177/2332858416673968

20 Celik, I., Dindar, M., Muukkonen, H., & Järvelä, S. (2022). The promises and challenges of artificial intelligence for teachers: A systematic review of research. TechTrends, 66, 616–630. https://doi.org/10.1007/s11528-022-00715-y

21 Chen, C., Park, H.W., & Breazeal, C. (2020). Teaching and learning with children: Impact of reciprocal peer learning with a social robot on children’s learning and emotive engagement. Computers & Education, 150. https://doi.org/10.1016/j.compedu.2020.103836

22 Holstein, K., McLaren, B.M., & Aleven, V. (2019). Co-designing a real-time classroom orchestration tool to support teacher–AI complementarity. Journal of Learning Analytics, 6(2). https://doi.org/10.18608/jla.2019.62.3

Figure 4. Differences that teachers and students may experience in future technologies.

Key Recommendation: Human in the Loop AI

Many have experienced a moment where technology surprised them with an uncanny ability to recommend what feels like a precisely personalized product, song, or even phrase to complete a sentence in a word processor such as the one being used to draft this document. Throughout this supplement, we talk about specific, focused applications where AI systems may bring value (or risks) into education. At no point do we intend to imply that AI can replace a teacher, a guardian, or an educational leader as the custodian of their students’ learning. We talk about the limitations of models in AI and the conversations that educational constituents need to have about what qualities they want AI models to have and how they should be used.

“We can use AI to study the diversity, the multiplicity of effective learning approaches and think about the various models to help us get a broader understanding of what effective, meaningful engagement might look like across a variety of different contexts.”

—Dr. Marcelo Aaron Bonilla Worsley

23 Roschelle, J., Dimitriadis, Y. & Hoppe, U. (2013). Classroom orchestration: Synthesis. Computers & Education, 69, 512-526. https://doi.org/10.1016/j.compedu.2013.04.010


These limitations lead to our first recommendation: that we pursue a vision of AI where humans are in the loop. That means that people are part of the process of noticing patterns in an educational system and assigning meaning to those patterns. It also means that teachers remain at the helm of major instructional decisions. It means that formative assessments involve teacher input and decision making, too. One loop is the cycle of recognizing patterns in what students do and selecting next steps or resources that could support their learning. Other loops involve teachers planning and reflecting on lessons. Response to Intervention is another well-known type of loop.
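The human-in-the-loop pattern described above can be expressed as a simple design sketch: the AI component only proposes, and a person makes the final call. Everything below—the function names, the quiz-score rule, and the override reason—is a hypothetical illustration, not any specific product's behavior.

```python
# Hypothetical sketch of a human-in-the-loop decision pattern:
# the AI component proposes; a person approves or overrides.

def ai_propose(student_data):
    """Stand-in for an AI model's recommendation (invented rule)."""
    if student_data["quiz_score"] < 60:
        return "assign-review-module"
    return "advance-to-next-unit"

def human_in_the_loop(student_data, teacher_decides):
    """The teacher sees the proposal and makes the final decision."""
    proposal = ai_propose(student_data)
    return teacher_decides(proposal, student_data)

def teacher_decides(proposal, student_data):
    # The teacher can override using context the model lacks.
    if student_data.get("had_rough_week"):
        return "retake-quiz-later"  # contextual judgment wins
    return proposal                  # accept the AI proposal

decision = human_in_the_loop(
    {"quiz_score": 55, "had_rough_week": True}, teacher_decides
)
print(decision)  # → retake-quiz-later
```

The key structural point is that the AI's output is an input to a human decision, never the decision itself—mirroring the loops of noticing patterns and selecting next steps described above.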

The idea of humans in the loop is part of broader discussions happening about AI and society, not just AI in education. Interested readers can look for more under terms that ally with humans in the loop, such as human-centered AI, responsible AI, value-sensitive AI, and AI for social good.

Exercising judgment and control in the use of AI systems and tools is an essential part of providing the best opportunity to learn for all students—especially when educational decisions carry consequences. AI does not have the broad qualities of contextual judgment that people do. Therefore, people must remain responsible for the health and safety of our children, for all students’ educational success and preparation for their futures, and for creating a more equitable and just society.

