LLMs: Prompts and Risks

Author: Dr. Ethan Mollick and Dr. Lilach Mollick

"LLMs: Prompts and Risks" Mollick, Ethan R. and Mollick, Lilach, Assigning AI: Seven Approaches for Students, with Prompts, pp 3-5 (June 12, 2023). Available at SSRN: https://ssrn.com/abstract=4475995 or http://dx.doi.org/10.2139/ssrn.4475995


Abstract:

This paper examines the transformative role of Large Language Models (LLMs) in education and their potential as learning tools, despite their inherent risks and limitations. The authors propose seven approaches for utilizing AI in classrooms: AI-tutor, AI-coach, AI-mentor, AI-teammate, AI-tool, AI-simulator, and AI-student, each with distinct pedagogical benefits and risks. The aim is to help students learn with and about AI, with practical strategies designed to mitigate risks such as complacency about the AI’s output, errors, and biases. These strategies promote active oversight, critical assessment of AI outputs, and complementation of AI's capabilities with the students' unique insights. By challenging students to remain the "human in the loop", the authors aim to enhance learning outcomes while ensuring that AI serves as a supportive tool rather than a replacement. The proposed framework offers a guide for educators navigating the integration of AI-assisted learning in classrooms.


Large Language Models (LLMs), such as OpenAI’s ChatGPT and Anthropic’s Claude, have ushered in a transformative period in educational practices, providing innovative, useful tools while also threatening traditional effective approaches to education (Walton Family Foundation, 2023; U.S. Department of Education, 2023). Notably, these tools offer the potential for adaptive learning experiences tailored to individual students’ needs and abilities, as well as opportunities to increase learning through a variety of other pedagogical methods. Yet AI carries known and unknown risks that need careful navigation, including error-filled responses, unpredictable and potentially unreliable output, and the friction that accompanies learning to use a new and imperfect tool. Additionally, while AI has the potential to help students learn, its ability to quickly complete writing tasks, summarize information, provide outlines, analyze information, and draw conclusions may mean that students will not learn these valuable skills themselves. To reap the rewards of its potential, activate hard thinking, and protect against its risks, educators should play an active role in teaching students how and when to use AI as they instill best practices in AI-assisted learning.

We have previously suggested ways that AI can be used to help instructors teach (Mollick and Mollick, 2023) and ways in which AI can be used to generate assignments (Mollick and Mollick, 2022); now we address the most direct way to use AI in classrooms: assigning AI use to students. Acknowledging both the risks and opportunities, we take a practical approach to using AI to help students learn, outlining seven approaches that can serve as a complement to classroom teaching. These approaches serve a dual purpose: to help students learn with AI and to help them learn about AI (U.S. Department of Education, 2023). In this paper, we will discuss the following AI approaches: AI-tutor for increasing knowledge, AI-coach for increasing metacognition, AI-mentor to provide balanced, ongoing feedback, AI-teammate to increase collaborative intelligence, AI-tool for extending student performance, AI-simulator to help with practice, and AI-student to check for understanding. We discuss the theoretical underpinnings of each approach, give examples and prompts, outline the benefits and risks of using the AI in these ways, and provide sample student guidelines.

While our guidelines for students differ with each approach, in each set of guidelines we focus on helping students harness the upsides while actively managing the downsides and risks of using AI. Some of those downsides are well documented, others less so; in particular, our guidelines are designed to keep students from developing a sense of complacency about the AI’s output and to help them use its power to increase their capacity to produce stellar work. While it may be tempting for students while in school (and later, at work) to delegate all their work to the AI, the AI is not perfect and is prone to errors, hallucinations, and biases, which should not be left unchecked. Our guidelines challenge students to remain the “human in the loop” and maintain that students are not only responsible for their own work but should actively oversee the AI’s output, check it against reliable sources, and complement any AI output with their unique perspectives and insights. Our aim is to encourage students to critically assess and interrogate AI outputs, rather than passively accept them. This approach helps sharpen their skills while having the AI serve as a supportive tool for their work, not a replacement. Although the AI’s output might be deemed “good enough,” students should hold themselves to a higher standard and be accountable for their AI use.

TABLE 1: SUMMARY OF SEVEN APPROACHES

| AI USE | ROLE | PEDAGOGICAL BENEFIT | PEDAGOGICAL RISK |
| --- | --- | --- | --- |
| MENTOR | Providing feedback | Frequent feedback improves learning outcomes, even if all advice is not taken. | Not critically examining feedback, which may contain errors. |
| TUTOR | Direct instruction | Personalized direct instruction is very effective. | Uneven knowledge base of AI. Serious confabulation risks. |
| COACH | Prompt metacognition | Opportunities for reflection and regulation, which improve learning outcomes. | Tone or style of coaching may not match student. Risks of incorrect advice. |
| TEAMMATE | Increase team performance | Provide alternate viewpoints, help learning teams function better. | Confabulation and errors. “Personality” conflicts with other team members. |
| STUDENT | Receive explanations | Teaching others is a powerful learning technique. | Confabulation and argumentation may derail the benefits of teaching. |
| SIMULATOR | Deliberate practice | Practicing and applying knowledge aids transfer. | Inappropriate fidelity. |
| TOOL | Accomplish tasks | Helps students accomplish more within the same time frame. | Outsourcing thinking, rather than work. |

LLMs: Prompts and Risks

Before going into the details of each approach, we will first discuss both prompting in general and the risks associated with AI use.

We provide sample prompts for every AI use case. Prompts are simply the text given to the LLM in order to produce an output. The prompts outlined in this paper are only suggestions; each classroom is different and has different needs. Whether and how educators use these approaches depends on their specific context, and educators can experiment by building their own prompts; for each approach we outline a set of directions for doing so. It is important to note that the approaches and use cases for AI in learning we present are still in their infancy and largely untested. Large Language Models hold tremendous potential for increasing learning and providing personalized instruction, but we must approach these practices with a spirit of experimentation, discerning through trial and error which methods yield the most effective outcomes for student learning in individual classrooms. Also note that not all prompts work for all LLMs. As of this writing, GPT-4 (accessible via ChatGPT Plus or Microsoft Bing in Creative Mode) is the only model that consistently executes on the given prompts. See Appendix A.
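For educators who want to deliver a prompt programmatically rather than by pasting it into a chat interface, the sketch below shows the basic mechanics of sending a prompt to an LLM through an API. It is a minimal illustration only, assuming the OpenAI Python client and an API key in the environment; the model name and prompt wording are placeholders for experimentation, not the prompts provided in this paper or its appendix.

```python
# Minimal sketch (assumption): sending a classroom-style prompt to an LLM via
# the OpenAI Python client. The prompt text is a placeholder for illustration,
# not one of the prompts from Appendix A.
from openai import OpenAI

client = OpenAI()  # expects OPENAI_API_KEY to be set in the environment

# A "prompt" is simply the text given to the model; here it is split into a
# system message (the role the AI should play) and a user message (the task).
response = client.chat.completions.create(
    model="gpt-4",  # placeholder; use whichever model your institution provides
    messages=[
        {
            "role": "system",
            "content": (
                "You are a friendly tutor. Ask the student what topic they want "
                "to study and what they already know before explaining anything."
            ),
        },
        {"role": "user", "content": "I want to review the causes of the French Revolution."},
    ],
)

print(response.choices[0].message.content)
```

In practice, the wording of the system message is where most of the experimentation happens; small changes to the role, constraints, or sequence of questions can noticeably change how the model behaves with students.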

It is also important to note that there are multiple risks associated with AI. For the purposes of this paper, we will not discuss the long-term risks of AI development or the ethics by which AI systems are trained. Instructors will need to consider these factors before using AI in a classroom setting, and should ensure that they are educating students about these AI risks. In addition to these general risks, there are specific concerns in classroom use, including:

Confabulation Risks: Large Language Models are prone to producing incorrect but plausible-sounding facts, a phenomenon known as confabulation or hallucination. These errors can be deeply woven into the AI’s outputs and can be hard to detect. While the AI can produce results that appear remarkably insightful and helpful, it can also make up “facts” that sound entirely plausible and weave those into its output. While different LLMs have different rates of these errors (in general, GPT-4 and Bing have the lowest error rates), they are most common when asking for quotes, sources, citations, or other detailed information. We discuss confabulation risks in each use case, noting where the concern is highest (AI as Tutor) and lowest (AI as Student). We strongly recommend making students responsible for getting the facts correct in their AI output.

Bias Risks: LLMs are trained on a vast amount of text and then receive additional training from humans to create guardrails on their output. Both of these processes may introduce biases, which can range from gender and racial biases to biases against particular viewpoints, approaches, or political affiliations. Each LLM has the potential for its own set of biases, and those biases can be subtle. Instructors need to consider potential biases before using LLMs.

Privacy Risks: When data is entered into the AI, it can be used for future training by the organizations developing the AI. While ChatGPT offers a privacy mode that claims not to use input for future AI training, the current state of privacy remains unclear for many models, and the legal implications are often uncertain as well. Instructors will need to pay attention to local laws and policies, and to ensure that students are not entering data into the AI that could put their privacy at risk.

Instructional Risks: AIs can be very convincing and have strong “viewpoints” about facts and theories that the models “believe” are correct. Because they are so convincing, they could potentially undercut classroom learning by teaching material that is not part of established curricula. And while we offer specific suggestions in this paper about prompts that might improve learning, there remains a substantial risk that students will use AI as a crutch, undermining learning.

If you decide to use any of the methods we outline, please be aware of these risks and balance them against the learning opportunities that make the most sense in your classroom. If you are assigning AI use in class, you will want to allow students to opt out of AI assignments. With those important notes, we now move on to the potential uses of AI for instruction.
