
Paul's Logs December 8 - 14

Author: Paul Allison

17 additions to document, most recent about 2 years ago

When Why
Dec-08-23 OpenAI's settings
Dec-09-23 Temperature description
Dec-09-23 Intro to the settings
Dec-10-23 Wording change
Dec-10-23 Wording change
Dec-10-23 Wording change
Dec-10-23 Wording change
Dec-10-23 Wording change
Dec-10-23 Wording change
Dec-10-23 Wording change
Dec-10-23 Wording change
Dec-10-23 Wording change
Dec-10-23 Wording change
Dec-10-23 Condensed Intro to the settings
Dec-11-23 Language for settings
Dec-11-23 Temperature and Top_P description
Dec-11-23 Penalty settings

12 changes, most recent about 2 years ago




Adding settings to GPT Thinking Partners and other such projects.



Added December 08, 2023 at 6:35pm by Paul Allison
Title: OpenAI's settings

Advanced Settings


The maximum number of tokens to generate. Should be in the range 0 – 32768.

Paul A (Dec 10 2023 7:50PM): You can control how long the output will be. If you want shorter responses, use a lower number. Higher numbers will cause the GPT Thinking Partner to give expansive and extended replies.
Paul A and/or Bengali Translation Tool (Feb 29 2024 10:53AM): Please translate this?
Bengali Translation Tool (translates text into Bengali, also known as Bangla) with Paul A's edits:

উৎপন্ন করার জন্য সর্বোচ্চ টোকেন সংখ্যা। এটি ০ থেকে ৩২৭৬৮ পর্যন্ত হওয়া উচিত।

আপনি কন্ট্রোল করতে পারেন যে আউটপুটের দৈর্ঘ্য কেমন হবে। আপনি যদি ছোট উত্তর চান, তবে কম সংখ্যা ব্যবহার করুন। বেশি সংখ্যাগুলি GPT থিংকিং পার্টনারকে বৃহত্তর এবং প্রসারিত উত্তর দিতে প্রেরণা করবে।

Paul A and/or Translation Tool (Feb 29 2024 4:34PM): English please
Translation Tool (translates text into any language you choose) with Paul A's edits:

The maximum number of tokens to generate. Should be in the range 0 – 32768.

You can control how long the output will be. If you want shorter answers, then use a smaller number. Larger numbers will motivate the GPT thinking partner to provide larger and more elaborated answers.

Paul A and/or Translation Tool (Feb 29 2024 4:25PM): Bengali please
Translation Tool with Paul A's edits:

প্রজন্মের জন্য সর্বাধিক টোকেনের সংখ্যা। এর পরিসীমা হওয়া উচিত ০ – ৩২৭৬৮।

আপনি আউটপুটের দৈর্ঘ্য নিয়ন্ত্রণ করতে পারেন। যদি আপনি ছোট উত্তর চান, তবে কম সংখ্যা ব্যবহার করুন। বৃহত্তর সংখ্যা GPT থিঙ্কিং পার্টনারকে প্রসারিত এবং বিস্তৃত উত্তর দিতে বাধ্য করবে।

Paul A and/or Translation Tool (Feb 29 2024 4:28PM): English please
Translation Tool with Paul A's edits:

The maximum number of tokens to generate. It should be within the range of 0 – 32768.

You can control the length of the output. If you want a short answer, use a lower number. A larger number will force the GPT thinking partner to give extended and elaborate answers.

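To see where this maximum-token setting is actually supplied, here is a minimal sketch using the OpenAI Python SDK; the model name and prompt are illustrative, and an API key is assumed to be configured in the environment.

from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# max_tokens caps the length of the reply; ~150 tokens is roughly a
# paragraph, while a value near 4096 allows many paragraphs.
response = client.chat.completions.create(
    model="gpt-4-1106-preview",  # illustrative model name
    messages=[{"role": "user", "content": "Summarize the water cycle."}],
    max_tokens=150,
)
print(response.choices[0].message.content)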

Up to 4 sequences where the API will stop generating further tokens.

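As a sketch of how stop sequences are passed (same assumptions as the sketch above: OpenAI Python SDK, illustrative model name and prompt):

from openai import OpenAI

client = OpenAI()

# Up to 4 stop sequences may be given; generation halts as soon as one
# would be produced, and the stop text itself is not included in the reply.
response = client.chat.completions.create(
    model="gpt-4-1106-preview",
    messages=[{"role": "user", "content": "List exactly three study tips."}],
    stop=["4.", "\n\n"],
)
print(response.choices[0].message.content)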

What sampling temperature to use, between 0 and 2. Higher values like 0.8 will make the output more random, while lower values like 0.2 will make it more focused and deterministic. Default 1.

Paul A and/or Level Adapter Tool (Dec 08 2023 6:38PM): Can you simplify this for me?
Level Adapter Tool (adapts the text to the user's reading level; add your grade level or reading level to the optional descriptive note) with Paul A's edits:

I’ll be rewriting the text for an 8th-grade reading level, which is between the Lexile levels of 1010L and 1150L.

Here’s the simplified text:

When you’re picking a temperature for sampling, you have a choice from 0 to 2. Think of sampling temperature like setting how creative or predictable the output should be. If you choose a higher number, like 0.8, the result is going to feel more surprising and random. But if you pick a lower one, like 0.2, the output will be more consistent and it’ll stick more to what it knows, like following a recipe closely. The usual setting is 1.

Keywords not used in the rewrite:

1. Sampling – Sampling usually means collecting a small part of something so you can learn about the whole thing.
2. Deterministic – Deterministic is when outcomes are totally decided by what happened before, like a cause-and-effect situation.
3. Default – Default is what the setting is when you don’t choose to change it.

Now that you’ve read the simplified version and know what the keywords mean, you can try reading the original text again. It might be easier to understand now.

Paul A and/or Summarizer Tool (Dec 09 2023 2:12PM): Please condense and simplify this text
Summarizer Tool (provides an expanded summary, written for an 8th grader) with Paul A's edits:

Hey there! So, you know how when you’re playing a video game, you can adjust the difficulty setting? Well, imagine you can do something similar when you’re asking a really smart computer to create stuff for you. There’s a thing called ‘sampling temperature,’ and it ranges from 0 all the way up to 2.

Think of this like how spicy you want your food. A low temperature, like 0.2, means the computer plays it safe—like ordering just a tiny bit of chili. The stuff it comes up with won’t be too wild and will mostly make a lot of sense, sticking to what it knows best—kind of like how you stick to the rules when you first learn to play a game.

On the other hand, a higher temperature, like 0.8, is like asking for a lot of spice. It can lead to some unexpected flavors, or in the computer’s case, really weird and wacky ideas. The higher the number, the more it feels like the computer is taking a walk on the wild side, thinking way outside the box to surprise you.

Normally, the temperature is set right in the middle at 1—kind of like medium-spiced salsa, not too crazy but not too boring, either. It’s the best of both worlds, being creative while still making sense.

It’s like you’re the chef and the computer is your kitchen. You get to decide how adventurous you want to be with today’s special dish!

Now that this has been broken down, take a second look at the original explanation with this culinary and gaming adventure in mind. How does that change your understanding? If you think of any other cool ways to look at it, definitely throw your ideas into the mix!

Paul A and/or Summarizer Tool (Dec 09 2023 2:17PM): I need this to be 75 words. Condense it to 75 words or less.
Summarizer Tool with Paul A's edits:

When you ask a smart computer to make or predict stuff, there’s a “spiciness” setting called sampling temperature. A low setting, like 0.2, means it’s cautious and predictable, a bit boring. A high setting, like 1.8, is when things get wild and super random. The middle setting, 1, is just right—not too predictable, not too crazy. Imagine you’re a chef deciding how zesty you want tonight’s dish to be!

Now, look back at the original details—as if you’re adjusting the spice level in your favorite game or food—and think about how it shapes the computer’s creativity. If you have more thoughts or a different way of seeing it, share your ideas with me!

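To make the “spiciness” dial concrete, here is a sketch (OpenAI Python SDK; illustrative model name and prompt) that asks the same question at a cautious setting and a wild one:

from openai import OpenAI

client = OpenAI()
question = [{"role": "user", "content": "Invent a name for a pet robot."}]

# temperature=0.2: focused and repeatable, like following a recipe closely.
safe = client.chat.completions.create(
    model="gpt-4-1106-preview", messages=question, temperature=0.2)

# temperature=1.8: surprising and random, sometimes incoherent.
wild = client.chat.completions.create(
    model="gpt-4-1106-preview", messages=question, temperature=1.8)

print(safe.choices[0].message.content)
print(wild.choices[0].message.content)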

An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass. So 0.1 means only the tokens comprising the top 10% probability mass are considered. We generally recommend altering this or temperature but not both. Should be in the range 0 – 1. Default 1.

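Here is the same kind of sketch for nucleus sampling (OpenAI Python SDK; illustrative names); following the advice above, temperature is left at its default when top_p is changed:

from openai import OpenAI

client = OpenAI()

# top_p=0.1: sample only from the tokens making up the top 10% of
# probability mass; temperature is deliberately not set.
response = client.chat.completions.create(
    model="gpt-4-1106-preview",
    messages=[{"role": "user", "content": "Write a slogan for a bake sale."}],
    top_p=0.1,
)
print(response.choices[0].message.content)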

Number between -2.0 and 2.0. Positive values penalize new tokens based on their existing frequency in the text so far, decreasing the model's likelihood to repeat the same line verbatim. Default 0.

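A sketch of this frequency penalty in a call (same assumptions as the earlier sketches):

from openai import OpenAI

client = OpenAI()

# frequency_penalty=1.0: the more often a token has already appeared,
# the less likely the model is to produce it again, which discourages
# repeating the same line verbatim.
response = client.chat.completions.create(
    model="gpt-4-1106-preview",
    messages=[{"role": "user", "content": "Write a chant for a pep rally."}],
    frequency_penalty=1.0,
)
print(response.choices[0].message.content)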

Number between -2.0 and 2.0. Positive values penalize new tokens based on whether they appear in the text so far, increasing the model's likelihood to talk about new topics. Default 0.

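And the matching sketch for the presence penalty (same assumptions):

from openai import OpenAI

client = OpenAI()

# presence_penalty=1.0: any token that has appeared at all is penalized
# equally, nudging the model toward new words and new topics.
response = client.chat.completions.create(
    model="gpt-4-1106-preview",
    messages=[{"role": "user", "content": "Brainstorm field trip ideas."}],
    presence_penalty=1.0,
)
print(response.choices[0].message.content)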

This feature is in Beta. If specified, our system will make a best effort to sample deterministically, such that repeated requests with the same seed and parameters should return the same result. Determinism is not guaranteed, and you should refer to the system_fingerprint response parameter to monitor changes in the backend. Should be in the range 0 – 2147483647.

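A sketch of how the seed and the system_fingerprint response field work together (OpenAI Python SDK; illustrative model name and seed value):

from openai import OpenAI

client = OpenAI()

# Repeating this exact request with the same seed and parameters should,
# on a best-effort basis, return the same result.
response = client.chat.completions.create(
    model="gpt-4-1106-preview",
    messages=[{"role": "user", "content": "Pick a random fruit."}],
    seed=12345,
)

# If this fingerprint changes between requests, the backend changed and
# determinism across those requests should not be expected.
print(response.system_fingerprint)
print(response.choices[0].message.content)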


Added December 09, 2023 at 5:27pm by Paul Allison
Title: Temperature description

When you create a GPT Thinking Partner, there’s a “spiciness” setting called Temperature. A low setting, like 0.2, means the GPT Thinking Partner will be cautious and predictable, a bit boring. A high setting, like 1.8, is when things get wild and super random. The middle setting, 1.0, is just right—not too predictable, not too crazy. Note that you can choose any of these decimal points: 0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9, 1.0, 1.1, 1.2, 1.3, 1.4, 1.5, 1.6, 1.7, 1.8, 1.9, 2.0.



Added December 09, 2023 at 11:36pm by Paul Allison
Title: Intro to the settings

A GPT Thinking Partner gets its life from a Large Language Model (LLM) taking in the text that a user selects, the text from the prompt, and the text from the user's question or statement and their (optional) description. The LLM answers the user's question and follows the instructions in the prompt to carry out tasks with the selected text by recognizing the knowledge required and by making a list of predictions for what words might be a useful response, then more and more predictions of what might come next. Some of the predicted strings of text appear far more often in the database than others, which are rarely seen. After listing and ordering (from most to least likely) some or all (depending on the Top p setting) of the possible next strings of text, the LLM more or less randomly chooses one of them. This is a complex process in which layers of text dynamically shape each other's predictions and decisions until a meaningful, conversational output is produced. The prompt and the advanced settings here determine just how many predictions are considered and how random the choice is. A GPT Thinking Partner guides the LLM in deciding which predictions it puts on the list and how it chooses. This is how the prompt and these settings shape the output of the LLM.

Paul A and/or Summarizer Tool (Dec 10 2023 4:16PM): Please simplify this for a sixth grader. Condense this to 75 words or less.
Summarizer Tool with Paul A's edits:

Hey there! Imagine your computer is like a super smart helper that can write and chat with you. This helper, called a GPT Thinking Partner, is really a big bunch of information that can read what you type and figure out the best way to reply. It thinks about everything you say and ask, and then makes guesses about what to say back. Sometimes there are lots of different guesses, and it has to pick one. How it picks one and how many guesses it thinks about can change based on some special settings. So, what we tell it affects what it tells us back. Now, try to read the original text again and let me know if you think this made it a bit easier to understand, and if you have anything else to add!

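To ground the intro's picture of listing, ordering, and more-or-less randomly choosing strings of text, here is a toy, self-contained sketch in plain Python. The word list and scores are invented for illustration; this is not the real model's code, just the shape of the idea behind the Temperature and Top p settings.

import math
import random

# Invented scores for possible next words (higher = seen more often).
scores = {"story": 5.0, "poem": 4.2, "essay": 3.1, "recipe": 1.0, "banana": 0.2}

def sample_next(scores, temperature=1.0, top_p=1.0):
    # Temperature rescales the scores: low values sharpen the distribution
    # (the favorite almost always wins), high values flatten it.
    weights = {w: math.exp(s / temperature) for w, s in scores.items()}
    total = sum(weights.values())
    ranked = sorted(((w, v / total) for w, v in weights.items()),
                    key=lambda pair: -pair[1])
    # Top p keeps only the most likely words until their combined
    # probability reaches top_p, then renormalizes over the survivors.
    kept, running = [], 0.0
    for word, prob in ranked:
        kept.append((word, prob))
        running += prob
        if running >= top_p:
            break
    z = sum(p for _, p in kept)
    words = [w for w, _ in kept]
    return random.choices(words, weights=[p / z for _, p in kept])[0]

print(sample_next(scores, temperature=0.2))             # almost always "story"
print(sample_next(scores, temperature=1.8))             # far more varied
print(sample_next(scores, temperature=1.0, top_p=0.5))  # only the likeliest words survive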


Added December 10, 2023 at 7:46pm by Paul Allison
Title: Condensed Intro to the settings

Imagine your computer has a team of super smart helpers that can read, write, and chat with you. These helpers, called GPT Thinking Partners, rely on a large database that can understand any text you give it and figure out the best way to reply. It thinks about everything you say and ask, and then makes guesses about what to say back. Sometimes there are lots of different guesses, and it has to pick one. How it picks one and how many guesses it thinks about can change based on some of the special settings you see here. So, both what we tell it to be like and to do in our prompts and how we mark these settings affects what it tells us back.



Added December 11, 2023 at 2:41pm by Paul Allison
Title: Language for settings

(Image: screenshot of the settings panel)

Introduction
Paul A (Dec 11 2023 2:42PM): Please add this language [Edited]

Imagine your computer has a team of super smart helpers that can read, write, and chat with you. These helpers, called GPT Thinking Partners, rely on a large database that can understand any text you give it and figure out the best way to reply. It thinks about everything you say and ask, and then makes guesses about what to say back. Sometimes there are lots of different guesses, and it has to pick one. How it picks one and how many guesses it thinks about can change based on some of the special settings you see here. So, both what we tell it to be like and to do in our prompts and how we mark these settings affects what it tells us back.

Max Response Length

Paul A (Dec 11 2023 2:55PM): The text for under this slider. [Edited]

Use this to control how much the GPT Thinking Partner says in its reply. Do you want its responses to be a paragraph long (150 output tokens) or many paragraphs long (4096 output tokens)?


Added December 11, 2023 at 2:59pm by Paul Allison
Title: Temperature and Top_P description

(Image: screenshot of the Temperature and Top p sliders)

Temperature description

Paul A (Dec 11 2023 3:01PM): Use this under the Temperature slider.

When you create a GPT Thinking Partner, there’s a “spiciness” setting called Temperature. A low setting, like 0.2, means the GPT Thinking Partner will be cautious and predictable, a bit boring. A high setting, like 1.8, is when things get wild and super random. The middle setting, 1.0, is just right—not too predictable, not too crazy.

Top p

Paul A (Dec 11 2023 3:10PM): Use this text under the Top p slider: [Edited]

You can also control how creative a GPT Thinking Partner will be by setting its Top p. A low setting, like 0, means the GPT Thinking Partner will only make a few, more likely guesses when it responds. At a high setting, like 1.0, it will guess from a longer list of possible responses. We generally recommend altering either this or Temperature, but not both at the same time. The default is 1.


Added December 11, 2023 at 3:13pm by Paul Allison
Title: Penalty settings

(Image: screenshot of the penalty sliders)

Presence penalty

Paul A (Dec 11 2023 3:20PM): text for below the slider [Edited]

Use positive numbers between 0 and 2.0 to encourage your GPT Thinking Partner to talk about new topics and generate more diverse and creative output. Negative numbers, between -2.0 and 0, will give your GPT Thinking Partner more consistent, predictable responses.

Frequency penalty

Paul A (Dec 11 2023 3:28PM): text for under the slider:

Use positive numbers between 0 and 2.0 to encourage your GPT Thinking Partner to find more and more creative ways of saying things. Negative numbers, between -2.0 and 0, will make your GPT Thinking Partner more focused and clear, but also more repetitive.


General Document Comments

Paul A and/or Reflection Coach (Feb 06 2024 7:33AM): Help me with my reflective writing
Reflection Coach (provides a reflection on the selected writing, inferring the writer's p...) with Paul A's edits:

Certainly, let’s delve into the process of reflecting on your composition of the selected text: “Adding settings to GPT Thinking Partners and other such projects.”

Firstly, you should revisit the text and examine its structure, style, and themes. Look especially at its conciseness or brevity and what that might suggest about the intention and expected audience of the text. Is it meant to be informational, proposing an idea, or is it a directive for a specific audience? Your reflection should consider these aspects.

Begin your reflection by paraphrasing the essence of your text: “You intended to address the enhancement of a chatbot project by implementing additional settings.” Then, dive into its structure. Was the sentence constructed to be straightforward for functional purposes, or could it be stylistically richer?

Next, try to reconstruct the process you might have gone through while crafting this piece. Were there bullet points or an outline? Did you do research on the topic, or did you rely on pre-existing knowledge? Imagine revisiting a moment of pause where you chose each word deliberately, considering how it contributes to your overall goal. Reflect upon any challenges overcome, such as finding the most accurate terminology or ensuring clarity for potential readers who may interact with your text.

In the course of reviewing your writing, identify the cognitive strategies you deployed. For instance, were you “Thinking and communicating with clarity and precision,” making sure that every addition was necessary, clear, and effectively communicated? Did you engage in “Metacognition,” thinking about your own thinking and possibly evaluating if the way you articulated your message aligned with your intent? Maybe you also had to “Apply past knowledge to new situations,” especially if you drew upon prior technical experience to write about “GPT Thinking Partners.”

Discuss how each identified Habit of Mind played a role in your writing and how these habits are essential in other areas of your life. Giving personal examples, such as how striving for accuracy has served you in both writing and coding practices, might provide a holistic view of your skill set. Connect these habits to larger life skills, such as critical thinking or problem-solving, and reflect on how practicing them through writing enhances your ability in these domains.

As you prepare to write your in-depth reflection, remember how the reflective practice is akin to looking in a mirror, not just to see what is apparent but to understand what lies beneath. This practice allows you to grow as both a writer and a thinker. Allow your reflection to be more than a series of statements. Instead, let it be a journey through your thought process, decisions, and the personal learning that took place.

I invite you to let your mind wander, rereading your text and weighing every word and its purpose. From there, let that introspective path wind its way to your keyboard, and in the reply box, paint me a detailed landscape of your internal dialogue and decision-making trees during the writing process. Reflecting is not just part of learning—it’s part of becoming more self-aware, and it begins with the first word you write down. I look forward to seeing the horizons of your thoughts unfurl.
