| When | Why |
|---|---|
| Dec-08-23 | OpenAI's settings |
| Dec-09-23 | Temperature description |
| Dec-09-23 | Intro to the settings |
| Dec-10-23 | Wording change |
| Dec-10-23 | Wording change |
| Dec-10-23 | Wording change |
| Dec-10-23 | Wording change |
| Dec-10-23 | Wording change |
| Dec-10-23 | Wording change |
| Dec-10-23 | Wording change |
| Dec-10-23 | Wording change |
| Dec-10-23 | Wording change |
| Dec-10-23 | Wording change |
| Dec-10-23 | Condensed Intro to the settings |
| Dec-11-23 | Language for settings |
| Dec-11-23 | Temperature and Top_P description |
| Dec-11-23 | Penalty settings |
Adding settings to GPT Thinking Partners and other such projects.
Added December 08, 2023 at 6:35pm
by Paul Allison
Title: OpenAI's settings
Max tokens: The maximum number of tokens to generate. Should be in the range 0 – 32768.
Stop sequences: Up to 4 sequences where the API will stop generating further tokens.
Temperature: What sampling temperature to use, between 0 and 2. Higher values like 0.8 will make the output more random, while lower values like 0.2 will make it more focused and deterministic. Default 1.
Top_p: An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass. So 0.1 means only the tokens comprising the top 10% probability mass are considered. We generally recommend altering this or temperature but not both. Should be in the range 0 – 1. Default 1.
Frequency penalty: Number between -2.0 and 2.0. Positive values penalize new tokens based on their existing frequency in the text so far, decreasing the model's likelihood to repeat the same line verbatim. Default 0.
Presence penalty: Number between -2.0 and 2.0. Positive values penalize new tokens based on whether they appear in the text so far, increasing the model's likelihood to talk about new topics. Default 0.
Seed: This feature is in Beta. If specified, our system will make a best effort to sample deterministically, such that repeated requests with the same seed and parameters should return the same result. Determinism is not guaranteed, and you should refer to the system_fingerprint response parameter to monitor changes in the backend. Should be in the range 0 – 2147483647.
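The ranges above can be sanity-checked in code before a request is sent. This is a minimal sketch for illustration only, not OpenAI's own validation; the function name and the dictionary layout are assumptions.

```python
# Hypothetical helper: check that each setting falls in the range
# documented above before passing it along to the API.
def validate_settings(settings):
    ranges = {
        "max_tokens": (0, 32768),
        "temperature": (0, 2),
        "top_p": (0, 1),
        "frequency_penalty": (-2.0, 2.0),
        "presence_penalty": (-2.0, 2.0),
        "seed": (0, 2147483647),
    }
    for name, value in settings.items():
        if name == "stop":
            # Up to 4 stop sequences are allowed.
            if len(value) > 4:
                raise ValueError("at most 4 stop sequences")
            continue
        low, high = ranges[name]
        if not (low <= value <= high):
            raise ValueError(f"{name}={value} is outside {low}..{high}")
    return settings
```

Settings left out of the dictionary simply fall back to their defaults (Temperature 1, Top_p 1, penalties 0).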
Added December 09, 2023 at 5:27pm
by Paul Allison
Title: Temperature description
When you create a GPT Thinking Partner, there’s a “spiciness” setting called Temperature.
A low setting, like 0.2, means the GPT Thinking Partner will be cautious and predictable, a bit boring.
A high setting, like 1.8, is when things get wild and super random.
The middle setting, 1.0, is just right—not too predictable, not too crazy.
Note that you can choose any of these decimal values: 0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9, 1.0, 1.1, 1.2, 1.3, 1.4, 1.5, 1.6, 1.7, 1.8, 1.9, 2.0.
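Under the hood, temperature rescales the model's word probabilities before one is chosen. Here is a minimal sketch of that rescaling (the standard softmax-with-temperature calculation, with made-up scores; not OpenAI's internal code):

```python
import math

def temperature_probs(logits, temperature):
    # Divide each raw score by the temperature: a low temperature
    # sharpens the distribution (cautious, predictable), a high one
    # flattens it (wild, more random).
    scaled = [score / temperature for score in logits]
    peak = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(s - peak) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]
```

With scores [2.0, 1.0, 0.1], a 0.2 temperature gives the top word almost all the probability, while 1.8 spreads it out so less likely words get real chances.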
Added December 09, 2023 at 11:36pm
by Paul Allison
Title: Intro to the settings
A GPT Thinking Partner gets its life from a Large Language Model (LLM) taking in the text that a user selects, the text from the prompt, and the text from the user's question or statement and their (optional) description. The LLM answers the user's question and follows the instructions in the prompt to carry out tasks with the selected text by recognizing the knowledge required and by making a list of predictions for what words might be a useful response, then more and more predictions of what might come next.
Some of the predicted strings of text are seen often in the database, while others appear only rarely.
After listing and ordering (from most to least likely) some or all of the possible next strings of text (depending on the Top_p setting), the LLM more or less randomly chooses one of them.
This is a complex process in which layers of text dynamically impact each other's predictions and decision making until a meaningful conversational output is produced.
The prompt and the advanced settings here determine just how many predictions are considered and how random the choice is.
A GPT Thinking Partner guides the LLM in deciding which predictions it puts on the list and how it chooses. In this way, the prompt and these settings shape the output of the LLM.
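The list-order-choose process described above is essentially nucleus (Top_p) sampling, and can be sketched as follows. The candidate words and their probabilities here are invented for illustration; this is not the LLM's actual code.

```python
import random

def nucleus_choice(candidates, top_p, rng=random):
    # candidates maps each possible next string of text to its probability.
    # 1. Order from most to least likely.
    ordered = sorted(candidates.items(), key=lambda kv: kv[1], reverse=True)
    # 2. Keep only the top choices whose probabilities add up to top_p.
    kept, cumulative = [], 0.0
    for token, prob in ordered:
        kept.append((token, prob))
        cumulative += prob
        if cumulative >= top_p:
            break
    # 3. Choose more or less randomly among what survives,
    #    still weighted by likelihood.
    tokens = [t for t, _ in kept]
    weights = [p for _, p in kept]
    return rng.choices(tokens, weights=weights)[0]
```

With a low Top_p only the likeliest guesses survive step 2, so the choice in step 3 is nearly deterministic; with Top_p at 1.0 the whole list stays in play.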
Added December 10, 2023 at 7:46pm
by Paul Allison
Title: Condensed Intro to the settings
Imagine your computer has a team of super smart helpers that can read, write, and chat with you. These helpers, called GPT Thinking Partners, rely on a large database that can understand any text you give it and figure out the best way to reply. It thinks about everything you say and ask, and then makes guesses about what to say back. Sometimes there are lots of different guesses, and it has to pick one. How it picks one and how many guesses it thinks about can change based on some of the special settings you see here. So, both what we tell it to be like and to do in our prompts and how we mark these settings affects what it tells us back.
Added December 11, 2023 at 2:41pm
by Paul Allison
Title: Language for settings
Added December 11, 2023 at 2:59pm
by Paul Allison
Title: Temperature and Top_P description
Added December 11, 2023 at 3:13pm
by Paul Allison
Title: Penalty settings
Paul Allison is a nationally-known educator and EdTech expert… (more)
The maximum number of tokens to generate. Should be in the range 0 – 32768.
You can control how long the output will be. If you want shorter answers, then use a smaller number. Larger numbers will motivate the GPT thinking partner to provide larger and more elaborated answers.
I’ll be rewriting the text for an 8th-grade reading level, which is between the lexile levels of 1010L to 1150L.
Here’s the simplified text:
When you’re picking a temperature for sampling, you have a choice from 0 to 2. Think of sampling temperature like setting how creative or predictable the output should be. If you choose a higher number, like 0.8, the result is going to feel more surprising and random. But if you pick a lower one, like 0.2, the output will be more consistent and it’ll stick more to what it knows, like following a recipe closely. The usual setting is 1.
Keywords not used in the rewrite:
1. Sampling – Sampling usually means collecting a small part of something so you can learn about the whole thing.
2. Deterministic – Deterministic is when outcomes are totally decided by what happened before, like a cause-and-effect situation.
3. Default – Default is what the setting is when you don’t choose to change it.
Now that you’ve read the simplified version and know what the keywords mean, you can try reading the original text again. It might be easier to understand now.
Hey there! So, you know how when you’re playing a video game, you can adjust the difficulty setting? Well, imagine you can do something similar when you’re asking a really smart computer to create stuff for you. There’s a thing called ‘sampling temperature,’ and it ranges from 0 all the way up to 2.
Think of this like how spicy you want your food. A low temperature, like 0.2, means the computer plays it safe—like ordering just a tiny bit of chili. The stuff it comes up with won’t be too wild and will mostly make a lot of sense, sticking to what it knows best—kind of like how you stick to the rules when you first learn to play a game.
On the other hand, a higher temperature, like 0.8, is like asking for a lot of spice. It can lead to some unexpected flavors, or in the computer’s case, really weird and wacky ideas. The higher the number, the more it feels like the computer is taking a walk on the wild side, thinking way outside the box to surprise you.
Normally, the temperature is set right in the middle at 1—kind of like medium-spiced salsa, not too crazy but not too boring, either. It’s the best of both worlds, being creative while still making sense.
It’s like you’re the chef and the computer is your kitchen. You get to decide how adventurous you want to be with today’s special dish!
Now that this has been broken down, take a second look at the original explanation with this culinary and gaming adventure in mind. How does that change your understanding? If you think of any other cool ways to look at it, definitely throw your ideas into the mix!
When you ask a smart computer to make or predict stuff, there’s a “spiciness” setting called sampling temperature. A low setting, like 0.2, means it’s cautious and predictable, a bit boring. A high setting, like 1.8, is when things get wild and super random. The middle setting, 1, is just right—not too predictable, not too crazy. Imagine you’re a chef deciding how zesty you want tonight’s dish to be!
Now, look back at the original details—as if you’re adjusting the spice level in your favorite game or food—and think about how it shapes the computer’s creativity. If you have more thoughts or a different way of seeing it, share your ideas with me!
Hey there! Imagine your computer is like a super smart helper that can write and chat with you. This helper, called a GPT Thinking Partner, is really a big bunch of information that can read what you type and figure out the best way to reply. It thinks about everything you say and ask, and then makes guesses about what to say back. Sometimes there are lots of different guesses, and it has to pick one. How it picks one and how many guesses it thinks about can change based on some special settings. So, what we tell it affects what it tells us back. Now, try to read the original text again and let me know if you think this made it a bit easier to understand, and if you have anything else to add!
Use this to control how much the GPT Thinking Partner says in its reply. Do you want its responses to be a paragraph long (150 output tokens) or many paragraphs long (4096 output tokens)?
You can also control how creative a GPT Thinking Partner will be by setting its Top_p. A low setting, like 0.1, means the GPT Thinking Partner will only make a few, more likely guesses when it responds. At a high setting, like 1.0, it will guess from a longer list of possible responses. We generally recommend altering either this or Temperature, but not both at the same time. The default is 1.
Use positive numbers between 0 and 2.0 to encourage your GPT Thinking Partner to talk about new topics and generate more diverse and creative output. Negative numbers, between -2.0 and 0, will give your GPT Thinking Partner more consistent, predictable responses.
Use positive numbers between 0 and 2.0 to encourage your GPT Thinking Partner to find more and more creative ways of saying things. Negative numbers, between -2.0 and 0, will make your GPT Thinking Partner more focused and clear, though with more repetition.
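Both penalty settings above can be pictured as subtractions from a word's score before the next word is chosen: the frequency penalty grows with every repeat, while the presence penalty applies once a word has appeared at all. This is a rough sketch under that assumption, with made-up scores, not OpenAI's actual implementation.

```python
def apply_penalties(logits, generated, frequency_penalty=0.0, presence_penalty=0.0):
    # Count how often each word has already appeared in the output so far.
    counts = {}
    for token in generated:
        counts[token] = counts.get(token, 0) + 1
    adjusted = {}
    for token, score in logits.items():
        seen = counts.get(token, 0)
        adjusted[token] = (
            score
            - seen * frequency_penalty          # grows with each repetition
            - (1 if seen else 0) * presence_penalty  # flat, once it has appeared
        )
    return adjusted
```

A positive penalty lowers the scores of already-used words, nudging the GPT Thinking Partner toward fresh wording; a negative one raises them, making repetition more likely.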
General Document Comments
Certainly, let’s delve into the process of reflecting on your composition of the selected text: “Adding settings to GPT Thinking Partners and other such projects.”
Firstly, you should revisit the text and examine its structure, style, and themes. Look especially at its conciseness or brevity and what that might suggest about the intention and expected audience of the text. Is it meant to be informational, proposing an idea, or is it a directive for a specific audience? Your reflection should consider these aspects.
Begin your reflection by paraphrasing the essence of your text: “You intended to address the enhancement of a chatbot project by implementing additional settings.” Then, dive into its structure. Was the sentence constructed to be straightforward for functional purposes, or could it be stylistically richer?
Next, try to reconstruct the process you might have gone through while crafting this piece. Were there bullet points or an outline? Did you do research on the topic, or did you rely on pre-existing knowledge? Imagine revisiting a moment of pause where you chose each word deliberately, considering how it contributes to your overall goal. Reflect upon any challenges overcome, such as finding the most accurate terminology or ensuring clarity for potential readers who may interact with your text.
In the course of reviewing your writing, identify the cognitive strategies you deployed. For instance, were you “Thinking and communicating with clarity and precision,” making sure that every addition was necessary, clear, and effectively communicated? Did you engage in “Metacognition,” thinking about your own thinking and possibly evaluating if the way you articulated your message aligned with your intent? Maybe you also had to “Apply past knowledge to new situations,” especially if you drew upon prior technical experience to write about “GPT Thinking Partners.”
Discuss how each identified Habit of Mind played a role in your writing and how these habits are essential in other areas of your life. Giving personal examples, such as how striving for accuracy has served you in both writing and coding practices, might provide a holistic view of your skill set. Connect these habits to larger life skills, such as critical thinking or problem-solving, and reflect on how practicing them through writing enhances your ability in these domains.
As you prepare to write your in-depth reflection, remember how the reflective practice is akin to looking in a mirror, not just to see what is apparent but to understand what lies beneath. This practice allows you to grow as both a writer and a thinker. Allow your reflection to be more than a series of statements. Instead, let it be a journey through your thought process, decisions, and the personal learning that took place.
I invite you to let your mind wander, rereading your text and weighing every word and its purpose. From there, let that introspective path wind its way to your keyboard, and in the reply box, paint me a detailed landscape of your internal dialogue and decision-making trees during the writing process. Reflecting is not just part of learning—it’s part of becoming more self-aware, and it begins with the first word you write down. I look forward to seeing the horizons of your thoughts unfurl.