
Paul's Logs December 8 - 14

Author: Paul Allison

17 additions to document, most recent about 2 years ago

When       Why
Dec-08-23  OpenAI's settings
Dec-09-23  Temperature description
Dec-09-23  Intro to the settings
Dec-10-23  Wording change
Dec-10-23  Wording change
Dec-10-23  Wording change
Dec-10-23  Wording change
Dec-10-23  Wording change
Dec-10-23  Wording change
Dec-10-23  Wording change
Dec-10-23  Wording change
Dec-10-23  Wording change
Dec-10-23  Wording change
Dec-10-23  Condensed Intro to the settings
Dec-11-23  Language for settings
Dec-11-23  Temperature and Top_P description
Dec-11-23  Penalty settings

12 changes, most recent about 2 years ago


Adding settings to GPT Thinking Partners and other such projects.

DMU Timestamp: December 08, 2023 23:32

Added December 08, 2023 at 6:35pm by Paul Allison
Title: OpenAI's settings

Advanced Settings

Maximum tokens: The maximum number of tokens to generate. Should be in the range 0 – 32768.

Stop sequences: Up to 4 sequences where the API will stop generating further tokens.

Temperature: What sampling temperature to use, between 0 and 2. Higher values like 0.8 will make the output more random, while lower values like 0.2 will make it more focused and deterministic. Default 1.

Top P: An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass. So 0.1 means only the tokens comprising the top 10% probability mass are considered. We generally recommend altering this or temperature but not both. Should be in the range 0 – 1. Default 1.

Frequency penalty: Number between -2.0 and 2.0. Positive values penalize new tokens based on their existing frequency in the text so far, decreasing the model's likelihood to repeat the same line verbatim. Default 0.

Presence penalty: Number between -2.0 and 2.0. Positive values penalize new tokens based on whether they appear in the text so far, increasing the model's likelihood to talk about new topics. Default 0.
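The two penalty settings can be sketched as adjustments to a candidate word's score before sampling. In this toy version (the scores and counts are invented for illustration), the frequency penalty grows with how many times a word has already appeared, while the presence penalty applies once, flatly, if it has appeared at all:

```python
def penalized_score(score, count, frequency_penalty=0.0, presence_penalty=0.0):
    """Lower a candidate word's raw score based on its prior use.

    count: how many times the word already appears in the text so far.
    """
    score -= frequency_penalty * count                    # grows with repetition
    score -= presence_penalty * (1 if count > 0 else 0)   # flat, once present
    return score

# Invented numbers: "the" has already appeared 5 times, "zebra" not at all.
assert penalized_score(2.0, 5, frequency_penalty=0.5) == -0.5
assert penalized_score(2.0, 0, presence_penalty=1.5) == 2.0
assert penalized_score(2.0, 1, presence_penalty=1.5) == 0.5
```

A repeated word's score drops further with every repetition under the frequency penalty, while the presence penalty only cares whether the word has shown up at all, nudging the model toward new topics.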

Seed: This feature is in Beta. If specified, our system will make a best effort to sample deterministically, such that repeated requests with the same seed and parameters should return the same result. Determinism is not guaranteed, and you should refer to the system_fingerprint response parameter to monitor changes in the backend. Should be in the range 0 – 2147483647.
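Collected in one place, the settings above can be sketched as a chat-completion request payload plus a small range check. The model name and message are placeholder values, and the range table simply restates the limits given in this section:

```python
# Allowed ranges, restated from the setting descriptions above.
RANGES = {
    "max_tokens": (0, 32768),
    "temperature": (0.0, 2.0),
    "top_p": (0.0, 1.0),
    "frequency_penalty": (-2.0, 2.0),
    "presence_penalty": (-2.0, 2.0),
    "seed": (0, 2147483647),
}

def validate_settings(settings):
    """Return the names of any out-of-range parameters (empty list if all valid)."""
    bad = []
    for name, value in settings.items():
        if name in RANGES:
            lo, hi = RANGES[name]
            if not (lo <= value <= hi):
                bad.append(name)
    return bad

payload = {
    "model": "gpt-4-1106-preview",   # placeholder model name
    "messages": [{"role": "user", "content": "Summarize the selected text."}],
    "max_tokens": 512,
    "temperature": 1.0,              # default; change this OR top_p, not both
    "top_p": 1.0,                    # default
    "frequency_penalty": 0.0,
    "presence_penalty": 0.0,
    "stop": ["\n\n"],                # up to 4 stop sequences
    "seed": 42,                      # best-effort determinism (Beta)
}

assert validate_settings(payload) == []
```

Running the check before sending a request catches a mistyped value, such as a temperature of 2.5, before it reaches the API.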


Added December 09, 2023 at 5:27pm by Paul Allison
Title: Temperature description

When you create a GPT Thinking Partner, there’s a “spiciness” setting called Temperature. A low setting, like 0.2, means the GPT Thinking Partner will be cautious and predictable, a bit boring. A high setting, like 1.8, is when things get wild and super random. The middle setting, 1.0, is just right: not too predictable, not too crazy. Note that you can choose any of these decimal values: 0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9, 1.0, 1.1, 1.2, 1.3, 1.4, 1.5, 1.6, 1.7, 1.8, 1.9, 2.0.
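The "spiciness" effect can be shown numerically: temperature divides the model's raw scores (logits) before they are turned into probabilities, so a low value sharpens the choice and a high value flattens it. A minimal sketch with three invented candidate scores:

```python
import math

def softmax_with_temperature(logits, temperature):
    """Convert raw scores to probabilities, scaled by temperature."""
    scaled = [x / temperature for x in logits]
    m = max(scaled)                          # subtract max for numerical stability
    exps = [math.exp(x - m) for x in scaled]
    total = sum(exps)
    return [e / total for e in exps]

# Invented raw scores for three candidate next words.
logits = [2.0, 1.0, 0.2]

cautious = softmax_with_temperature(logits, 0.2)   # low: nearly deterministic
balanced = softmax_with_temperature(logits, 1.0)   # the default
wild = softmax_with_temperature(logits, 1.8)       # high: closer to uniform

# The top candidate dominates at low temperature and loses its edge at high.
assert cautious[0] > balanced[0] > wild[0]
```

At 0.2 the most likely word is chosen almost every time; at 1.8 the runners-up get a real chance, which is exactly the "wild and super random" behavior described above.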


Added December 09, 2023 at 11:36pm by Paul Allison
Title: Intro to the settings


A GPT Thinking Partner gets its life from a Large Language Model (LLM) taking in the text that a user selects, the text from the prompt, and the text from the user's question or statement and their (optional) description. The LLM answers the user's question and follows the instructions in the prompt to carry out tasks with the selected text by recognizing the knowledge required and by making a list of predictions for what words might be a useful response, then more and more predictions of what might come next. Some of the predicted strings of text appear more often in the database than others. After listing and ordering (from most to least likely) some or all (depending on the Top P setting) of the possible next strings of text, the LLM more or less randomly chooses one of them. This is a complex process in which layers of text dynamically influence each other's predictions and decisions until a meaningful conversational output is produced. The prompt and the advanced settings here determine just how many predictions are considered and how random the choice is. A GPT Thinking Partner guides the LLM in deciding which predictions it puts on the list and how it chooses. In this way, the prompt and these settings shape the output of the LLM.
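The listing, ordering, and "more or less randomly choosing" described above can be sketched as nucleus (Top P) sampling over a toy prediction list. The words and probabilities here are invented for illustration:

```python
import random

def nucleus_sample(candidates, top_p, rng=random.random):
    """Pick a next word from (word, probability) pairs, nucleus-sampling style.

    The candidates are ordered from most to least likely; only the smallest
    set whose probabilities add up to at least top_p stays on the list, then
    one survivor is drawn at random, weighted by its probability.
    """
    ranked = sorted(candidates, key=lambda wp: wp[1], reverse=True)
    nucleus, running = [], 0.0
    for word, p in ranked:
        nucleus.append((word, p))
        running += p
        if running >= top_p:
            break
    # Renormalize the shortlist and make one weighted random draw.
    total = sum(p for _, p in nucleus)
    draw = rng() * total
    for word, p in nucleus:
        draw -= p
        if draw <= 0:
            return word
    return nucleus[-1][0]

# Invented next-word predictions, most to least likely.
predictions = [("dog", 0.5), ("cat", 0.3), ("tree", 0.15), ("zebra", 0.05)]

# top_p=0.5 keeps only "dog" on the list; top_p=1.0 keeps every candidate.
assert nucleus_sample(predictions, 0.5) == "dog"
```

A low Top P shrinks the list to the safest guesses, while 1.0 leaves even the rare "zebra" in play, which is how this one setting controls how adventurous the choice can be.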


Added December 10, 2023 at 7:46pm by Paul Allison
Title: Condensed Intro to the settings

Imagine your computer has a team of super smart helpers that can read, write, and chat with you. These helpers, called GPT Thinking Partners, rely on a large database that lets them understand any text you give them and figure out the best way to reply. A Thinking Partner considers everything you say and ask, and then makes guesses about what to say back. Sometimes there are lots of different guesses, and it has to pick one. How it picks one, and how many guesses it considers, can change based on the special settings you see here. So both what we tell it to be like and to do in our prompts, and how we mark these settings, affect what it tells us back.


Added December 11, 2023 at 2:41pm by Paul Allison
Title: Language for settings


Added December 11, 2023 at 2:59pm by Paul Allison
Title: Temperature and Top_P description


Added December 11, 2023 at 3:13pm by Paul Allison
Title: Penalty settings





