Eliot, Lance. “Generative AI Prompt Engineering Boosted Brightly via Clever Use of Macros and by Devising Prompts Based on Clearcut End-Goal Planning.” Forbes, 14 July 2023, www.forbes.com/sites/lanceeliot/2023/07/13/generative-ai-prompt-engineering-boosted-brightly-via-clever-use-of-macros-and-by-devising-prompts-based-on-clearcut-end-goal-planning/?sh=7801f9b3294a.
In today’s column, I am going to provide special coverage of some key new techniques and breakthroughs underlying prompt engineering when using generative AI. Prompt engineering, also referred to as prompt design, is a rapidly evolving realm. Anyone using generative AI such as the widely and wildly popular ChatGPT by AI maker OpenAI, or akin AI such as GPT-4 (OpenAI), Bard (Google), Claude (Anthropic), etc., ought to be paying close attention to the latest insights in crafting prompts.
Here's why.
The prompt that you opt to enter into any text-based generative AI will materially impact the kinds of essays and interactive dialogue that you have with the AI app.
If your prompt is thin and insufficient, the odds are that the generative AI is not going to provide you with an especially fulfilling output or interaction. If your prompt is bloated and contains extraneous remarks, generative AI is bound to be unable to home in on what you are particularly seeking. All in all, you could say that generative AI is somewhat like a box of chocolates, namely that you don’t necessarily know what you will get, but if you at least aim to first adequately describe what you want, there is a heightened chance that you will have a fulfilling dialogue.
I have extensively discussed prompt engineering in my columns. For example, one handy approach consists of doing a chain-of-thought (CoT) interactive chat with generative AI, see my discussion at the link here. The gist is that if you overtly choose to do a step-by-step walkthrough with the AI, each time distinctly delineating the elements of what you wish to cover, this can produce substantive results (often attaining better consequences than doing an all-at-once junkyard narrative). The computational and mathematical pattern-matching of generative AI can seemingly home in more closely on your indications when they are provided on a stepwise basis.
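To make the stepwise idea concrete, here is a minimal sketch in Python of how a chain-of-thought style interaction might be scripted. The `ask` function is a stand-in stub, not any vendor's actual API, and the example steps are my own illustration:

```python
# A minimal sketch of stepwise (chain-of-thought style) prompting.
# ask() stands in for whatever call sends a prompt to a generative AI
# and returns its reply; here it is stubbed out for illustration.
def ask(prompt: str) -> str:
    return f"[AI reply to: {prompt}]"

# Instead of one all-at-once prompt, delineate the elements step by step.
steps = [
    "Step 1: Summarize the key facts of the situation I describe next.",
    "Step 2: List the assumptions behind that summary.",
    "Step 3: Using those assumptions, draft a recommendation.",
]

transcript = []
for step in steps:
    reply = ask(step)  # each step builds on the prior conversational context
    transcript.append((step, reply))

for step, reply in transcript:
    print(step, "->", reply)
```

The point of the loop is simply that each prompt carries one delineated element, rather than cramming every element into a single sprawling prompt.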
Another example of a state-of-the-art innovation involving prompt engineering consists of using in-context modeling and a vector database, see my discussion at the link here. Allow me to briefly herein elaborate on this.
The idea is that you can do some astute pre-processing to get generative AI up-to-speed in a narrow domain, which otherwise the AI might not have had sufficient detailed data training on. For example, if you want to interact with generative AI on legal matters, generic generative AI can be quite weak in the legal domain, as exemplified by the recent incident of two attorneys that got into hot water by citing some AI-faked legal cases, see my coverage at the link here.
How can you get generic generative AI up to snuff in a specific domain? One means involves using newly available AI tools that will take as input the content of a particular domain and pre-process it for readiness by generative AI. The content is tokenized and otherwise pre-digested, as it were, and placed into a special database known as a vector database. Then, when you want to invoke that domain, you use a prompt that tells the generative AI to do in-context modeling or data-learning that uses your prompt and also engages the generative AI to reach into the vector database.
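The retrieval idea can be sketched in a few lines of Python. This is a toy illustration only: real tools use learned embeddings and a dedicated vector database product, whereas the 3-dimensional vectors, snippets, and function names below are all my own stand-ins:

```python
import math

# Toy "vector database": each domain snippet is stored with an embedding.
# Real systems use learned embeddings and a dedicated vector store; the
# 3-dimensional vectors here are purely illustrative.
vector_db = [
    ([0.9, 0.1, 0.0], "Case A held that the contract clause was enforceable."),
    ([0.1, 0.9, 0.0], "Statute B governs disclosure deadlines."),
    ([0.0, 0.2, 0.9], "Ruling C limited damages in negligence claims."),
]

def cosine(a, b):
    # Cosine similarity between two vectors.
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(x * x for x in b))
    return dot / norm

def retrieve(query_embedding, k=1):
    # Rank stored snippets by similarity to the query embedding.
    ranked = sorted(vector_db, key=lambda item: cosine(item[0], query_embedding),
                    reverse=True)
    return [text for _, text in ranked[:k]]

# The retrieved snippet is spliced into the prompt so the generative AI
# can do in-context learning on domain material it wasn't trained on.
query_embedding = [0.85, 0.15, 0.05]  # stands in for embedding the user's question
context = retrieve(query_embedding, k=1)
prompt = f"Context: {context[0]}\nQuestion: Is the clause enforceable?"
print(prompt)
```

The essential move is the last step: the domain snippet fetched from the vector database is placed into the prompt itself, which is what "in-context" refers to.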
The above examples of advances in prompt design or prompt engineering highlight that there are two major considerations associated with boosting prompt effectiveness and veracity:
- Prompt Wording: maximizing the effectiveness of how your prompts are worded.
- Prompt Automation: using prompt-related tools or add-ons that further aid your prompting activities.
Let’s next unpack those two considerations.
Duality Of Prompt Wording And Prompt Automation
You might assume that anyone using generative AI already knows the importance of entering suitably worded prompts.
Sorry, but you’d be mistaken for making that assumption.
Many people that are using generative AI seem to be mindlessly unaware that the wording of their prompts will markedly sway the direction of the AI interaction. The problem that these people face is that they might get off-target responses from the AI due to their own lackluster wording.
When this happens repeatedly, those people become outwardly irritated and upset at the AI, oftentimes blasting the generative AI as useless or impossible to contend with. Sadly, they might be shooting themselves in the foot simply via the ineffective wording of their prompts, not realizing that they are undercutting their own determined efforts.
Prompt-related tools or add-ons can significantly aid those people by providing automated guidance, steering them in a much more productive direction when using generative AI. In some instances, a person who lacks savviness when it comes to the wording of their prompts can be noticeably bolstered by using an AI add-on that nudges them in a better direction.
Not only do those AI add-ons help those that are prompt-devising novices, but even someone that becomes a guru at prompt wording can beneficially make use of AI add-ons. Those prompt-masters are likely to find themselves at times frustrated by various limitations of the generative AI, having reached the harsh boundaries of what clever wording alone can achieve. This is where automation can doubly come to the rescue. A well-devised prompt-focused AI tool or similar piece of automation can make a huge difference in nudging the generative AI beyond existing technological limits, along with aiding the crafting or wording of your prompts.
We, therefore, have two simultaneous considerations at play, such that you want to make sure your prompt wording is as good as you can get it (maximize your wording effectiveness), and at the same time you should consider using various prompt automation or tools that can further aid your prompt-related activities (maximize the generative AI and possibly exceed system limits prudently).
You want to try and naturally maximize your prompt wording effectiveness if you can. For example, sometimes you might simply learn over time about how to best devise prompts via a seat-of-the-pants or ad hoc trial-and-error journey. Another path consists of relying upon prompt-related cheat sheets or taking a training course on prompt engineering. And so on.
Few seem as yet to know about the prompt engineering add-ons and AI automation that are gradually being researched and rolled out. I’ve predicted that we will ultimately all be making use of such prompt-focused capabilities and that this will greatly lessen the need to manually know how to best word your prompts, see my analysis at the link here. These tools will either be built into generative AI directly or be easily added into generative AI so that nearly everyone will have access to them.
AI Ethics And AI Law Enter Into The Prompting Sphere
There are some significant AI Ethics and AI Law ramifications associated with all of this.
Consider for example the AI Ethics question about AI being democratized.
Worries have been that only an elite set of tech-savvy people will fruitfully make use of generative AI, leaving in the dust those that are potentially skittish about using AI or that are not versed in prompting skills. A cogent argument in favor of add-ons and AI automation for prompt engineering asserts that this kind of functionality will indeed democratize generative AI. No one will need to have swashbuckling wordsmithing acumen. Everyone will fully and productively be able to utilize generative AI due to the add-ons doing the heavy lifting for them.
A counterargument is that our wording skills will radically degrade as we become increasingly reliant on add-ons to do the wordsmithing task for us. Perhaps the democratization of generative AI will be a downfall that leads to the massive deskilling of society. All that these added tools are insidiously going to do is allow us to race to the bottom in terms of a bone-crushing loss in literacy. For my analysis of both sides of this controversial contention, see the link here.
In terms of the AI Law ramifications, we can look at prompt engineering in two key ways.
First, the use of generative AI by lawyers is a topic I’ve extensively covered and pointed out that the legal field is going to undergo a dramatic transformation once generative AI becomes data-trained in matters of the law, see my predictions about AI as applied to the law in the link here. Many lawyers at this time are unsure of how to use generative AI for their legal endeavors. Part of this relates to the lack of familiarity with prompt engineering and how to energize the best feasible legal use of generative AI. Additional and quite important caveats include the Intellectual Property rights qualms (see my coverage at the link here), privacy and confidentiality of their clients (see the link here), AI hallucinations impacting legal uses of generative AI (see the link here), and the like.
Second, there is rising interest in devising new laws and regulations pertaining to the governance of generative AI, which consists of applying the law to AI (i.e., the other side of the AI-Law realm is the Law-AI considerations). Lawmakers are struggling to keep AI suitably constrained to protect society, yet not overstep and perhaps dampen or outright crush innovation in AI that provides vital societal benefits. Do we, for example, need new laws to ensure that generative AI doesn’t proffer discriminatory narratives or interact in unduly biased and unseemly ways? Some say we do need such laws; others insist that we should wait and let innovation first take its course, see my analyses at the link here and the link here.
Please keep AI Ethics and AI Law at the forefront of your thinking when it comes to the present and future direction of generative AI and AI all told.
Tactical Versus Strategic Perspective On Prompting
Returning to the everyday facets of prompt engineering, the techniques and approaches proffered about prompt design and prompt wording can be conveniently characterized this way:
- Tactical Prompting: microscopic-scale guidance on the wording of individual prompts.
- Strategic Prompting: macroscopic, bigger-picture guidance on how you make use of generative AI overall.
Let’s then combine the two earlier noted aspects of Prompt Wording and Prompt Automation with the notions of Tactical-oriented prompting versus Strategic-oriented prompting.
Doing so gets us these four squares (a 2x2 matrix):
- PW-TP: Prompt Wording that is Tactical Prompting oriented.
- PA-TP: Prompt Automation that is Tactical Prompting oriented.
- PW-SP: Prompt Wording that is Strategic Prompting oriented.
- PA-SP: Prompt Automation that is Strategic Prompting oriented.
Those four combinations are each potent in their own manner.
In the first instance of PW-TP, you might seek out someone’s litany of prompt wording advice that is primarily of a tactical nature, such as telling you to mindfully word your prompts affirmatively and confidently. This might include writing shortened sentences and keeping your vocabulary directly to the point at hand concerning whatever you want the generative AI to ruminate on. Etc.
The second instance, PA-TP, consists of AI add-ons and tools that help with your prompt wording on a somewhat microscopic scale. I’ll showcase in a moment an example of this that was recently announced and released by Stephen Wolfram.
The third instance, PW-SP, entails learning about how to strategically make use of generative AI via the larger picture prompting strategies. I’ll be showcasing such an example momentarily herein that was recently devised and depicted by Dazza Greenwood.
The fourth instance, PA-SP, involves using AI add-ons and tools that contend with a macroscopic perspective on your prompts and the use of generative AI. My earlier mention of the latest in in-context modeling with the use of vector databases is illustrative of this emerging category.
Prompt Automation That Is Constructively Tactical Oriented
One quite exciting recently announced and released product in the PA-TP category consists of a suite of prompt-oriented macros that can be used in generative AI. The novel approach was described by the noted British-American computer scientist Stephen Wolfram via his business Wolfram Research in a blog posting entitled “Prompts for Work & Play: Launching the Wolfram Prompt Repository,” posted on June 7, 2023.
First, think about your use of macros in ordinary spreadsheets. You might find yourself routinely doing the same action over and over, such as copying a spreadsheet cell and modifying it before you paste it into another part of the sheet. Rather than always laboriously performing that action, you might craft a macro that semi-automates the spreadsheet task at hand. You can thereafter merely invoke the macro and the spreadsheet activity will be run via the stored macro.
Let’s use that same concept when composing prompts in generative AI.
Suppose you sometimes opt to have generative AI interact as though it is the beloved character Yoda from Star Wars. You might initially devise a prompt that tells generative AI to pretend that it is Yoda and respond to you henceforth in a Yoda-like manner. This persona-establishing prompt could be several sentences in length. You might need to provide a somewhat detailed explanation about the types of lingo Yoda would use and how far you want the generative AI to go when responding in that preferred tone and style.
Each time that you are using generative AI and want to invoke the Yoda persona, you would either have to laboriously retype that depiction or maybe store it in a file and do a copy-and-paste into the prompt window of the AI app. Quite tiring. Instead, you could potentially create a macro that contained the same set of instructions and merely invoke the macro. The macro would feed that prompt silently into the generative AI and get the AI pattern-matching into the contextual setting that you want.
That’s the underlying notion of devising or revising generative AI to encompass the use of macros. This can also be done by employing an AI add-on to the prompt component of the generative AI.
Wolfram announced and made available just such a macro capability, as these excerpts from the recent blog posting explain (generative AI in this depiction is also referred to as a large language model or LLM):
As stated in the above excerpts, they have set up a Prompt Repository that contains the various macros. The macros are initially categorized as being in one of three classifications, consisting of personas, functions, and modifiers.
Here’s how the three categories of prompts are defined in the blog posting:
I’m sure you are curious as to how the macros are used in a given prompt.
In the case of the personas, you can invoke a macro by preceding the macro name with an “@” and use that within the wording of your prompt. Consider this example of getting generative AI to start interacting as Yoda and respond to your query about wanting to eat some chocolate:
For invoking a function, you precede the name of the function with an exclamation mark and can once again slide this into your prompt. An example would be that you tell the generative AI to rephrase your prompts into formalized wording:
A modifier is used by preceding the named modifier with a hashtag symbol. For example, there is a modifier that will get the generative AI to respond to questions by only replying with a Yes or No answer. Take a look at this:
You might be wondering what the underlying definitions are for each of those macros. In the instance of the modifier that requires the generative AI to respond with only Yes or No, here’s the general hidden prompt that is passed into the generative AI when you use the “#YesNo” modifier:
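The mechanics of the three prefixes can be sketched in a few lines of Python. To be clear, this is my own illustrative mock-up of the prefix idea (@persona, !function, #modifier), not the Wolfram Prompt Repository’s actual implementation, and the hidden macro wordings here are stand-ins:

```python
import re

# Illustrative macro definitions; the hidden wordings are stand-ins,
# not the repository's actual prompt text.
MACROS = {
    "@Yoda": "Respond in the speech style of Yoda from Star Wars.",
    "!Formalize": "Rephrase the user's request into formal wording.",
    "#YesNo": "Answer the question with only Yes or No.",
}

def expand_macros(prompt: str) -> str:
    """Replace @persona, !function, and #modifier macro names with the
    hidden prompt text that gets silently fed to the generative AI."""
    def substitute(match):
        name = match.group(0)
        return MACROS.get(name, name)  # unknown names pass through untouched
    return re.sub(r"[@!#]\w+", substitute, prompt)

expanded = expand_macros("@Yoda Should I eat some chocolate? #YesNo")
print(expanded)
```

The user types the short macro names, but the generative AI receives the expanded persona and modifier instructions inline, which is the "silent feeding" described above.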
The Prompt Repository currently includes approximately 52 personas, 105 functions, and 36 modifier prompts (these counts are changing as more macros are continually being added to the database). Here’s an illustrative listing of some of the personas available at this time, such as Yoda, AnimalSpeak, CarnivalBarker, CutiePie, DrillSergeant, GenZSpeak, Interrogator, MadHatter, OldTimeGangsterSpeak, PirateSpeak, Spock, SurferDudeSpeak, and others. The idea is that more macros will be added to the collection based on feedback and the anticipated needs of generative AI users.
In a sense, you could suggest that the macros become their own kind of shortcut language. The same can be said about the use of a spreadsheet. Once you have lots of macros to choose from, eventually you are likely to pepper your spreadsheet with a slew of macros. This makes things faster for you when crafting the spreadsheet and making use of it.
The Wolfram blog emphasizes that higher-level wording conception:
This is laudable work and you can expect that many vendors will be bringing macro foundations to the already zany and ever-changing world of generative AI.
The Future Of Macros When Using For Prompt Engineering
There are numerous tradeoffs generally associated with the use of macros.
Let’s consider some salient caveats and overall potential gotchas associated with any overall approach that makes use of macros, including as used in spreadsheets and now evolving to be used in generative AI prompts.
First, we are likely to see lots of vendors offering macros for generative AI. Each set or suite of macros is likely to differ from another vendor’s. The names might be different. The underlying prompt wording might be different. And so on. Thus, the likely proprietary nature means that you might get used to using a particular vendor’s macros and essentially become dependent upon those particular macros and what they do.
Second, the macros will need to be established for each generative AI app that the vendor intends to make the macros available in. You might be using a generative AI that doesn’t yet have the macros established. In that case, if you have become overly comfortable using the macros, you might be somewhat trapped (by your own self-selection) into only using a generative AI that perchance has those macros established.
Third, let’s assume that most people will not studiously examine the underlying definitions of the macros that are available in a particular generative AI. The person won’t know for sure what the macro is feeding into the generative AI. A false assumption on the part of the person using the macro is bound to create confusion and consternation when using the generative AI.
Fourth, suppose that the macros are sneakily modified or somehow hacked. All of a sudden, unbeknownst to you, the macros you previously used are now doing under-the-hood shenanigans that you had no idea could occur. The vendor would presumably have secured their macros to avoid this corruption.
Fifth, and tying somewhat to the fourth point, a question arises as to whether you should be permitted to craft macros anew. And, if so, whether only you would have access to those macros or whether they might be made available to others. On the one hand, it abundantly makes sense to allow people to make available their macros publicly so that others can benefit from their capability. The other side of that coin is that the macros might be dastardly or at least not well tested, ergo those that opt to use the macros could be on shaky ground (the retort is that perhaps the wisdom of the crowd will weed out those foul macros or that a Yelp-like rating will provide a sense of trustworthiness about those added macros).
In the case of the Wolfram Prompt Repository, here’s what they are undertaking:
Allow me a brief overall final thought on the topic of macros.
If we are all going to pervasively go the macros route across the board for use in prompts and for generative AI, perhaps a global standard might be a useful means of trying to keep a semblance of order and balance. The hope of course is that this would be done expeditiously before the horse gets out of the barn (though, maybe the horse is already galloping past the open gate).
Time will tell.
Doing Prompt Wording In A Strategic Mindset
I had earlier identified that another overarching formulation about prompt engineering consists of Prompt Wording that entails Strategic Prompting (PW-SP).
Let’s dive into that construct.
I’m betting that you might know a famous line often used by Stephen Covey, author of the acclaimed book The 7 Habits of Highly Effective People: “Begin with the end in mind.” The principle stipulates that you might be wise to first think about the end goal of whatever you aim to do. In academic literature, this is commonly referred to as backward planning, backward goal-setting, backward design, reverse-order planning, end-goal planning, back-casting, and other monikers.
As an aside, I disfavor the use of the word “backward” in those catchphrases because it seems to spur some to think negatively of the precept at hand. We tend to consider anything that is backward as being messed up or distorted in an undesirable fashion. For that reason, I tend to refer to the principle as end-goal planning. You are seeking to start your plan by first identifying the end goal that you wish to attain. You can then either work step-by-step backward from that end goal to arrive at the steps needed to ultimately reach it, or you can jump to the opening of your plan while at least keeping in mind the end goal that you hope to arrive at.
To further elaborate why it is beneficial to do end-goal planning, consider these excerpts from a research study entitled “Backward Planning: Effects Of Planning Direction On Predictions Of Task Completion” by Jessica Wiese, Roger Buehler, and Dale Griffin, Cambridge University Press, posted on January 1, 2023:
If you don’t begin your deliberations by first identifying a desired end goal, the chances are that you’ll wander all over the map due to not having an endpoint in mind. This is illuminated by Bill Copeland, the American poet and esteemed historian, who offered this cautionary bit of wisdom: “The trouble with not having a goal is that you can spend your life running up and down the field and never score.”
I assume that you can see the compelling basis for doing end-goal planning.
A recent and insightful posting entitled “Success is All You Need” by Dazza Greenwood, founder of law.MIT.edu (research) and CIVICS.com (consultancy), posted on June 19, 2023, available at the link here, has cleverly managed to dovetail together notable prompt engineering considerations of generative AI with the all-so-important end-goal planning cogitation:
This fully fits into the PW-SP categorization by outrightly proffering a vital insight for prompt wording that seeks to get you to strategically think about how you are composing your prompts. You should start your prompt engineering by first coming up with a suitable end goal, coupled with tangible and pertinent metrics.
Explicitly say to yourself:
This is relatively straightforward as a rule of thumb, but it is sadly often overlooked and not top of mind for those using generative AI. You might be surprised to know that many users of generative AI are either unaware of this methodology or are mired in classic seat-of-the-pants methods. They summarily launch into a generative AI session and fumble their way through their AI dialogue. These ad hoc pursuers tend to waste time as they meander in a scattered fashion. If they or their firm is paying for the compute cycles of the cloud servers running the AI, the final billing is bound to be a lot higher than it needed to be.
These scattergun efforts also tend to confound the mathematical and computational pattern-matching. The generative AI cannot land on whatever sloppy drift is being obscurely conveyed. You can ask one person with an ad hoc demeanor to seek out answers via generative AI and ask a second person that is using an end-goal precept, and the odds are pretty solid that the end-goal embracer is going to get a more on-target dialogue, in a shorter time frame, and with a much greater beneficial outcome than the ad hoc pursuer (there are various prompt engineering and human behavior studies underway that I anticipate will shed light on this).
Dazza Greenwood provided in his blog posting some handy examples of how prompts can be devised and suitably worded via leveraging a strategic mindset. This amply showcases that you don’t need to be a brain surgeon or rocket scientist to use a strategic perspective for devising your prompts. A bit of elbow grease and thoughtful contemplation will do the trick.
Consider these example prompts as noted in his blog:
Closely observe that the first prompt above includes stipulated goals as part of the end-goal planning being performed. The generative AI will tend to be guided toward trying to computationally pattern-match to achieve, in this example, the three success criteria. If you had left out the success criteria, the generative AI might produce all manner of dialogue that, though possibly interesting, wouldn’t necessarily home in on those specific criteria. Again, this provides a semblance of the boundaries and goalposts that you want the generative AI to focus on.
The second prompt further reinforces this attention to the stated three success criteria. The chances are that the first prompt will get a lengthy narrative that articulates how the generative AI-devised plan meets the stated criteria. In the second prompt, you get the generative AI to format the plan in a readily accessible shape, based on the engagement of the three success criteria, and you tell the AI to essentially explain or provide a rationale. I’ve discussed at length the importance of having AI explain the logic underlying its outputs, an expanding and vital field known as explainable AI or XAI, see my coverage at the link here.
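As a sketch of how end-goal planning can be baked directly into a prompt, here is a small Python helper that assembles a prompt from a stated goal plus explicit success criteria. The wording, the goal, and the criteria are my own illustration, not Greenwood’s actual prompts:

```python
def build_end_goal_prompt(goal: str, success_criteria: list) -> str:
    """Assemble a prompt that states the end goal first, then the explicit
    success criteria the generative AI should pattern-match toward."""
    lines = [f"My end goal: {goal}", "Success criteria:"]
    lines += [f"{i}. {c}" for i, c in enumerate(success_criteria, start=1)]
    lines.append("Devise a plan that meets every criterion, and explain "
                 "how the plan satisfies each one.")
    return "\n".join(lines)

prompt = build_end_goal_prompt(
    goal="Launch a neighborhood recycling program",   # illustrative goal
    success_criteria=[
        "At least 100 households enrolled in three months",
        "Monthly cost under $500",
        "A measurable reduction in landfill waste",
    ],
)
print(prompt)
```

Note that the template also asks the AI to explain how each criterion is satisfied, mirroring the explainability point made above.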
The bottom line is that you should give due consideration to the strategic perspective associated with prompt engineering, and I’d strongly suggest that you take a look at Dazza Greenwood’s shrewd commentary on how to get underway in that vaunted and vital pursuit.
Conclusion
Those that want to toy around with generative AI are welcome to continue their idle playfulness. They probably don’t need any training or guidance on the writing of their prompts. Nor do they likely need any AI add-ons that could help with their prompt design. If fun is all that is required, they can remain joyfully in the fun zone of generative AI.
But if you are taking generative AI seriously, perhaps in a professional context such as a lawyer or a medical doctor (see my coverage of medical malpractice that is looming over medical professionals that use generative AI in haphazard ways, at the link here), you need to wake up and smell the roses. You need to get serious about how you compose your prompts.
There is an adage in the computer field that says garbage-in begets garbage-out (referred to as GIGO). When your prompts are essentially garbage, the generative AI is going to have a devil of a time mathematically and computationally trying to serve you with useful interaction and essays. You are also increasing the chances of the generative AI producing errors, falsehoods, biases, glitches, and so-called AI hallucinations.
Get your act together and immerse yourself into the proper and advisable tactics and strategies of prompt wording and prompt engineering. Furthermore, go ahead and kick the tires on the latest emerging AI add-ons and tools that are purposefully devised to boost your prompting efforts. You will be glad that you have lifted yourself into a state of being that maximizes the payoff from using generative AI.
A final remark for now.
A truism that we all tend to learn at one point during our lives is that if you aren’t sure of where you are going, you’ll probably end up somewhere else. The widespread rush to use generative AI has spurred lots of people to wantonly make use of generative AI. They enter prompts that have little chance of garnering successful dialogues and regrettably receive answers from generative AI that have marginal value.
Keep your eye on the prize, compose your prompts soundly, use whatever AI add-ons seem practical and beneficial, and aim to make generative AI sing wonderous and ample songs for you.
Those are decidedly wise words worth living by.
Dr. Lance B. Eliot is a world-renowned expert on Artificial Intelligence (AI).