Eliot, Lance. “Latest Prompt Engineering Technique Inventively Transforms Imperfect Prompts into Superb Interactions for Using Generative AI.” Forbes, Forbes Magazine, 26 July 2023, www.forbes.com/sites/lanceeliot/2023/07/26/latest-prompt-engineering-technique-inventively-transforms-imperfect-prompts-into-superb-interactions-for-using-generative-ai/?sh=3da6d2902c31.
Perfect is the enemy of good.
That sage piece of wisdom attributed to Voltaire is a handy reminder that striving for perfection can sometimes be a mistake. Rather than going all out to attain perfection, it is often more prudent and practical to be satisfied with achieving good enough. Time and again, the dusty and bumpy trail of achingly seeking perfection is littered with those who have expended undue energy and attention along the way.
A commonsense willingness to go along with a good enough strategy can be entirely fulfilling.
In today’s column, I am continuing my special series on the latest advances in prompt engineering and will discuss how the widespread desire to compose perfect prompts is getting in the way of effectively using generative AI to get your everyday meaningful work done. People tend to mentally contort themselves trying to ascertain the perfect wording for their prompts. This turns out to be misguided perfectionism. The latest research on generative AI indicates that you can instead allow yourself to make use of imperfect prompts, out of which you can aggregate and devise stout prompts that will do the trick and be successful for you.
In short, you can fruitfully harness and leverage imperfection by design, and up your prompt engineering acumen when it comes to speedily and productively leaning into generative AI.
My coverage will proceed as follows. First, we will look closely at an important AI research study that examined the prevalence of imperfect prompts when using generative AI. The researchers proposed a clever means of turning imperfect prompts into robust ones. I will further expand on their underlying notions and elucidate a specific step-by-step technique that I use that also leverages imperfect prompts rather than the usual tendency to get mired and depressed about wanting to compose perfect prompts. Oddly enough, you can embrace imperfection and allow it to become your friend, assuming that you take the right steps to transform your imperfect prompts into good enough improvements.
While we are on this weighty matter, I would like to briefly take a side tangent that is worth contemplating. Some are at times puzzled that the focus herein entails going from imperfect prompts toward (only) good enough prompts. Shouldn’t we instead be concentrating on going from imperfect prompts to purely perfect prompts? If we are going to make the leap, go all the way, comes the typical refrain on these crucial matters.
Aha, recall my opening remark about perfection being the enemy of good. I argue that the very conception of so-called perfect prompts is a mirage and problematic. You can’t get there, and it doesn’t make sense to try.
I shall endeavor to explain why, so keep on reading, thanks.
Why Perfect Prompts Can’t Be
Here’s the deal.
Generative AI apps make use of probabilistic and statistical underpinnings when parsing prompts and computationally ascertaining what kind of generated responses are to be produced. The crux is that each time you enter a prompt, even if you word it exactly the same, you are extremely likely to get a quite different generated response rather than precisely the same response every time.
Some people seem to think that if they compose a prompt in some ideal fashion, this will spur the generative AI to respond in a wholly compliant and completely predictable manner. They would be mistaken. The thing is, you are playing a Las Vegas game of roulette whether you realize it or not. Generative AI is like a proverbial box of chocolates: you never know what you might get. This isn’t a tightly bounded tit-for-tat. At best, you can only guess roughly how your prompt will be interpreted and how the generative AI might potentially respond.
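To make the probabilistic point concrete, here is a minimal, self-contained sketch in Python. It uses an invented toy distribution rather than any real model or vendor API, so treat it purely as an illustration of why an identical prompt tends to produce varying responses.

```python
# A toy, self-contained illustration (not any vendor's API): generation samples
# from a probability distribution over candidate continuations, so the same
# prompt can come back with different responses on different tries.
import random

# Invented weights over possible continuations for one fixed prompt.
CANDIDATES = {
    "Sure, here are three ideas...": 0.40,
    "Certainly! One approach is...": 0.35,
    "It depends on your goal, but...": 0.25,
}

def sample_response(prompt: str) -> str:
    """Pretend 'model' that samples a continuation (the toy ignores the prompt
    text; a real model conditions on it, yet still samples probabilistically)."""
    choices, weights = zip(*CANDIDATES.items())
    return random.choices(choices, weights=weights, k=1)[0]

if __name__ == "__main__":
    prompt = "Suggest ways to improve my weekly status report."
    for _ in range(3):  # identical prompt, three calls, likely differing outputs
        print(sample_response(prompt))
```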
Thus, if you sit there all day staring at a blank screen and painfully mentally agonizing over coming up with a perfect prompt, the odds are that by the time you enter the prompt and get the generative AI to respond, you could have used all that listless time and energy toward more useful purposes. Do not stare at the prompt window of generative AI and allow yourself to become frozen by analysis paralysis. You are going to be much better served by going with something less than a presumed perfect prompt.
That being said, a smarmy retort is that you then might as well compose and enter any wildcard prompt that you desire. Nope, that’s not a good idea either. You shouldn’t just toss your arms in the air and give up on composing reasonable prompts. Prompts should be within the ballpark of the matter you are pursuing with the generative AI, or else you will get a plethora of nonsense and confusion.
The desired aim is to reach the Goldilocks of prompts for your ongoing prompt engineering prowess.
You want prompts that are just right, not too cold, and not too hot.
A means to arrive at the Goldilocks range is to start with imperfect prompts and use those to coalesce toward good enough prompts. Meanwhile, for heaven’s sake and your sake, toss perfect prompts out of your mind and out of the window. They aren’t worth your breath. Worse still, they are likely to mercilessly grab you and keep you in a dark dungeon of despair. You will forever be chagrined that your perfect prompts do not reliably turn out to be perfect. You might occasionally hit the lottery, but on average over continual use of generative AI, you are rarely going to knock one out of the park.
Do not fall for that seductive sorrowful siren song (note, a siren song in Greek mythology consisted of music and voices that lured sailors to their doom and shipwreck).
Before I dive into the transforming of imperfect prompts, let’s make sure we are all on the same page when it comes to the keystones of prompt engineering and generative AI.
Prompt Engineering Is A Cornerstone For Generative AI
As a quick backgrounder, prompt engineering, also referred to as prompt design, is a rapidly evolving realm and is vital to effectively and efficiently using generative AI. Anyone using generative AI such as the widely and wildly popular ChatGPT by AI maker OpenAI, or akin AI such as GPT-4 (OpenAI), Bard (Google), Claude 2 (Anthropic), etc., ought to be paying close attention to the latest innovations for crafting viable and pragmatic prompts.
For those of you interested in prompt engineering or prompt design, I’ve been doing an ongoing series of insightful looks at the latest in this expanding and evolving realm, including OpenAI’s newest feature called custom instructions, which I refer to as persistent context (see the link here), the leveraging of multi-personas in generative AI (see the link here), the advent of chain-of-thought approaches (see the link here), the use of in-model learning and vector databases (see the link here), and additional coverage including an inspection of the use of macros and the use of end-goal planning when using generative AI (see the link here).
The use of generative AI can altogether succeed or falter based on the prompt that you enter.
If you provide a prompt that is poorly composed, the odds are that the generative AI will wander all over the map and you won’t get anything on point related to your inquiry. Being demonstrably specific can be advantageous, but even that can confound or otherwise fail to get you the results you are seeking. A wide variety of cheat sheets and training courses for suitable ways to compose and utilize prompts have been rapidly entering the marketplace to try and help people leverage generative AI soundly. In addition, add-ons to generative AI have been devised to aid you when trying to come up with prudent prompts, see my coverage at the link here.
AI Ethics and AI Law also stridently enter into the prompt engineering domain. For example, whatever prompt you opt to compose can directly or inadvertently elicit or foster the potential of generative AI to produce essays and interactions that imbue untoward biases, errors, falsehoods, glitches, and even so-called AI hallucinations (I do not favor the catchphrase of AI hallucinations, though it has admittedly tremendous stickiness in the media; here’s my take on AI hallucinations at the link here).
There is also a marked chance that we will ultimately see lawmakers come to the fore on these matters, possibly devising and putting in place new laws or regulations to try and scope and curtail misuses of generative AI. Regarding prompt engineering, there are likely going to be heated debates over putting boundaries around the kinds of prompts you can use. This might include requiring AI makers to filter and prevent certain presumed inappropriate or unsuitable prompts, a cringe-worthy issue for some that borders on free speech considerations. For my ongoing coverage of these types of AI Ethics and AI Law issues, see the link here and the link here, just to name a few.
With the above as an overarching perspective, we are ready to jump into today’s discussion.
Prompts Are The Coinage Of The Realm
I will next take a gander at a handy research study on generative AI entitled “Ask Me Anything: A Simple Strategy For Prompting Language Models” by Simran Arora, Avanika Narayan, Mayee F. Chen, Laurel Orr, Neel Guha, Kush Bhatia, Ines Chami, Frederic Sala, and Christopher Ré (posted online on November 20, 2022). I’ll highlight some key points and provide an analysis of what the researchers did and how their results can be suitably leveraged.
Remember that I earlier indicated you are barking up the wrong tree by trying to devise so-called perfect prompts. Part of the reason that I gave is that you cannot for sure anticipate how the generative AI will react to a given prompt. A keen way to say this is to note that you are dealing with a brittle process, a point which the research paper forthrightly brings up.
I trust that you can see how this particular research study reinforces the conception that you ought to set aside perfect prompts and allow yourself to work with imperfect prompts that can lead to stout prompts.
They proffer the idea that you can aggregate a bunch of imperfect prompts to get something of a resultant synergistic good-enough prompt. I tend to speak of this as a triangulation method. Do something that is perhaps off-target but within the playing field. Do another attempt, again in the range of the playing field. After doing this some number of times, coalesce the separate and imperfect prompts into a newly derived prompt that contains the lessons learned from the litany of quickly concocted imperfect prompts.
The beauty is that you are nearly immediately moving ahead in trying to get toward a good-enough prompt. You don’t need to sit quietly and stew over things. Come up with a reasonable yet imperfect prompt, try it out right away, and then come up with another one. Continue doing this. Once you’ve sensed that you’ve done it enough times, only then do you put on your proverbial thinking cap and aim for a viable and battle-tested good-enough prompt.
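To make the workflow tangible, here is a rough sketch in Python of the triangulation approach. The ask_model() stub, the sample prompts, and the final aggregated wording are all invented for illustration; swap in your own generative AI client and apply your own judgment when aggregating.

```python
# A sketch of the triangulation workflow: fire off several knowingly imperfect
# prompts, keep notes on what each one surfaces, then fold the lessons into a
# single good-enough prompt. The ask_model() stub is a stand-in for whatever
# generative AI you actually use.

def ask_model(prompt: str) -> str:
    """Stand-in for a real generative AI call; replace with your own client."""
    return f"[model response to: {prompt!r}]"

# Step 1: a quick, deliberately imperfect prompt set (no agonizing allowed).
imperfect_prompts = [
    "Summarize the risks in our product launch plan.",
    "What questions should I be asking about the launch timeline?",
    "List assumptions in the launch plan that could turn out to be wrong.",
]

# Step 2: try each one right away and jot down what was useful or missing.
notes = []
for p in imperfect_prompts:
    notes.append(f"Prompt: {p}\nWhat it surfaced: {ask_model(p)}\n")

# Step 3: coalesce the lessons learned into one good-enough prompt (done by
# you, the human, after reviewing the notes; the wording here is illustrative).
aggregated_prompt = (
    "Acting as a launch-readiness reviewer, summarize the key risks, shaky "
    "assumptions, and open questions in our product launch plan, organized "
    "by severity."
)

print("\n".join(notes))
print("Good-enough prompt:\n" + aggregated_prompt)
```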
Notably, the researchers reported that their aggregation strategy paid off, lending empirical weight to the idea of harnessing imperfect prompts rather than chasing a single perfect one.
If you are interested in the details of the above-mentioned research study, they have, fortunately, made their prompts and code available on GitHub. I applaud any AI research that makes its prompts and code available. By making the materials publicly available, other AI researchers can test, verify, and revalidate the results, plus they can then use the source as a jumping-off point to make further progress in the AI realm. Sadly, many AI researchers do not publicly post their materials, and ergo we don’t have a ready means to gauge the validity of the experiments, nor can we build upon the already traversed turf.
That’s a darned shame.
While I’m standing on top of this impromptu soapbox, I would also like to stridently suggest that all AI research studies ought to include some kind of impact report concerning AI Ethics and AI Law facets of their research. The thing is, much of the AI work being done is solely focused on technological factors. Few seem to be willing or thoughtful enough to also weigh in about the societal and legal dimensions of their AI endeavors.
In the instance of the research study that I’ve been discussing herein, they did provide some insightful remarks about how their AI research could be used and, regrettably, potentially misused.
Now for some analysis and further pontification.
As I’ve discussed frequently in my column, you can expect that much of prompt engineering will be augmented via the use of add-ons or built-in tools that aid otherwise manual prompting strategies. I mention this because the numerous cheat sheets and by-hand courses regrettably don’t particularly scale up, as it were. You are not going to get on the order of hundreds of millions of human users of generative AI to lift themselves into the stratosphere of prompt engineering. It just isn’t logistically or behaviorally feasible.
The development and fielding of prompting add-ons and tools enable solid prompting strategies at scale. The same can be said of this iterative approach involving imperfect prompts being rapidly used to glean a good-enough prompt. Envision that rather than you manually coming up with a series of imperfect prompts by hand, a handy-dandy tool in the generative AI will take on that task for you.
I realize that might seem a bit recursive or a dream-within-a-dream akin to the popular movie “Inception”. Here’s how this will work. You would enter a roughshod prompt, from which the added AI-powered prompting tool would generate a bunch of additional imperfect prompts. Those in turn would be automatically or semi-automatically aggregated into a good-enough prompt by the tool. This good-enough prompt then becomes the final prompt that is passed along to the generative AI to get the interaction or essay that you are aiming for.
Easy-peasy.
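Here is a hedged sketch, in Python, of what such an add-on tool might look like. The generate() function is a placeholder stub rather than any real vendor API, and the prompts it constructs are merely illustrative of the variant-then-merge idea.

```python
# Sketch of an add-on that takes one roughshod prompt, uses the model itself to
# spin out imperfect variants, and then asks the model to merge them into a
# single good-enough prompt. generate() is a placeholder, not a real API.

def generate(prompt: str) -> str:
    """Placeholder for a call to your generative AI of choice."""
    return f"[model output for: {prompt[:60]}...]"

def refine_prompt(rough_prompt: str, num_variants: int = 4) -> str:
    # Ask the model to produce several quick, imperfect rewordings.
    variants = [
        generate(f"Reword this request in a different way (variant {i + 1}): "
                 f"{rough_prompt}")
        for i in range(num_variants)
    ]
    # Ask the model to fold the variants into one consolidated prompt.
    merge_request = (
        "Combine the following rough prompts into one clear, well-scoped "
        "prompt that captures their shared intent:\n" + "\n".join(variants)
    )
    return generate(merge_request)

if __name__ == "__main__":
    print(refine_prompt("help me plan a better team offsite maybe?"))
```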
Your Step-By-Step Guide For This Special Prompting Technique
If you want to try out this technique, I’ve got some helpful pointers for you.
Here’s what I do whenever I am facing a knotty problem and I want to get generative AI to give it suitably directed focus. To clarify, you would not likely use this technique if you are doing one-offs or lollygagging along with generative AI. I don’t think the effort would be worth it, or as some might say, the juice wouldn’t be worth the squeeze.
Okay, so assume that you have a somewhat complicated problem that you are having a hard time envisioning how to best compose as a prompt for generative AI. Your mind is muddled. Where do you start? What details should you include? Would it be wisest to pose the problem as a question or instead write the prompt as a directive overtly telling the generative AI what to do?
Questions, questions, questions.
Read my lips — stop worrying.
Get on with the task.
Do the best that you can do and do so in a series of forthrightly “imperfect” prompts (ones that in your heart of hearts, you know are not what you want or need to say, but you’ve got to start somewhere). A journey of a thousand miles requires that you put one foot in front of another. Repeatedly.
My four overall major steps are laid out below.
Notice that this is a decidedly action-oriented method.
The good news is that you will likely feel better about using generative AI due to the action-inspired moving-things-along mantra of this technique. Furthermore, I am granting you explicit permission to assuredly not be perfect. Yes, that’s right, you are being handed a get-out-of-jail-free card. Whereas most of us would suffer endlessly with a burning desire to write the perfect prompt, I am telling you that this is not something you need to further grouse over.
The delightful song “Let It Go” provides solace here.
Moving on, let’s unpack those four major steps:
(1) Devise A Knowingly Imperfect Prompt Set
(2) Use The Imperfect Prompt Set To Derive An Aggregated Prompt
(3) Utilize The Aggregated Prompt And See How Things Go
(4) Rinse And Repeat, Plus Identify Prompting Lessons
I don’t have the space here to go into a full-blown example of the process, but if there is sufficient reader interest I’ll gladly do so in a later follow-up column and illuminate the subtle nuances of this notable technique.
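In the meantime, here is a bare-bones sketch of the loop in Python, purely to show the shape of the four steps. Every helper function below is an invented stub that mirrors a step; in practice you would swap in your own generative AI calls and your own judgment about what counts as good enough.

```python
# Skeletal sketch of the four-step cycle; all helpers are invented stubs.

def devise_imperfect_prompts(problem: str, lessons: list[str]) -> list[str]:
    """Step 1: jot down a few quick, knowingly imperfect prompts."""
    return [f"{problem} (rough attempt {i + 1}, informed by {len(lessons)} notes)"
            for i in range(3)]

def derive_aggregate(prompt_set: list[str]) -> str:
    """Step 2: fold the imperfect prompts into one aggregated prompt."""
    return "Combined request: " + "; ".join(prompt_set)

def try_prompt(prompt: str) -> str:
    """Step 3: send the aggregated prompt to your generative AI (stubbed here)."""
    return f"[model response to: {prompt[:50]}...]"

def good_enough(outcome: str) -> bool:
    """Your own judgment call on whether the response meets the need (stubbed)."""
    return "response" in outcome  # the stub always says yes

def run_cycle(problem: str, max_rounds: int = 3) -> str:
    lessons: list[str] = []
    aggregated = ""
    for round_num in range(max_rounds):
        prompt_set = devise_imperfect_prompts(problem, lessons)  # step 1
        aggregated = derive_aggregate(prompt_set)                # step 2
        if good_enough(try_prompt(aggregated)):                  # step 3
            break
        # Step 4: rinse and repeat, recording prompting lessons along the way.
        lessons.append(f"Round {round_num + 1}: note what fell short and why")
    return aggregated

print(run_cycle("Draft a migration plan for our billing system"))
```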
This technique ought to be one of many that you have in your arsenal of prompt engineering methods.
Each technique that you gradually become comfortable using will likely apply to particular circumstances. Do not fall into the classic trap of trying to use one technique in all situations. This is the veritable hammer and the nail. If all that you have is a hammer, the entire world seems to be filled with nails and you use the hammer even when you shouldn’t. The same goes for becoming fixated on a particular prompt engineering technique.
Learn this technique and use it when warranted.
Conclusion
I have yet another rationale for why we should potentially embrace imperfect prompts. It is a potentially mind-boggling justification. For your safety, please sit down and prepare yourself.
Are you ready?
An article from the Harvard Business Review (HBR) made a telling observation about the ability of people in general to ask questions.
The HBR article notes that people are not necessarily versed in asking questions. Some people are good at it. Probably most are not. My point is that we now have seemingly hundreds of millions of people using generative AI apps and we are nonchalantly assuming that all or most of those people know how to ask questions. They need to know how to suitably ask questions because generative AI fundamentally works on a question-and-answering basis.
Namely, you ask a question, and the generative AI is supposed to respond.
I think you see the problem at hand. If people aren’t especially versed at asking questions, and yet they have to ask questions to get answers from generative AI, things are messed up. The matter gets even worse when you take into account my earlier point that generative AI works on a probabilistic and statistical basis. You have people who aren’t especially adept at asking questions trying to use generative AI. Simultaneously, you have generative AI that is potentially going to provide generated responses that will somewhat wantonly vary each time you ask a question. I suppose, cheekily, you could say it is the blind leading the blind (an old saying likely worth retiring).
How do we get ourselves out of this vexing bind?
I have three possibilities to share with you.
First, we can try to get the whole world to be better at asking questions. Wow, that’s an impressive and amazingly worthwhile heroic ambition all told. This would especially aid people who then opt to make use of generative AI, though that’s a modest reason for pursuing such a lofty goal.
Another angle would be to accept that people are generally shaky at asking questions. As such, when using generative AI, give them a path toward turning their weakness into a strength by using lots of imperfect prompts to find their way toward a good-enough prompt. Voila, that is the solution offered via the technique described herein.
Third, we can advance AI so that generative AI can make up for the fact that humans by and large aren’t astute at asking questions. As I mentioned earlier, all kinds of AI-powered add-ons and various prompt engineering tools will undoubtedly be blended into generative AI and make life easier for humans using generative AI.
I am betting you relish that last angle as the most exhilarating of the three options. We almost always embrace using automation when doing so relieves us of laborious chores.
Sorry to say, not everyone believes that having the AI do our question-asking for us is such a good idea. They worry that people will allow their ability to ask questions to decay. Worse and worse we will go. If our question-asking capability is already skimpy, imagine what will occur if the AI does the question-asking for us going forward. We might be totally bereft of question-asking competence.
I suppose we can refer to the brilliant and all-knowing Albert Einstein and see if there’s anything he might have once uttered that could help us solve this dilemma. “Question everything,” that’s what Albert Einstein famously urged us to do. Those are wise words well worth living by, so go ahead and start asking more questions, as many as you like, and aim to perfect your question-asking skills, though be willing to settle for good enough if that cuts the mustard.
A perfectly good aspiration.
Dr. Lance B. Eliot is a world-renowned expert on Artificial Intelligence (AI) with over 6.8 million amassed views of his AI columns. As a seasoned executive and high-tech entrepreneur, he combines practical industry experience with deep academic research. Previously a professor at USC and UCLA, and head of a pioneering AI Lab, he frequently speaks at major AI industry events. Author of over 50 books, 750 articles, and 400 podcasts, he has made appearances on media outlets such as CNN and co-hosted the popular radio show Technotrends. He has been an adviser to Congress and other legislative bodies and has received numerous awards/honors. He serves on several boards, has worked as a Venture Capitalist, an angel investor, and a mentor to founder entrepreneurs and startups.