Want to get the most out of your AI tools? Get your prompt engineering in place
Generative AI has exploded into the public consciousness—and businesses of every kind have begun to take note. So have we. In the past few months alone, we’ve explored how ChatGPT can help craft annual reviews, develop proposals, collect user feedback for healthcare organizations, and simplify clinical trial data for a major pharmaceutical company, among other use cases.
It’s clear that generative AI is a powerful tool that can boost productivity. Yet what’s also clear is that its outputs are only as good as your inputs.
That’s where prompt engineering comes in.
Prompt engineering is the ability to create questions and processes—either through text or code—that improve the output of generative AI.
For instance, asking for ChatGPT’s help in writing a resource guide on prompt engineering isn’t as simple as saying, “Write me a how-to guide on prompt engineering.” That prompt could return a morass of unstructured text, with no guardrails around audience or style.
Instead, a good prompt engineer would incorporate information about context, tone, voice, audience, and examples while also offering specific direction and formatting guidance. It might look something like this:
"You are a management consultant with years of experience with generative AI. Write a 600-word article listing out the top five best practices for effective prompt engineering of ChatGPT, aimed at business leaders. Use short, declarative sentences, descriptive subheads, and bullet points to support each of the five points—as well as examples of good and bad prompts for each of the five best practices."
Part coder, part psychologist, part writer, a good prompt engineer knows how to get the most out of generative AI tools. In what follows, we break down what makes a good prompt so that you can, too.
One of the best ways to get high-quality results from generative AI is to begin by creating a persona you want the AI to mimic and giving it a clear aim. This provides initial context related to tone, voice, perspective, and purpose. For example, if you’re looking to generate ideas for a food blog, you might start your interaction with:
"You are a food blogger with a history of writing about Italian recipes. Today we are looking to generate ideas for a new content series."
Each layer of specificity will help improve the quality of results, so whereas “food blogger” might generate topics on new ramen restaurants or tapas recipes, “food blogger who covers Italian cuisine” will generate more effective, tailored results.
Bonus tip: If you want the AI to structure its responses in a specific format, you might consider providing examples of the types of content the blog has done previously, such as "the history of pasta making" or "the best canned tomato varieties." You could even ask it for five pitches in bullet points or a list of ten search-optimized article titles.
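The layering described above (persona, then task, then examples and formatting guidance) can be sketched as a simple prompt builder. This is a minimal illustration; the function and parameter names are our own, not part of any particular AI tool:

```python
def build_prompt(persona, task, examples=None, format_hints=None):
    """Assemble a layered prompt: persona first, then the task,
    then optional examples of past content and formatting guidance."""
    parts = [f"You are {persona}.", task]
    if examples:
        parts.append("Here are examples of past content: " + "; ".join(examples) + ".")
    if format_hints:
        parts.append(format_hints)
    return " ".join(parts)

# Reusing the food-blogger example from above:
prompt = build_prompt(
    persona="a food blogger with a history of writing about Italian recipes",
    task="Generate ideas for a new content series.",
    examples=["the history of pasta making", "the best canned tomato varieties"],
    format_hints="Return five pitches as bullet points.",
)
```

Each added layer (persona, examples, format) narrows the model toward the tailored results described above.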
Prompts that break up a larger project into smaller tasks are likely to yield better results. A prompt to “write a science fiction novel” is going to produce an output that is far less predictable—and less likely to be helpful—than a series of prompts that break the process up.
For instance, you might start by asking instead for a one-paragraph premise, then a chapter-by-chapter outline, then a draft of a single chapter at a time.
Similarly, feeding data to the generative AI in smaller sections will promote better interpretations. If ChatGPT is asked to summarize a three-hour meeting that covered five different topics using the meeting transcript, it will produce a better product if it is asked to summarize each thirty-minute block separately; a human can then combine and edit those outputs into a comprehensive summary.
Bonus tip: Remember to use regular speech. Generative AI tools, especially user-friendly platforms like ChatGPT, are designed to understand prompts and context in natural language, so provide instructions the way you would to a person rather than a computer. Avoid symbols or mathematical representations like "X" or "Y" in your prompts for best results.
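The chunk-then-summarize approach for a long transcript can be sketched in a few lines. This is an illustrative sketch, not a complete pipeline; the word limit is an assumption, and each prompt would be sent to whatever model you use:

```python
def chunk_transcript(transcript, max_words=500):
    """Split a long transcript into word-limited chunks so each fits
    comfortably into a single summarization prompt."""
    words = transcript.split()
    return [" ".join(words[i:i + max_words]) for i in range(0, len(words), max_words)]

def summarization_prompts(chunks):
    # One focused prompt per chunk; a human combines and edits
    # the resulting summaries into one comprehensive document.
    return [
        f"Summarize the key decisions and action items in this meeting segment:\n{chunk}"
        for chunk in chunks
    ]
```

In practice you might split on timestamps or topic changes rather than word counts, but the principle is the same: smaller inputs, more predictable outputs.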
Even after following these instructions perfectly, your prompt may produce an incomprehensible, incorrect, or just plain strange output. When that happens, experiment with new ways to structure the prompt or input the data.
For example, a prompt that returns an uninspired first paragraph for an article could be restructured to emphasize creativity and a casual tone as key elements of the task. Sometimes a key word that changes the meaning or provides more context is simply missing from the prompt: an image prompt for “salmon swimming upstream” might work better as “salmon fish swimming upstream in a river.”
Even the “wrong” results can provide useful information. They might reveal that you’re missing a key step in a task, or that your instructions lack clarity and would benefit from also explaining what not to do.
If the AI still fails to return the desired result after extensive experimentation, it may mean that leaving that task to humans is ultimately more productive (for now). Just like any technology, generative AI also has limitations—especially around accuracy and arithmetic—so there’s a chance your prompt isn’t the right kind of task for this tool.
Bonus tip: Generative AI applications are "stochastic" by nature, meaning they can produce different results each time, so you can keep trying until you get an output you like.
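That keep-trying loop can be sketched as follows. Here `generate` is a stand-in for a real model call (simulated with a random choice for illustration), and the acceptance check is whatever quality test fits your task:

```python
import random

def generate(prompt):
    # Stand-in for a real model call; the stochastic nature of
    # generative AI is simulated here with a random choice.
    return random.choice(["A bland opening line.", "A vivid, surprising opening line!"])

def generate_until(prompt, is_acceptable, max_attempts=5):
    """Because outputs vary from run to run, retry the same prompt
    until one passes a simple quality check (or attempts run out)."""
    for _ in range(max_attempts):
        draft = generate(prompt)
        if is_acceptable(draft):
            return draft
    return None  # Signal that a human should take over.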
One of the most important best practices is to (eventually) step away from the generative AI and make the output your own.
Of course, AI output always needs a human fact-check. But it’s also important to remember that while generative AI is a powerful tool, it ultimately replicates a consensus of what has come before. It can produce a helpful first draft, but it can’t replace a human when it comes to nuance, personality, or insight (at least, not yet). To become an interesting and engaging final product, a draft needs more: the quotes, anecdotes, colloquial phrases, humor, variation, and voice that make it uniquely yours.
Bonus tip: Don't worry about keeping the prompt brief. It can include the expected output format, details on what to include or exclude, key assumptions, the persona to adopt, and the level of complexity or brevity expected in the output.
This article only briefly introduces the concept of prompt engineering and its potential applications. The possibilities are endless, and we’ll explore them in more depth in the pieces that follow, including how to incorporate AI into your workflow and advanced applications for next-generation AI tools.
This is the first article in a three-part series on unlocking the power of generative AI.