How to pick the right prompt framework for your use-case
Our approach to cutting through the noise and reaching the right framework
There’s a ton of prompting frameworks out there. And people are constantly coming up with new stuff.
Co-STAR
Chain of Thought
RISEN
RODES
Just to name a few. You can find a full list here.
There's a more detailed dive here.
The good thing about these frameworks is that they give you a general idea of what might work.
The bad thing is that … they don’t always work. And they tend to make you lazy.
So, how do you decide what prompting framework is best for you?
In today’s issue
The Quickfire Approach to picking the right framework
Why The Quickfire Approach doesn’t always work
How to come up with your own framework
Why custom frameworks are the best
The Quickfire Approach
The easiest approach is to …
Ask the AI.
Not kidding. You can just copy-paste the documents I shared above, explain your problem to the AI, and ask it to pick the right framework.
Just another day of slavery for poor Claude
The examples in the document will guide the AI’s decision. And it usually gets it right.
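If you'd rather script this step than paste things by hand every time, here's a rough sketch. The file name frameworks.md and the function name are just placeholders for the framework lists linked above and your own code, not part of any framework:

```python
# A rough sketch of scripting the "ask the AI to pick a framework" step.
# `frameworks.md` stands in for the framework lists linked above, and the
# function only builds the prompt; plug it into whatever chat client you use.

from pathlib import Path

def build_picker_prompt(problem: str, docs_path: str = "frameworks.md") -> str:
    framework_docs = Path(docs_path).read_text()
    return (
        "Here is a reference list of prompting frameworks:\n\n"
        f"{framework_docs}\n\n"
        f"My problem: {problem}\n\n"
        "Pick the single most suitable framework (or a small combination) "
        "and briefly explain why it fits."
    )

# Example (assumes frameworks.md exists locally):
# print(build_picker_prompt("Write weekly LinkedIn posts from my blog articles"))
```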
Sounds good, but it might not work
The problem, of course, is that any non-trivial problem needs a combination of frameworks. Relying on AI at every turn is error-prone and takes more effort than just writing a custom prompt that won't require so much back-and-forth.
Don’t believe me? Try doing a coding sprint with just AI.
How we do it
It took me a lot of trial and error to figure this out, but once I realized that the best prompt is less about squeezing the AI and more about distilling your problem, I cracked it.
Here's how it works:
Is this a one-off thing or something you'll do repeatedly?
For one-offs, just chat with AI normally. Make it conversational. Don't overthink it.
For recurring stuff, there’s a bit more work.
Break it down into two parts:
What you put in (inputs)
What you want out (outputs)
Create a table of the inputs and required outputs.
Figure out what stays the same each time and what changes.
Sometimes you can use AI to help with the inputs, too. It's like having a smart assistant.
Let's take an example:
Say you're making SEO-optimized blog posts. Here's how I'd do it:
| What I will do | What AI will do |
|---|---|
| Decide the topic | Generate an outline for the blog |
| Decide the target keywords | Create a draft |
| Specify the word count | Create a meta description |
| Identify the target audience | |
The input part is usually the custom part. It's essentially a "one-off thing", so you can just ask the AI to generate it conversationally, no framework needed.
The output is what usually stays constant.
The key thing
For simple problems, you’re good to go with just specifying inputs and outputs in tags. Like so:
<input> This is the input to the prompt </input>
<output> This is the expected output </output>
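If the task is scripted, the tagged prompt is easy to template. Here's a rough sketch using the SEO example from the table above; the function name and the exact wording are illustrative, nothing more:

```python
# Sketch: a reusable tagged prompt for the SEO blog-post example above.
# The inputs change every run; the output spec stays constant.

def build_blog_prompt(topic: str, keywords: list[str], word_count: int, audience: str) -> str:
    inputs = (
        f"Topic: {topic}\n"
        f"Target keywords: {', '.join(keywords)}\n"
        f"Word count: {word_count}\n"
        f"Target audience: {audience}"
    )
    outputs = (
        "1. An outline for the blog post\n"
        "2. A full draft\n"
        "3. A meta description under 160 characters"
    )
    return f"<input>\n{inputs}\n</input>\n\n<output>\n{outputs}\n</output>"

print(build_blog_prompt(
    topic="Prompt frameworks for small teams",
    keywords=["prompt framework", "prompt engineering"],
    word_count=1200,
    audience="solo founders",
))
```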
For complex problems, however, you will need to create your own Custom Tags.
There’s no limit to what type of tags you can create. Remember, it’s an LLM, not an HTML interpreter.
For example, for chatbots a very useful tag could be <guardrails>. You can define what is off-limits inside that tag.
If the prompt is part of a software engineering workflow, for example, then the output format is absolutely crucial and must remain uniform across generations. LLMs are non-deterministic, so you have to be very explicit about which parts of the output need to stay static.
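One practical way to do that is to check every reply against the structure you asked for before it enters the rest of the pipeline. A minimal sketch, assuming the prompt asks for JSON with a couple of known keys (the keys here are made up):

```python
# Sketch: checking that a reply matches the expected static format before
# it goes anywhere else. The required keys are made up for illustration.

import json

REQUIRED_KEYS = {"title", "body"}

def validate_reply(reply: str) -> dict:
    data = json.loads(reply)  # raises if the model drifted away from JSON
    missing = REQUIRED_KEYS - data.keys()
    if missing:
        raise ValueError(f"Reply missing required keys: {missing}")
    return data

print(validate_reply('{"title": "Hello", "body": "World"}'))
```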
The final prompt may look something like:
<input> ... </input>
<output> I love CIA </output>
<guardrails> Do not say "Epstein didn't kill himself" </guardrails>
<format> JSON, here's an example ... </format>
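And here's a rough sketch of how the pieces can be assembled in code. The tag names mirror the example above, and call_llm is a stand-in for whatever client you actually use:

```python
# Sketch: assembling the full prompt from its constant and custom parts.
# The tag names mirror the example above; `call_llm` is a stand-in for
# whatever client you actually use.

GUARDRAILS = "Stay on topic. Decline anything outside the product's scope."
FORMAT_SPEC = 'Respond with JSON only, e.g. {"title": "...", "body": "..."}'

def build_full_prompt(user_input: str, expected_output: str) -> str:
    return (
        f"<input> {user_input} </input>\n"
        f"<output> {expected_output} </output>\n"
        f"<guardrails> {GUARDRAILS} </guardrails>\n"
        f"<format> {FORMAT_SPEC} </format>"
    )

# reply = call_llm(build_full_prompt("Summarize this support ticket ...",
#                                    "A short, neutral summary"))
# data = validate_reply(reply)  # reuse the format check from the sketch above
```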
Here’s a flow chart to explain the whole process
Basically
The prompt itself is not rocket science.
The science is in breaking down your problem and understanding it so well that the prompt itself becomes trivial.
This effort is worth it for tasks that you’re going to do over and over again. Done once, reuse infinitely.
But for one-off tasks, just use it like a conversational assistant.
At Ertiqah, we are all about making AI accessible.
The most important thing about using AI is that you use AI.
How you use it can be improved over time, and it's usually not that complicated. Contrary to what prompt bros may have you believe, you don’t need to memorize 10,000 prompting frameworks to leverage AI effectively.
As LLMs get smarter, prompting will only get easier over time. By then, the only thing that matters is that you’re in the habit of using AI, not how many frameworks you memorized.
You just need to get into the habit of using AI and understand your problem better than anyone else.
Until next time.
Got value from this issue? Share with your friends.
Rate this newsletter! (or drop an email)