Do you still need to be good at prompt engineering?

Yes, and no...

Hi again,

I will (try to) keep this one brief. In the past, I have shared resources to get started with prompt engineering (will add the links to the bottom). That’s basically why I started this newsletter initially: to teach prompting.

But things have changed since. Big time.


When Prompts Were King

Back when LLMs were new (and kinda dumb), we had to treat them like finicky toddlers. You know the drill:

  • Use specific tags

  • "Code" the LLM with weird incentives ("I'll give you $20 if you get this right!")

  • Follow strict formats and terminology

It was like trying to speak a secret language just to get a half-decent response. Exhausting, right?

But now, context is everything.

LLMs have grown up. They've gotten smarter. And you know what that means? All that prompt engineering wizardry? It's becoming about as useful as a screen door on a submarine.

We'll likely never see "Prompt Engineer" stick around as an actual role on job posts. Sad life.

And the importance of prompt engineering as a skill will only trend downward.

Today, it’s all about context. The more context you can give an LLM about a situation, the better its output is gonna be. Simple as that.

Example: OpenAI's o1

Let's take OpenAI's o1 model as an example. This bad boy already does chain-of-thought reasoning behind the scenes. You don't need to hold its hand anymore. Give it a loose prompt, and it'll figure out what you're after.

Even Claude (which doesn't do that built-in chain-of-thought step) gets you just fine, as long as you give it the full picture. And you won't see any material difference from refining the prompt further, either.

But what about AI Startups?

Great question.

The best AI startups set up the workflow (or job-to-be-done, if you're a Product Manager) in a way that makes communicating context easy.

Ok, that was a mouthful. What I mean to say is:

Using a good AI product will feel like it just gets you. Because it actually does. It's designed with the context in mind. For example, LiGo (ok, I might be biased, but we're getting good reviews) starts with the context: your content theme. And it stays mindful of that context throughout the experience.

How big of a moat is this? Well, big enough to make a solid business. Because at the end of the day, no matter how smart the vanilla ChatGPT or Claude models get, the reason they're not your go-to for daily tasks is this:

Every session refreshes the context, and you have to provide it all over again by hand.
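To make that concrete: a product can sidestep the blank-slate problem simply by persisting context between sessions. A minimal sketch (the file name and helper functions are mine, invented for illustration, not any real product's code):

```python
import json
from pathlib import Path

CONTEXT_FILE = Path("context.json")  # hypothetical local store for saved context

def load_context() -> list:
    """Reload saved messages so a new session doesn't start from zero."""
    if CONTEXT_FILE.exists():
        return json.loads(CONTEXT_FILE.read_text())
    return []

def save_context(messages: list) -> None:
    """Persist the running conversation for the next session to pick up."""
    CONTEXT_FILE.write_text(json.dumps(messages))

# Each session: reload what the product already knows, add the new request, save.
history = load_context()
history.append({"role": "user", "content": "Write today's LinkedIn post."})
save_context(history)
```

Vanilla chat UIs make *you* do this step by hand every morning; a purpose-built product does it for you.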

But Wait, Don't Throw Out Your Prompt Engineering Playbook Just Yet

I know I've written a ton about prompt engineering in the past. If you're new to the AI game or just want to dive deeper, you can read them here:

If you’ve never dabbled with prompt engineering before, I highly recommend giving them a read. This will help you understand how LLMs ‘think’ — which will transform the way you use AI, whether prompt engineering remains relevant or not.

Learning prompt engineering:

  1. Teaches you how LLMs process information

  2. Helps you troubleshoot when you're not getting the results you want

  3. Gives you a fallback if simpler methods aren't cutting it

So yeah, while you don't need to obsess over prompts like before, that knowledge is still solid gold.

The Bottom Line

Look, I'm not saying prompt engineering is completely useless. There are still edge cases where it matters. But for 90% of what we do? It's overkill.

The future is about having natural conversations with AI. Giving it the full picture. Treating it less like a finicky algorithm and more like a really smart (but sometimes quirky) colleague.

So, next time you're tempted to spend an hour crafting the "perfect" prompt, try this instead:

  1. Explain the situation clearly

  2. Provide relevant background info

  3. State what you need

That's it. No fancy tricks required.

Speaking of tools 

We've got this voice-to-text Chrome extension we use internally. Works with most major AI models, free for 30 minutes of transcription. We're keeping it on the down-low for now (startup life, you know? We’re broke), so consider this your exclusive invite to try it out. If you dig it, a review on the Chrome store would be awesome. Just … don’t share the link with anyone please. If this goes viral, we’ll go broke 😂 

The Big Picture

We're entering a new era of AI interaction. One where the barriers between human and machine communication are breaking down. It's not about speaking the AI's language anymore – it's about the AI learning to speak ours.

And you know what? That's pretty damn exciting.

Until next time,

Junaid

Rate this newsletter! (or drop an email)

How did you like it?
