If there were one technique I could recommend to people, it would be few-shot prompting, which is just giving the AI examples of what you want it to do.
Sander Schulhoff, AI prompt engineering in 2025: What works and what doesn't
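Here is a minimal sketch of few-shot prompting in that spirit. The `call_llm` helper, the sentiment-classification task, and the example reviews are all illustrative assumptions, not anything from the interview; swap in whatever model API and task you actually use.

```python
# Few-shot prompting: show the model worked examples of the task
# before giving it the new input you want handled.

def call_llm(prompt: str) -> str:
    """Hypothetical stand-in for whatever model API you use."""
    raise NotImplementedError("Replace with your model API call")

FEW_SHOT_PROMPT = """Classify the sentiment of each review as positive or negative.

Review: "The battery lasts all day and the screen is gorgeous."
Sentiment: positive

Review: "It broke after a week and support never replied."
Sentiment: negative

Review: "{review}"
Sentiment:"""

def classify_review(review: str) -> str:
    # The model continues the pattern set by the examples above.
    return call_llm(FEW_SHOT_PROMPT.format(review=review)).strip()
```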
The core idea is that there's some task in your prompt that you want the model to do, and instead of asking for the answer right away, you say: don't answer this yet. Before answering it, tell me what subproblems would need to be solved first.
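A rough sketch of that decomposition pattern, reusing the hypothetical `call_llm` helper from the earlier example; the exact prompt wording is an assumption, not a quoted recipe.

```python
# Decomposition: first ask for the subproblems, then ask for the
# final answer with those subproblems in hand.
# call_llm is the hypothetical helper defined in the earlier sketch.

def decompose_then_answer(task: str) -> str:
    subproblems = call_llm(
        f"{task}\n\n"
        "Don't answer this yet. Before answering it, list the "
        "subproblems that would need to be solved first."
    )
    return call_llm(
        f"{task}\n\n"
        f"Here are the subproblems to work through first:\n{subproblems}\n\n"
        "Solve each subproblem, then give the final answer."
    )
```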
You ask the LLM to solve some problem. It does it, great, and then you say, 'Hey, can you go and check your response?' It outputs a critique, and then you get it to improve its answer based on that critique.
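One way that answer-critique-revise loop might look in code, again using the hypothetical `call_llm` helper and illustrative prompt wording.

```python
# Self-criticism: get a draft answer, ask the model to critique it,
# then ask it to revise the answer using its own critique.
# call_llm is the hypothetical helper defined in the earlier sketch.

def answer_with_self_criticism(task: str) -> str:
    draft = call_llm(task)
    critique = call_llm(
        f"Task: {task}\n\nDraft answer:\n{draft}\n\n"
        "Go and check this response. What is wrong or missing?"
    )
    return call_llm(
        f"Task: {task}\n\nDraft answer:\n{draft}\n\n"
        f"Critique:\n{critique}\n\n"
        "Rewrite the answer, fixing the issues raised in the critique."
    )
```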
You want to give it as much information about the task as possible; including a lot of general background about your task is often very helpful.
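A small sketch of packing that extra context into the prompt rather than asking the question in isolation; the function name and prompt layout are assumptions for illustration.

```python
# Additional context: bundle relevant background about the task
# into the prompt alongside the request itself.

def build_context_prompt(task: str, context_snippets: list[str]) -> str:
    context = "\n\n".join(context_snippets)
    return (
        "Use the background information below to complete the task.\n\n"
        f"Background:\n{context}\n\n"
        f"Task: {task}"
    )
```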