Introduction

Large Language Models (LLMs) are powerful but can be surprisingly finicky. Tuning their prompts or instructions to reach the “production-level accuracy” you need can feel like an endless cycle of trial and error. That’s where metaprompting comes in.

Metaprompting is the practice of using a more capable model to iteratively improve the prompts or instructions fed to a less capable model. By building a structured feedback loop (Generate → Evaluate → Improve → Repeat), we can systematically refine prompts until performance stabilizes or meets our project criteria.

In this article, we’ll explore the key ideas behind metaprompting, outline the steps of the feedback loop, and highlight best practices to avoid pitfalls like overfitting or token overruns. By the end, you’ll have a clearer understanding of how to structure an iterative process to drastically improve LLM outcomes—whether you’re building Q&A systems, chatbots, or any task that needs repeated refinement.


What Is Metaprompting?

Metaprompting involves two main models:

  1. A “Target Model” – The LLM that performs the actual task (e.g., GPT-3.5, GPT-4, or a smaller GPT variant).
  2. A “Meta Model” (e.g., o1) – A more capable (or differently specialized) system that reviews the instructions given to the Target Model and iterates on them.
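
To make the division of labor concrete, here is a minimal sketch of the two roles using the OpenAI Python SDK. The model names, the function names, and the wording of the meta-prompt are illustrative assumptions, not fixed choices:

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def run_target(routine: str, user_input: str) -> str:
    """Target Model: performs the actual task using the current routine."""
    resp = client.chat.completions.create(
        model="gpt-3.5-turbo",  # illustrative Target Model choice
        messages=[
            {"role": "system", "content": routine},
            {"role": "user", "content": user_input},
        ],
    )
    return resp.choices[0].message.content

def run_meta(routine: str, eval_report: str) -> str:
    """Meta Model: reviews the routine plus eval results and rewrites it."""
    resp = client.chat.completions.create(
        model="gpt-4",  # illustrative Meta Model choice
        messages=[{"role": "user", "content": (
            "Below is a routine used by a smaller model, followed by an "
            "evaluation report of its performance. Rewrite the routine to "
            "fix the observed failures. Return only the revised routine."
            f"\n\nROUTINE:\n{routine}\n\nEVALUATION:\n{eval_report}"
        )}],
    )
    return resp.choices[0].message.content
```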

The basic workflow:

  1. Initial Prompt/Routine – You start with a draft set of instructions, known as a “routine.”
  2. Evaluate – You run that routine on a test set (eval data) to measure performance.
  3. Refine – You feed the routine plus the evaluation results back to the Meta Model. The Meta Model suggests edits, clarifications, or structural changes.
  4. Re-run – You take the updated routine, test it again, and compare results.

This loop repeats until you’re satisfied with performance or run out of time/budget.
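
Putting the four steps together, a minimal version of the loop might look like the sketch below (it reuses `run_target` and `run_meta` from earlier). The toy eval set, the naive substring check, and the stopping thresholds are placeholder assumptions you would replace with your own eval data and criteria:

```python
eval_cases = [  # toy eval set; in practice, load your labeled test data
    {"input": "What is your return window?", "expected": "30 days"},
    {"input": "Do you ship internationally?", "expected": "yes"},
]

def evaluate(routine: str, cases: list[dict]) -> tuple[float, str]:
    """Step 2: run the routine on every eval case and summarize failures."""
    failures = []
    for case in cases:
        answer = run_target(routine, case["input"])
        if case["expected"].lower() not in answer.lower():  # naive string check
            failures.append(f"input: {case['input']!r} -> got: {answer!r}")
    score = 1 - len(failures) / len(cases)
    return score, "\n".join(failures) or "No failures."

routine = open("initial_routine.txt").read()       # Step 1: draft routine
for iteration in range(5):                         # hard cap: our time/budget limit
    score, report = evaluate(routine, eval_cases)  # Step 2: evaluate
    print(f"iteration {iteration}: score = {score:.0%}")
    if score >= 0.95:                              # or stop once performance is acceptable
        break
    routine = run_meta(routine, report)            # Step 3: refine
    # Step 4: the next loop iteration re-runs the updated routine
```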


Step 1: Generate an Initial Routine

Most projects begin with an LLM prompt adapted from human-readable text (e.g., documentation, guidelines). These human-facing materials are rarely structured in a way an LLM can easily follow, so the first step is transforming them into a more LLM-friendly format, which we’ll call a “routine.”
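
One way to bootstrap this conversion is to ask the Meta Model itself to rewrite the documentation as a routine. The prompt wording below is a sketch under that assumption, not a canonical recipe:

```python
CONVERSION_PROMPT = """You will be given human-facing documentation.
Convert it into a step-by-step routine that a smaller language model can
follow: numbered steps, explicit conditions, and concrete output formats.
Do not add policies that are not in the source material.

DOCUMENTATION:
{doc}
"""

def draft_routine(doc: str) -> str:
    """Use the Meta Model to turn raw documentation into an initial routine."""
    resp = client.chat.completions.create(
        model="gpt-4",  # illustrative Meta Model choice
        messages=[{"role": "user", "content": CONVERSION_PROMPT.format(doc=doc)}],
    )
    return resp.choices[0].message.content
```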

  1. Set an Objective: Clarify what the LLM should do.