Don't Let Your LLM Hallucinate—Check Out These Prompting Rules and Methods!

From the LLM Bootcamp lectures by UC Berkeley PhD alumni, I’ve learned key rules and methods that improve prompts. This article covers techniques such as Active Prompting, Meta Prompting, and Few-Shot Prompting, all designed to make models think more systematically and produce better results.

Essential Rules for Writing Effective Prompts

Here are a few important rules that can make prompts more effective. These techniques improve the accuracy and usefulness of responses.

Use Structured Text

Language models perform better when given well-organized input. Just like humans find clear instructions easier to follow, models also work better with structured prompts.

Example: Asking for a Python Function
❌ Without Structure:
"Write a Python function that checks if a number is prime."

✅ With Structure (Pseudocode):

Define a function is_prime(n):
  If n is less than 2, return False
  For each number i from 2 to sqrt(n):
    If n is divisible by i, return False
  Return True

A structured prompt makes it easier for the model to understand and follow specific steps, reducing mistakes.
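
For reference, here is roughly the runnable Python such a structured prompt tends to produce (a minimal sketch, not the only valid implementation):

import math

def is_prime(n):
    # Follow the structured steps: reject numbers below 2, then test divisors up to sqrt(n).
    if n < 2:
        return False
    for i in range(2, math.isqrt(n) + 1):
        if n % i == 0:
            return False
    return True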

Break Down Complex Requests (Decomposition)

When asking a model to complete a big task, it’s better to split it into smaller steps. This improves accuracy and keeps responses more focused.

Example: Writing a Blog Post
❌ All-in-One Prompt:
"Write a blog post about climate change and include recent statistics."

✅ Step-by-Step Approach:

  • Gather facts: "List recent climate change statistics from 2023."
  • Plan the content: "Create an outline for a climate change blog post."
  • Write in parts: "Write an engaging introduction for a climate change blog post."
  • Expand each section separately.

Breaking down the task ensures each part is handled properly before moving to the next.
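
A minimal sketch of this step-by-step flow in code, assuming a hypothetical ask() helper that sends a single prompt to whatever model you use and returns its text:

def ask(prompt: str) -> str:
    # Hypothetical helper: wire this to your model or API of choice.
    raise NotImplementedError

def write_blog_post(topic: str) -> str:
    # Step 1: gather facts so later steps can build on them.
    facts = ask(f"List recent {topic} statistics from 2023.")
    # Step 2: plan the structure before writing anything.
    outline = ask(f"Create an outline for a {topic} blog post. Use these facts:\n{facts}")
    # Step 3: write the introduction with the outline as context.
    sections = [ask(f"Write an engaging introduction for a {topic} blog post.\nOutline:\n{outline}")]
    # Step 4: expand each remaining outline item separately.
    for item in outline.splitlines():
        if item.strip():
            sections.append(ask(f"Expand this outline item into a full section:\n{item}\nFacts:\n{facts}"))
    return "\n\n".join(sections)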


Guide the Model’s Reasoning (Chain of Thought)

Instead of asking for a direct answer, guiding the model through logical steps leads to better responses.

Example: Checking for a Prime Number
❌ Simple Prompt:
"Is 2,345 a prime number?"

✅ Step-by-Step Prompt:
"Let’s determine if 2,345 is a prime number by checking divisibility step by step."

This approach encourages the model to explain its reasoning, making the response more reliable.
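
In code, the only change is the prompt text itself; a small sketch (the nudge toward explicit steps is the whole technique):

# Direct question: the model may jump straight to an unchecked answer.
direct_prompt = "Is 2,345 a prime number?"

# Chain-of-thought version: ask the model to show each divisibility check before answering.
cot_prompt = (
    "Let's determine if 2,345 is a prime number by checking divisibility step by step. "
    "Test small divisors (2, 3, 5, ...) one at a time, show each check, "
    "and only then state the final answer."
)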

Combine Multiple Responses (Ensembling)

No model is perfect. Each has strengths and weaknesses, so combining multiple responses can lead to better accuracy.

For example, instead of relying on a single model to summarize an article, you can:

  • Ask two different models and compare results.
  • Run the same model multiple times and merge the best answers.

This method helps filter out inconsistencies and improves response quality.
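
A minimal sketch of one way to ensemble, assuming hypothetical ask_model_a() and ask_model_b() helpers for two different models:

from collections import Counter

def ask_model_a(prompt: str) -> str:
    # Hypothetical helper for the first model.
    raise NotImplementedError

def ask_model_b(prompt: str) -> str:
    # Hypothetical helper for the second model.
    raise NotImplementedError

def ensemble_answer(prompt: str, samples_per_model: int = 3) -> str:
    # Collect several answers from each model.
    answers = []
    for _ in range(samples_per_model):
        answers.append(ask_model_a(prompt).strip())
        answers.append(ask_model_b(prompt).strip())
    # Keep the answer the models agree on most often.
    return Counter(answers).most_common(1)[0][0]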

Different Types of Prompting Methods

Active Prompting

Active Prompting is a method that helps language models improve their reasoning by selecting the most useful examples for learning. Instead of relying on a fixed set of pre-written prompts, this approach identifies uncertain or ambiguous questions and refines them through human annotation.

How Active Prompting Works

The process involves four key steps:

  • Uncertainty Estimation – The model answers a set of training questions multiple times (e.g., five attempts). If the answers vary significantly, the question is marked as uncertain.
  • Selection – The most uncertain questions are chosen for human review.
  • Annotation – Humans provide step-by-step reasoning for these selected questions.
  • Inference – The newly annotated examples are added back into the system, improving the model’s accuracy for similar tasks.
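
A rough sketch of the uncertainty-estimation and selection steps, assuming a hypothetical sample_answer() helper that queries the model once per call:

from collections import Counter

def sample_answer(question: str) -> str:
    # Hypothetical helper: one model answer for the question.
    raise NotImplementedError

def uncertainty(question: str, attempts: int = 5) -> float:
    # Ask the same question several times and measure how much the answers disagree.
    answers = [sample_answer(question) for _ in range(attempts)]
    most_common_count = Counter(answers).most_common(1)[0][1]
    return 1 - most_common_count / attempts

def select_for_annotation(questions: list[str], k: int = 10) -> list[str]:
    # Pick the k most uncertain questions for human step-by-step annotation.
    return sorted(questions, key=uncertainty, reverse=True)[:k]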

Why Active Prompting Matters

By focusing on the most uncertain cases, Active Prompting ensures that the model learns from the most challenging examples. This targeted learning approach helps refine responses, making them more accurate and reliable over time.

Meta Prompting

Meta Prompting focuses on structure rather than just content. Inspired by mathematical reasoning, it provides a systematic way to connect tasks with prompts, making the model’s thinking process more flexible—closer to how humans approach problems.

Instead of giving the model specific examples, Meta Prompting teaches it general patterns and frameworks. This allows the model to adapt to different types of tasks without relying on pre-written examples.
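
As an illustration, a meta-prompt describes the shape of a good solution instead of showing worked examples; a minimal sketch:

META_PROMPT = """You are solving a problem. Follow this structure:
1. Restate the problem in your own words.
2. List what is given and what is asked.
3. Outline a general strategy before computing anything.
4. Execute the strategy step by step.
5. State the final answer on its own line, prefixed with 'Answer:'.

Problem: {problem}
"""

prompt = META_PROMPT.format(problem="Is 2,345 a prime number?")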

Few-Shot Prompting

Few-shot prompting provides the model with a few examples of a task before asking it to generate an answer.
These examples act as a guide, helping the model understand the expected format and logic. Once it has seen these examples in context, the model generalizes the approach to solve similar problems.
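
A small sketch of a few-shot prompt for sentiment labelling (the reviews and labels are illustrative, not from any dataset):

FEW_SHOT_PROMPT = """Classify the sentiment of each review as Positive or Negative.

Review: "The battery lasts all day and the screen is gorgeous."
Sentiment: Positive

Review: "It stopped working after a week and support never replied."
Sentiment: Negative

Review: "{review}"
Sentiment:"""

prompt = FEW_SHOT_PROMPT.format(review="Setup was painless and it just works.")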

Self-Consistency

Self-consistency is a prompt engineering technique that improves reasoning in step-by-step problem-solving. Instead of relying on a single reasoning path, this method explores multiple paths and selects the most consistent answer.

How It Works

  • Prompting – Guide the model using step-by-step reasoning (chain-of-thought prompting).
  • Generating Multiple Paths – Instead of choosing the most likely answer at each step, generate different reasoning paths by sampling multiple responses.
  • Selecting the Most Consistent Answer – Compare all generated answers and choose the one that appears most frequently.

Consider this example of solving a simple math problem:
Prompt:
A farmer has 10 apples. He gives 3 to his friend and then buys 5 more. How many apples does he have now?

Initial Output:
3 (Incorrect)

Applying self-consistency, we generate multiple reasoning paths:

Output 1:

The farmer starts with 10 apples.
He gives away 3, leaving him with 7.
He buys 5 more, so 7 + 5 = 12.

Output 2:

The farmer had 10 apples.
He gave 3 to a friend, leaving 7.
Then he bought 5 more, so 7 + 5 = 12.

Since both reasoning paths lead to 12, we confirm this as the correct answer.

Self-consistency mimics human problem-solving by allowing models to explore different solutions and choose the most reliable one.
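
A minimal sketch of self-consistency in code, assuming hypothetical sample_reasoning() and extract_final_answer() helpers (one sampled chain-of-thought completion per call, and a parser for its final line):

from collections import Counter

def sample_reasoning(prompt: str) -> str:
    # Hypothetical helper: one sampled chain-of-thought completion.
    raise NotImplementedError

def extract_final_answer(completion: str) -> str:
    # Hypothetical helper: pull the final answer out of a reasoning trace.
    raise NotImplementedError

def self_consistent_answer(prompt: str, paths: int = 5) -> str:
    # Sample several independent reasoning paths for the same question.
    answers = [extract_final_answer(sample_reasoning(prompt)) for _ in range(paths)]
    # Return the answer the paths agree on most often.
    return Counter(answers).most_common(1)[0][0]

question = ("A farmer has 10 apples. He gives 3 to his friend and then buys 5 more. "
            "How many apples does he have now?")
# self_consistent_answer(question) is expected to converge on 12.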

Effective prompt engineering is key to improving the accuracy and reliability of language model responses. Techniques like structured prompts, decomposition, active prompting, meta prompting, few-shot prompting, and self-consistency help models reason better and produce clearer answers.

LiveAPI: Super-Convenient API Docs That Always Stay Up-To-Date

Many internal services lack documentation, or the docs drift from the code. Even with effort, customer-facing API docs can be hard to use. With LiveAPI, connect your Git repository to automatically generate interactive API docs with clear descriptions, references, and "try it" editors.