
How to write effective prompts for Excel Agents

Effective prompt engineering is a prerequisite for reliable, repeatable outcomes when working with AI Agents.

Across the industry, prompt design is increasingly treated not as creative writing, but as an engineering discipline: clear intent, structured instructions, and explicit verification logic materially improve agent reliability. This document consolidates industry-wide prompt engineering best practices and interprets them in the context of Excel Agents.

What are Excel Agents?

DataSnipper Excel Agents move beyond automation to bring intelligence directly into Excel. Instead of working through manual steps, you simply describe what you want to achieve and the Agent takes care of the execution. From reading source documents and reconciling data to applying audit and finance logic, Excel Agents execute complete tests end to end. The result is transparent, explainable output that is fully traceable back to the underlying evidence and ready for review.

Step-by-step tutorial

Step 1: Start with a clear goal and success definition

Best practice

Clearly state what you want the agent to achieve and what a “good” outcome looks like.

Why this matters

State the objective in operational terms rather than as abstract analysis. Describe the artifact you expect to exist in the workbook once the agent is finished (e.g. a table, reconciliation, or set of Snips), not just the analysis you want performed.

Example

“Create a reconciliation table that ties the AP subledger total to the GL AP control account for October–December. Include columns for Subledger Total, GL Total, Variance, Variance %, and a short explanation of likely drivers. Flag any variance above €500.”
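The tie-out this prompt asks for boils down to a small, checkable computation. A minimal pandas sketch of that logic (the figures, month labels, and column names are illustrative assumptions; the Agent produces this table inside Excel, not via code):

```python
# Illustrative sketch of the reconciliation the prompt describes.
# The amounts are made up; the EUR 500 threshold mirrors the example.
import pandas as pd

recon = pd.DataFrame({
    "Month": ["October", "November", "December"],
    "Subledger Total": [120_450.00, 98_300.00, 143_900.00],
    "GL Total": [120_450.00, 97_650.00, 143_900.00],
})

recon["Variance"] = recon["Subledger Total"] - recon["GL Total"]
recon["Variance %"] = (recon["Variance"] / recon["GL Total"] * 100).round(2)
recon["Flag"] = recon["Variance"].abs() > 500  # flag variances above EUR 500
```

Spelling out the columns and the threshold, as the prompt does, is what makes the expected artifact unambiguous.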

Step 2: Explain why instructions exist, not just what to do

Best practice

When giving rules or constraints, briefly explain their purpose.

Why this matters

Explaining purpose reduces brittle rule-following and helps the agent reason correctly when it encounters edge cases. It also reinforces intent and improves adherence without increasing prompt complexity.

Example

“Use only posted transactions dated within the test period (based on Posting Date), because our procedure tests the completeness and accuracy of the final ledger population for that period. Exclude drafts/unposted items so the population reflects what’s actually recorded in the financial statements.”
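The population rule in this example is a concrete filter: posted status and a Posting Date inside the test period. A pandas sketch of that filter under assumed column names ("Status", "Posting Date" are illustrative, not a fixed schema):

```python
# Sketch of the population filter the rule describes: keep only posted
# entries whose Posting Date falls inside the test period.
import pandas as pd

ledger = pd.DataFrame({
    "Doc": ["A1", "A2", "A3", "A4"],
    "Status": ["Posted", "Draft", "Posted", "Posted"],
    "Posting Date": pd.to_datetime(
        ["2024-10-05", "2024-11-02", "2024-12-20", "2025-01-03"]),
})

start, end = pd.Timestamp("2024-10-01"), pd.Timestamp("2024-12-31")
population = ledger[
    (ledger["Status"] == "Posted")
    & ledger["Posting Date"].between(start, end)
]
# A2 drops out as unposted; A4 drops out as outside the period.
```

Because the prompt also explains *why* (testing the final ledger population), the agent can resolve edge cases, such as entries posted on the period boundary, in line with that intent.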

Step 3: Structure prompts explicitly

Best practice

Use a clear, repeatable structure for prompts.

Why this matters

Well-structured prompts reduce cognitive load for the agent and make instructions easier to follow. This is a foundational principle in prompt engineering across models and platforms.

Recommended structure for Excel Agents

  • Context: What data is being used and why
  • Instructions: What actions to take, in what order
  • Output expectations: What should be created or returned

Clear separation prevents the agent from confusing background information with executable instructions.

Example

Context:

“Sheet ‘GL_Detail’ contains transactions; ‘Trial_Balance’ contains period totals.”

Instructions:

“1) Filter GL_Detail to period and account range. 2) Summarize by account code. 3) Compare to Trial_Balance. 4) Identify mismatches.”

Output expectations:

“Create a new sheet ‘Tie-Out’ with summary table + variance flags + a short note on exceptions.”
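The four numbered instructions map onto a simple filter → summarize → compare → flag pipeline. A pandas sketch with assumed data (the Agent operates on the sheets directly; this only illustrates the logic the instructions encode):

```python
# Sketch of the four instructions: filter, summarize by account,
# compare to the trial balance, identify mismatches. Data is made up.
import pandas as pd

gl_detail = pd.DataFrame({
    "Period": ["Q4", "Q4", "Q4", "Q1"],
    "Account": ["4000", "4000", "5000", "6000"],
    "Amount": [100.0, 150.0, 90.0, 30.0],
})
trial_balance = pd.DataFrame({
    "Account": ["4000", "5000"],
    "Total": [250.0, 80.0],
})

in_scope = gl_detail[gl_detail["Period"] == "Q4"]                        # 1) filter
summary = in_scope.groupby("Account", as_index=False)["Amount"].sum()    # 2) summarize
tie_out = summary.merge(trial_balance, on="Account")                     # 3) compare
tie_out["Variance"] = tie_out["Amount"] - tie_out["Total"]
mismatches = tie_out[tie_out["Variance"] != 0]                           # 4) identify
```

Keeping Context, Instructions, and Output expectations separate is what lets each instruction translate into one discrete, reviewable operation like this.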

Step 4: Be specific about outputs and formatting

Best practice

Explicitly define the expected output format.

Why this matters

Specify whether the output should be:

  • a table (and which columns it should contain)
  • Snips linked to source documents
  • formulas, totals, or checks
  • specific number formats or precision

This removes ambiguity and reduces rework.

Example

“Output a table in a new sheet called ‘Exception_Log’ with columns: Document ID, Vendor, Invoice Date, Amount, GL Account, Matched GL Amount, Variance, Evidence Snip Link, Reviewer Note. Format currency as EUR with 2 decimals and add a totals row.”
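The layout this prompt pins down can be previewed as a small table: the named columns, a totals row, and EUR amounts with two decimals. A hedged sketch (the two rows are placeholders; Snip links and reviewer notes are left blank here):

```python
# Sketch of the 'Exception_Log' layout the prompt specifies.
import pandas as pd

cols = ["Document ID", "Vendor", "Invoice Date", "Amount", "GL Account",
        "Matched GL Amount", "Variance", "Evidence Snip Link", "Reviewer Note"]
log = pd.DataFrame([
    ["INV-001", "Acme", "2024-10-03", 1200.50, "2100", 1200.50, 0.00, "", ""],
    ["INV-002", "Bolt", "2024-11-12", 845.00, "2100", 800.00, 45.00, "", ""],
], columns=cols)

# Totals row for the numeric columns, as the prompt requests.
totals = {c: "" for c in cols}
totals.update({"Document ID": "Total",
               "Amount": log["Amount"].sum(),
               "Variance": log["Variance"].sum()})
log = pd.concat([log, pd.DataFrame([totals])], ignore_index=True)

# Render currency as EUR with 2 decimals, e.g. "€2,045.50".
log["Amount"] = log["Amount"].map(
    lambda v: f"€{v:,.2f}" if isinstance(v, float) else v)
```

Naming every column and the number format up front means the first output is review-ready instead of needing a formatting pass.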

Step 5: Break complex work into steps

Best practice

Decompose multi-part tasks into sequential steps.

Why this matters

Outline the workflow you expect the agent to follow (e.g. extract → compare → summarize). Sequential instructions improve execution quality and make intermediate results easier to review.

Example

“Extract invoice totals and invoice numbers from the selected documents into a table.

  1. Normalize invoice numbers (remove spaces/dashes).
  2. Match to GL entries by invoice number and vendor.
  3. Summarize match rate and list exceptions with reasons.”
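Steps 1 and 2, normalization and matching, are where decomposition pays off: each step can be checked before the next runs. A pandas sketch of those two steps plus the match-rate summary (invoice numbers and vendors are illustrative):

```python
# Sketch of the decomposed workflow: normalize keys, match to the GL,
# then summarize the match rate and list exceptions.
import re
import pandas as pd

def normalize(inv: str) -> str:
    """Strip spaces and dashes so 'INV 001' and 'INV-001' compare equal."""
    return re.sub(r"[\s-]", "", inv)

invoices = pd.DataFrame({"Invoice": ["INV 001", "INV-002", "INV-003"],
                         "Vendor": ["Acme", "Acme", "Bolt"]})
gl = pd.DataFrame({"Invoice": ["INV001", "INV002"],
                   "Vendor": ["Acme", "Acme"]})

invoices["Key"] = invoices["Invoice"].map(normalize)   # step 1
gl["Key"] = gl["Invoice"].map(normalize)

matched = invoices.merge(gl[["Key", "Vendor"]],        # step 2
                         on=["Key", "Vendor"], how="left", indicator=True)

match_rate = (matched["_merge"] == "both").mean()      # step 3: 2 of 3 match
exceptions = matched[matched["_merge"] == "left_only"]
```

Without the explicit normalization step, "INV 001" would silently fail to match "INV001" and inflate the exception list.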

Step 6: Iterate deliberately and maintain prompt versions

Best practice

Treat prompt development as an iterative process and retain versions.

Why this matters

Store prompts directly in the workbook (e.g. a dedicated worksheet or section). This creates an audit-friendly record and makes successful prompts easy to reuse and share. It also lets you capture learnings for colleagues: although only the final version is ultimately maintained, documenting the learning process provides valuable context for peers.

Example

“Prompts tab:

v1: initial extraction prompt

v2: added rule for posted transactions only + rationale

v3: added exception log format + self-check steps

Keep each version with a short note: ‘what changed’ and ‘why it improved’.”

Step 7: Use reasoning techniques intentionally

Best practice

Explicitly guide reasoning for complex analysis.

Why this matters

Structured reasoning improves accuracy, especially for analytical and comparison-based tasks. This principle is consistent across industry research on agent performance.

Ask the agent to reason in stages (summarize → compare → flag → explain). Avoid vague requests to “think step by step” without defining what those steps should be.

Example

“Work in these stages:

Summarize the population (counts, totals, time range).

Compare key totals to the control total.

Flag exceptions above threshold and categorize likely causes (timing, mapping, missing entries).

Produce a concise exception narrative suitable for a workpaper.”

Step 8: Develop internal enablement practices

Best practice

Actively invest in internal enablement around prompt usage when adopting Excel Agents.

Why this matters

Across the industry, teams that see sustained value from AI agents treat prompting as a shared capability rather than an individual skill. Without intentional enablement, effective prompts stay isolated with individual users, vary in quality, or are relearned repeatedly. Structured enablement accelerates adoption, improves output quality, and reduces variance across users and engagements.

Effective practices include:

  • Develop internal prompt champions

Identify power users who take ownership of crafting, refining, and validating prompts. These individuals act as a reference point for teams, help set standards, and ensure that prompts align with firm methodologies and assurance expectations.

  • Curate a shared prompt library

Maintain a collection of proven prompts within workbook templates. A shared library reduces duplicated effort and provides a starting point for new or infrequent users, while still allowing adaptation to engagement-specific needs.

  • Share best practices and learnings

Encourage teams to document what worked, what didn’t, and why. Sharing patterns, pitfalls, and improvements helps institutionalize prompt quality and builds confidence in agent-assisted workflows.