Prompt Engine

The Prompt Engine treats prompts as versioned software, not static text. It learns your team’s preferences and evolves its behavior over time.

The best prompt for a team changes as its codebase, conventions, and preferences evolve. A prompt that works for a Django monolith is wrong for a Go microservices codebase. Yet most teams cannot tune prompts themselves without expertise in prompt engineering.

The solution: user-customizable prompts that evolve from feedback automatically.

The constitution is your project’s non-negotiable rules. It is generated by `maina init` and stored at `.maina/constitution.md`.

```markdown
# Project Constitution

## Stack
- Runtime: Bun
- Language: TypeScript strict
- Lint/Format: Biome 2.x (NOT ESLint/Prettier)
- Test: bun:test (NOT Jest)
- Error handling: Result<T, E> pattern. Never throw.

## Architecture
- All DB access through repository layer
- API responses: { data, error, meta } envelope
- Feature modules are self-contained packages
```

The constitution is:

  • Injected as preamble into every AI call — the AI always knows your project’s rules
  • Stable DNA — not subject to A/B testing or prompt evolution
  • Per-project — different repos have different constitutions
  • Editable directly — update it when your project fundamentals change
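The preamble injection can be sketched in a few lines. This is an illustrative sketch, not Maina’s actual implementation: the function names and message shape are assumptions; only the constitution path comes from the docs.

```typescript
import { readFileSync } from "node:fs";

type Message = { role: "system" | "user"; content: string };

// Hypothetical helper: prepend the constitution to the task prompt so
// every AI call starts from the project's non-negotiable rules.
function buildMessages(
  taskPrompt: string,
  userInput: string,
  constitutionPath = ".maina/constitution.md",
): Message[] {
  let constitution = "";
  try {
    constitution = readFileSync(constitutionPath, "utf8");
  } catch {
    // No constitution yet (e.g. before `maina init`): use the task prompt alone.
  }
  const system = constitution
    ? `${constitution}\n\n---\n\n${taskPrompt}` // constitution always comes first
    : taskPrompt;
  return [
    { role: "system", content: system },
    { role: "user", content: userInput },
  ];
}
```

Because the constitution is injected at call time rather than baked into prompt versions, editing it takes effect immediately without touching any prompt’s hash.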

Drop markdown files in .maina/prompts/ to control AI behavior per task:

```
.maina/prompts/
  review.md   # What to focus on during code reviews
  tests.md    # How your team writes tests
  commit.md   # Commit message style and conventions
  plan.md     # Planning document structure
```

Custom prompts override Maina’s built-in defaults. Your review.md tells the AI exactly what your team cares about during reviews. Your tests.md teaches the AI how your team writes tests.
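The override rule is simple: if a custom prompt file exists for a task, it wins; otherwise the built-in default is used. A minimal sketch, assuming a flat map of built-in defaults (the default prompt texts here are invented placeholders):

```typescript
import { existsSync, readFileSync } from "node:fs";
import { join } from "node:path";

// Placeholder built-in prompts; Maina's real defaults differ.
const DEFAULT_PROMPTS: Record<string, string> = {
  review: "Review the diff for correctness and style.",
  tests: "Write tests covering the changed behavior.",
};

function resolvePrompt(task: string, promptsDir = ".maina/prompts"): string {
  const customPath = join(promptsDir, `${task}.md`);
  if (existsSync(customPath)) {
    return readFileSync(customPath, "utf8"); // custom prompt overrides the default
  }
  const fallback = DEFAULT_PROMPTS[task];
  if (fallback === undefined) throw new Error(`No prompt defined for task "${task}"`);
  return fallback;
}
```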

Edit prompts with the CLI:

```sh
maina prompt edit review   # Opens in $EDITOR
maina prompt edit tests    # Opens in $EDITOR
```

Every prompt — default and custom — is hashed. The Prompt Engine tracks per version:

  • Usage count — how many times the prompt was used
  • Accept rate — how often the AI output was accepted vs. rejected
  • Prompt hash — content-addressed versioning; change the content, get a new version

When a user accepts or rejects AI output, the outcome is recorded against the prompt hash that produced it.
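Content-addressed versioning and outcome tracking can be sketched as follows. The hash algorithm, truncation length, and stats record shape are assumptions for illustration, not Maina’s storage format:

```typescript
import { createHash } from "node:crypto";

// The version ID is a hash of the prompt text: change the content,
// get a new version automatically.
function promptHash(content: string): string {
  return createHash("sha256").update(content, "utf8").digest("hex").slice(0, 12);
}

type PromptStats = { uses: number; accepted: number };
const stats = new Map<string, PromptStats>();

// Record an accept/reject outcome against the hash of the prompt
// that produced the output.
function recordOutcome(promptContent: string, accepted: boolean): void {
  const hash = promptHash(promptContent);
  const s = stats.get(hash) ?? { uses: 0, accepted: 0 };
  s.uses += 1;
  if (accepted) s.accepted += 1;
  stats.set(hash, s);
}
```

Because outcomes key on the content hash, stats for an old version stay intact when the prompt is edited; the new text simply starts accumulating its own history.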

Feedback drives automatic prompt improvement.

```
Prompt v1
    |
    v
AI output --> Human accepts or rejects
    |
    v
Feedback stored with prompt hash
    |
    v
Every N interactions: maina learn analyzes patterns
    |
    v
AI proposes improved prompt v2
    |
    v
Developer reviews diff, approves / modifies / rejects
    |
    v
A/B test: 80% v2, 20% v1
    |
    v
If v2 outperforms --> promoted
If v1 outperforms --> v2 retired
```

The evolution loop is conservative:

  • Human approval required — no prompt changes without developer review
  • A/B testing — new prompts are tested against the incumbent at 80/20 split
  • Data-driven promotion — the version with the higher accept rate wins
  • Rollback-safe — rejected versions are retired, not deleted
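The 80/20 assignment and data-driven promotion can be sketched in a few lines. This is a sketch under assumptions (random per-call assignment, accept rate as the sole metric); Maina’s real scheduler may weigh sample sizes or use a different split mechanism:

```typescript
type Variant = { hash: string; uses: number; accepted: number };

// 80% of calls go to the challenger (v2), 20% to the incumbent (v1).
// `rand` is injectable so the split is testable.
function pickVariant(challenger: Variant, incumbent: Variant, rand = Math.random): Variant {
  return rand() < 0.8 ? challenger : incumbent;
}

function acceptRate(v: Variant): number {
  return v.uses === 0 ? 0 : v.accepted / v.uses;
}

// Higher accept rate wins; ties keep the incumbent (the conservative default).
function promote(challenger: Variant, incumbent: Variant): Variant {
  return acceptRate(challenger) > acceptRate(incumbent) ? challenger : incumbent;
}
```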

Every AI output marks ambiguous requirements with [NEEDS CLARIFICATION: specific question] instead of guessing.

This convention is enforced by the Prompt Engine — all default prompts include the instruction to mark ambiguity rather than assume. Custom prompts inherit this behavior unless explicitly overridden.
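Because the marker has a fixed shape, tooling can extract the open questions from any output and surface them to the developer. A minimal sketch (the function name is illustrative):

```typescript
// Pull every [NEEDS CLARIFICATION: ...] question out of an AI output.
function extractClarifications(output: string): string[] {
  const matches = output.matchAll(/\[NEEDS CLARIFICATION:\s*([^\]]+)\]/g);
  return [...matches].map((m) => m[1].trim());
}
```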

| Command | Description |
| --- | --- |
| `maina learn` | Analyze feedback patterns across all prompts. Proposes improvements for prompts with low accept rates. Manages active A/B tests. |
| `maina prompt edit <task>` | Open the custom prompt for a task in `$EDITOR`. Creates the file if it does not exist. |
| `maina prompt list` | Show all prompt tasks with version hashes, usage counts, and accept rates. |
```sh
# See which prompts are underperforming
maina prompt list

# Edit the review prompt to better match your team's needs
maina prompt edit review

# After several commits, analyze feedback and evolve
maina learn

# Track improvement
maina stats
```