# Prompt Engine
The Prompt Engine treats prompts as versioned software, not static text. It learns your team’s preferences and evolves its behavior over time.
## The Problem

The best prompt for a team changes as their codebase, conventions, and preferences evolve. A prompt that works for a Django monolith is wrong for a microservices Go codebase. Teams cannot tune prompts without understanding prompt engineering.
The solution: user-customizable prompts that evolve from feedback automatically.
## Constitution

The constitution is your project's non-negotiable rules. It is generated by `maina init` and stored at `.maina/constitution.md`.
```md
# Project Constitution

## Stack
- Runtime: Bun
- Language: TypeScript strict
- Lint/Format: Biome 2.x (NOT ESLint/Prettier)
- Test: bun:test (NOT Jest)
- Error handling: Result<T, E> pattern. Never throw.

## Architecture
- All DB access through repository layer
- API responses: { data, error, meta } envelope
- Feature modules are self-contained packages
```

The constitution is:
- Injected as preamble into every AI call — the AI always knows your project’s rules
- Stable DNA — not subject to A/B testing or prompt evolution
- Per-project — different repos have different constitutions
- Editable directly — update it when your project fundamentals change
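Preamble injection can be pictured as simple string assembly. This is a minimal sketch, not Maina's actual implementation; `loadConstitution` and `buildPrompt` are hypothetical names:

```typescript
// Sketch: injecting the constitution as a preamble into every AI call.
// Function names and the separator are illustrative assumptions.
import { existsSync, readFileSync } from "node:fs";

function loadConstitution(root = "."): string {
  const path = `${root}/.maina/constitution.md`;
  return existsSync(path) ? readFileSync(path, "utf8") : "";
}

// Every task prompt is prefixed with the project's non-negotiable rules,
// so the AI always sees them regardless of which task is running.
function buildPrompt(taskPrompt: string, constitution: string): string {
  if (!constitution) return taskPrompt;
  return `${constitution}\n\n---\n\n${taskPrompt}`;
}
```

Because the constitution is stable DNA, it sits outside the versioned-prompt machinery: changing it does not create a new prompt version or trigger an A/B test.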
## Custom Prompts

Drop markdown files in `.maina/prompts/` to control AI behavior per task:
```
.maina/prompts/
  review.md   # What to focus on during code reviews
  tests.md    # How your team writes tests
  commit.md   # Commit message style and conventions
  plan.md     # Planning document structure
```

Custom prompts override Maina's built-in defaults. Your `review.md` tells the AI exactly what your team cares about during reviews. Your `tests.md` teaches the AI how your team writes tests.
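The override rule amounts to "use the custom file if it exists, else the built-in default." A hedged sketch of that lookup, with illustrative names and defaults that are not Maina's real ones:

```typescript
// Sketch: resolving a task prompt, preferring a custom file over defaults.
// DEFAULT_PROMPTS contents and resolvePrompt are illustrative assumptions.
import { existsSync, readFileSync } from "node:fs";

const DEFAULT_PROMPTS: Record<string, string> = {
  review: "Review the diff for correctness and style.",
  tests: "Write unit tests for the changed code.",
};

function resolvePrompt(task: string, root = "."): string {
  const customPath = `${root}/.maina/prompts/${task}.md`;
  if (existsSync(customPath)) {
    return readFileSync(customPath, "utf8"); // custom prompt wins
  }
  const fallback = DEFAULT_PROMPTS[task];
  if (fallback === undefined) throw new Error(`unknown task: ${task}`);
  return fallback;
}
```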
Edit prompts with the CLI:

```sh
maina prompt edit review   # Opens in $EDITOR
maina prompt edit tests    # Opens in $EDITOR
```

## Prompt Versioning

Every prompt — default and custom — is hashed. The Prompt Engine tracks per version:
- Usage count — how many times the prompt was used
- Accept rate — how often the AI output was accepted vs. rejected
- Prompt hash — content-addressed versioning; change the content, get a new version
When a user accepts or rejects AI output, the outcome is recorded against the prompt hash that produced it.
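Content-addressed versioning plus per-hash stats can be sketched in a few lines. This is an assumption-laden illustration (the hash length, field names, and in-memory store are all invented for the example), not Maina's storage format:

```typescript
// Sketch: content-addressed prompt versions with per-hash outcome stats.
import { createHash } from "node:crypto";

interface PromptStats {
  uses: number;    // how many times this version was used
  accepts: number; // how often its output was accepted
}

const stats = new Map<string, PromptStats>();

// Change the prompt content and you get a new hash, i.e. a new version.
function promptHash(content: string): string {
  return createHash("sha256").update(content).digest("hex").slice(0, 12);
}

// Record an accept/reject against the version that produced the output.
function recordOutcome(content: string, accepted: boolean): void {
  const hash = promptHash(content);
  const s = stats.get(hash) ?? { uses: 0, accepts: 0 };
  s.uses += 1;
  if (accepted) s.accepts += 1;
  stats.set(hash, s);
}

function acceptRate(content: string): number {
  const s = stats.get(promptHash(content));
  return s && s.uses > 0 ? s.accepts / s.uses : 0;
}
```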
## Evolution Loop

Feedback drives automatic prompt improvement.
```
Prompt v1
   |
   v
AI output --> Human accepts or rejects
   |
   v
Feedback stored with prompt hash
   |
   v
Every N interactions: maina learn analyzes patterns
   |
   v
AI proposes improved prompt v2
   |
   v
Developer reviews diff, approves/modifies/rejects
   |
   v
A/B test: 80% v2, 20% v1
   |
   v
If v2 outperforms --> promoted
If v1 outperforms --> v2 retired
```

The evolution loop is conservative:
- Human approval required — no prompt changes without developer review
- A/B testing — new prompts are tested against the incumbent at 80/20 split
- Data-driven promotion — the version with the higher accept rate wins
- Rollback-safe — rejected versions are retired, not deleted
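The 80/20 split and the promotion rule are easy to state in code. A sketch under stated assumptions (the function names and the strict-inequality tie-break toward the incumbent are illustrative, not Maina's documented behavior):

```typescript
// Sketch: 80/20 A/B routing and data-driven promotion.
interface VersionStats {
  uses: number;
  accepts: number;
}

// Route 80% of calls to the challenger v2, 20% to the incumbent v1.
// `rand` is a number in [0, 1), e.g. from Math.random().
function pickVersion(rand: number): "v1" | "v2" {
  return rand < 0.8 ? "v2" : "v1";
}

// The version with the higher accept rate wins; ties keep the incumbent,
// so a challenger must strictly outperform to be promoted.
function decide(v1: VersionStats, v2: VersionStats): "promote-v2" | "retire-v2" {
  const rate = (s: VersionStats) => (s.uses > 0 ? s.accepts / s.uses : 0);
  return rate(v2) > rate(v1) ? "promote-v2" : "retire-v2";
}
```

Retired versions keep their hash and stats, which is what makes the loop rollback-safe: a retired prompt can be re-tested later without losing its history.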
## [NEEDS CLARIFICATION] Markers

Every AI output uses [NEEDS CLARIFICATION: specific question] for ambiguous requirements. The AI never guesses.
This convention is enforced by the Prompt Engine — all default prompts include the instruction to mark ambiguity rather than assume. Custom prompts inherit this behavior unless explicitly overridden.
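A tool consuming AI output can surface these markers mechanically. A minimal sketch, assuming the bracketed format shown above; `extractClarifications` is a hypothetical helper, not part of Maina:

```typescript
// Sketch: pulling [NEEDS CLARIFICATION: ...] questions out of AI output
// so they can be shown to the developer instead of being silently ignored.
function extractClarifications(output: string): string[] {
  const marker = /\[NEEDS CLARIFICATION:\s*([^\]]+)\]/g;
  return [...output.matchAll(marker)].map((m) => m[1].trim());
}
```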
## Commands

| Command | Description |
|---|---|
| `maina learn` | Analyze feedback patterns across all prompts. Proposes improvements for prompts with low accept rates. Manages active A/B tests. |
| `maina prompt edit <task>` | Open the custom prompt for a task in `$EDITOR`. Creates the file if it does not exist. |
| `maina prompt list` | Show all prompt tasks with version hashes, usage counts, and accept rates. |
## Example workflow

```sh
# See which prompts are underperforming
maina prompt list

# Edit the review prompt to better match your team's needs
maina prompt edit review

# After several commits, analyze feedback and evolve
maina learn

# Track improvement
maina stats
```