Persona B — AI Agent Platform
Scenario
You ship a ChatGPT plugin / Claude tool / Mistral function that helps users find cross-border products. The agent receives natural-language queries in any language, must route them to the correct commerce inventory, and must not hallucinate categories.
What mogan-i18n replaces
- LLM-as-translator (slow, non-deterministic, hallucinates categories)
- Custom embedding pipelines per locale (storage + recompute cost)
- Brittle prompt engineering for category disambiguation
Working code (function calling)
```json
{
  "type": "function",
  "function": {
    "name": "resolve_commerce_intent",
    "description": "Map user query to canonical commerce category slug",
    "parameters": {
      "type": "object",
      "properties": {
        "user_query": { "type": "string" },
        "user_locale": { "type": "string", "enum": ["us","de","fr","jp","kr","tw","cn","br","es","it"] }
      },
      "required": ["user_query","user_locale"]
    }
  }
}
```

Then implement `resolve_commerce_intent` by calling `/v1/lookup`. The agent gets back a canonical slug plus a confidence score, which it passes to the downstream inventory tool deterministically.
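A minimal sketch of the tool handler in Python, using only the standard library. The base URL, request payload keys (`q`, `locale`), and response fields (`slug`, `confidence`, `_provenance`) are assumptions based on this document's description of `/v1/lookup`; substitute the names your deployment actually uses. The `fetch` parameter is injectable so the handler can be exercised without a live endpoint.

```python
import json
import urllib.request

MOGAN_BASE = "https://api.example.com"  # assumed base URL; replace with your deployment


def resolve_commerce_intent(user_query: str, user_locale: str, fetch=None) -> dict:
    """Tool handler: map a natural-language query to a canonical category slug.

    Assumed payload/response field names; adjust to your actual /v1/lookup schema.
    `fetch` can be injected for testing; by default it POSTs JSON to /v1/lookup.
    """
    payload = json.dumps({"q": user_query, "locale": user_locale}).encode()
    if fetch is None:
        def fetch(body):
            req = urllib.request.Request(
                MOGAN_BASE + "/v1/lookup",
                data=body,
                headers={"Content-Type": "application/json"},
            )
            with urllib.request.urlopen(req, timeout=2) as resp:
                return json.loads(resp.read())
    result = fetch(payload)
    # Forward only the fields the downstream inventory tool needs.
    return {
        "slug": result["slug"],
        "confidence": result["confidence"],
        "_provenance": result.get("_provenance", {}),
    }
```

Because the handler returns a plain dict, the agent runtime can hand the slug straight to the next tool call without any LLM round trip in between.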
Why AI agents specifically benefit
- Determinism: same input → same canonical slug, every time
- Audit trail: each lookup includes _provenance metadata
- Cost: $0.0001 per lookup vs $0.01+ per LLM translation call
- Latency: < 5 ms vs 500 ms to 2 s for an LLM call
- Compliance: PII pre-scrub on dict-grow ingestion
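The determinism and audit-trail points above can be sketched as a small wrapper that turns each lookup response into an audit record. This is an illustrative helper, not part of mogan-i18n: the field names (`slug`, `confidence`, `_provenance`) follow this document's description, and the stable request key simply demonstrates that identical (query, locale) inputs index the same record.

```python
import hashlib


def audit_record(user_query: str, user_locale: str, lookup_response: dict) -> dict:
    """Build an audit-trail entry from a /v1/lookup response (hypothetical helper).

    Field names are assumed from this document; adjust to your response schema.
    """
    # Same query + locale always hashes to the same key, reflecting the
    # determinism guarantee: same input -> same canonical slug.
    key = hashlib.sha256(f"{user_locale}:{user_query}".encode()).hexdigest()[:16]
    return {
        "request_key": key,
        "slug": lookup_response["slug"],
        "confidence": lookup_response["confidence"],
        "provenance": lookup_response.get("_provenance", {}),
    }
```

Persisting these records gives each downstream inventory decision a traceable origin, which is what makes the lookups auditable in a way ad-hoc LLM translation is not.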
Full source: /use-cases.md §Persona B (verbatim from spec, lines 104-208)