# mogan-i18n Use Cases — 3 Personas with Working Code

**Audience**: developer choosing whether to integrate mogan-i18n
**Source-of-truth**: this doc renders at `https://mogan-i18n.com/docs/use-cases`
**Spec ref**: MOGAN-I18N-SAAS-v1 §8 (3 personas)

Each persona has: scenario, code (Python + Node), expected behavior, what mogan-i18n replaces.

---

## Persona A — Commerce Platform Developer

### Scenario

You build a Shopify app that helps merchants sell internationally. Users search in their own language, but your product DB is English-keyed. Generic translation mangles brand names and loses category nuance. You need to resolve cross-language commerce intent in real time during search.

### What mogan-i18n replaces

- Building per-language SEO content
- Off-the-shelf translation API (Google Translate / DeepL) which loses commerce specificity
- Manual curation of brand-name passthrough lists
- Custom regex for "removedor de manchas" → "skin care spot remover"

### Working code (Python + Node)

**Python**:

```python
import os
import requests
from typing import Optional

API = "https://api.mogan-i18n.com"
KEY = os.environ["MOGAN_I18N_KEY"]

def resolve_search_to_product_db(user_query: str, user_locale: str) -> Optional[dict]:
    """User searches in their language → canonical slug → query your product DB."""
    r = requests.get(
        f"{API}/v1/lookup",
        params={"keyword": user_query, "locale": user_locale},
        headers={"x-api-key": KEY},
        timeout=5,
    )
    if r.status_code == 404:
        # No canonical match — fall back to fuzzy search in your DB
        return None
    r.raise_for_status()
    canonical = r.json()
    # Now query your product DB by canonical_slug
    return query_my_products_by_canonical(canonical["canonical_slug"])

# Example usage
results = resolve_search_to_product_db(
    user_query="シミ取り",
    user_locale="jp"
)
# canonical_slug = "skin_care_spot_remover" → use in your product DB query
```

**Node**:

```javascript
// Node 18+ — uses the built-in fetch, no node-fetch dependency needed.

const API = 'https://api.mogan-i18n.com';
const KEY = process.env.MOGAN_I18N_KEY;

async function resolveSearchToProductDb(userQuery, userLocale) {
  const url = new URL(`${API}/v1/lookup`);
  url.searchParams.set('keyword', userQuery);
  url.searchParams.set('locale', userLocale);

  const r = await fetch(url, {
    headers: { 'x-api-key': KEY },
    signal: AbortSignal.timeout(5000), // 5s timeout
  });

  if (r.status === 404) return null; // no canonical match — fall back to fuzzy search
  if (!r.ok) throw new Error(`mogan-i18n ${r.status}`);

  const canonical = await r.json();
  return queryMyProductsByCanonical(canonical.canonical_slug);
}

// Example (top-level await requires an ES module)
const results = await resolveSearchToProductDb('máquina de lavar', 'br');
// canonical_slug = "washing_machine"
```

### Expected behavior

- p50 latency: 3-5ms server-side + network roundtrip (~30-100ms total depending on caller location)
- 9 production-ready locales: us / de / fr / it / es / br / jp / kr / tw
- 1 partial locale: mx (68.5% data via cross-locale lift from es)
- 5 in-progress: se / vn / pl / id / tr (5-10% coverage, returns 200 if hit, 404 if miss)
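
Because `mx` lifts most of its data from `es`, a client-side fallback chain (try `mx`, retry as `es` on a miss) recovers much of the coverage gap. A minimal sketch — the chain is a client-side convention, not API behavior, and `lookup_fn` is an injected stand-in for the HTTP call:

```python
from typing import Callable, Optional

# Fallback chains for partial locales. The mx → es lift is described above;
# which chains make sense for your catalog is your call.
FALLBACK = {"mx": ["mx", "es"]}

def lookup_with_fallback(keyword: str, locale: str,
                         lookup_fn: Callable[[str, str], Optional[dict]]) -> Optional[dict]:
    """Try each locale in the chain; lookup_fn returns a dict on hit, None on 404."""
    for loc in FALLBACK.get(locale, [locale]):
        result = lookup_fn(keyword, loc)
        if result is not None:
            return result
    return None
```

Locales without a chain entry behave exactly as before: one lookup, one miss.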

### Tiering & cost

- **Free tier**: 10,000 requests/day → covers ~10K cart sessions/day assuming 1 lookup per cart
- Beyond free: contact for early access (Phase 2 pricing)
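
Commerce keywords repeat heavily across sessions, so a small client-side TTL cache stretches the daily budget considerably. A sketch (the cache size-free design and 1-hour TTL are illustrative choices, not recommendations; `fetch_fn` stands in for the HTTP call):

```python
import time
from typing import Callable, Optional

class TTLCache:
    """Tiny in-process cache so repeated keywords don't burn daily quota."""
    def __init__(self, ttl_seconds: float = 3600.0):
        self.ttl = ttl_seconds
        self._store: dict[tuple[str, str], tuple[float, Optional[dict]]] = {}

    def lookup(self, keyword: str, locale: str,
               fetch_fn: Callable[[str, str], Optional[dict]]) -> Optional[dict]:
        key = (keyword, locale)
        entry = self._store.get(key)
        if entry is not None and time.monotonic() - entry[0] < self.ttl:
            return entry[1]  # cache hit (404 misses are cached too, as None)
        result = fetch_fn(keyword, locale)
        self._store[key] = (time.monotonic(), result)
        return result
```

Caching 404s as `None` matters: miss-heavy traffic (typos, in-progress locales) would otherwise re-spend quota on the same misses.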

---

## Persona B — AI Agent Platform

### Scenario

You build a custom GPT or a Claude tool-use agent that helps users buy international goods. A user asks to "buy 日本除斑面膜" (a Japanese spot-remover sheet mask) in mixed Japanese-Chinese. Your agent needs the canonical product type for downstream catalog search.

### What mogan-i18n replaces

- Hallucinated product taxonomy
- Multilingual prompt engineering for "translate this commerce term"
- Hand-curated per-language product-type lists

### Working code (function calling)

**OpenAI function spec**:

```json
{
  "type": "function",
  "function": {
    "name": "resolve_commerce_intent",
    "description": "Resolve cross-language commerce search to canonical product taxonomy. Use this BEFORE searching downstream catalogs.",
    "parameters": {
      "type": "object",
      "properties": {
        "keyword": {
          "type": "string",
          "description": "User's commerce keyword in their source language. Pass exactly as user typed."
        },
        "locale": {
          "type": "string",
          "enum": ["us", "de", "fr", "it", "es", "br", "jp", "kr", "tw", "mx"],
          "description": "Source locale of the keyword. If user mixes languages, pick dominant locale."
        }
      },
      "required": ["keyword", "locale"]
    }
  }
}
```

**Function handler (Python)**:

```python
import os
import requests

def resolve_commerce_intent(keyword: str, locale: str) -> dict:
    r = requests.get(
        "https://api.mogan-i18n.com/v1/lookup",
        params={"keyword": keyword, "locale": locale},
        headers={"x-api-key": os.environ["MOGAN_I18N_KEY"]},
        timeout=5,
    )
    if r.status_code == 404:
        return {"canonical_slug": None, "fallback": "use_english_query_directly"}
    r.raise_for_status()
    return r.json()
```
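
Wiring the handler into an agent loop is just JSON argument parsing plus a name → function registry. A framework-agnostic sketch (the `tool_call` shape mirrors OpenAI's chat-completions tool-call payload; the registry dict is an assumption of this sketch, not part of any SDK):

```python
import json
from typing import Callable

def dispatch_tool_call(tool_call: dict, handlers: dict[str, Callable]) -> str:
    """Execute one model-issued tool call and return a JSON string
    suitable for the tool-result message sent back to the model."""
    name = tool_call["function"]["name"]
    args = json.loads(tool_call["function"]["arguments"])  # arguments arrive as a JSON string
    result = handlers[name](**args)
    return json.dumps(result)

# Usage: dispatch_tool_call(call, {"resolve_commerce_intent": resolve_commerce_intent})
```

The 404 branch in the handler above matters here: returning a structured fallback (`"use_english_query_directly"`) instead of raising lets the model decide its next step rather than crashing the loop.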

**Claude tool-use schema** (Anthropic SDK):

```python
from anthropic import Anthropic

client = Anthropic()

tools = [{
    "name": "resolve_commerce_intent",
    "description": "Cross-language commerce taxonomy resolution. 9 production locales.",
    "input_schema": {
        "type": "object",
        "properties": {
            "keyword": {"type": "string"},
            "locale": {"type": "string", "enum": ["us","de","fr","it","es","br","jp","kr","tw","mx"]}
        },
        "required": ["keyword", "locale"]
    }
}]

response = client.messages.create(
    model="claude-sonnet-4-6",
    max_tokens=1024,
    tools=tools,
    messages=[{"role": "user", "content": "Buy 日本除斑面膜 for me"}],
)
# Claude calls resolve_commerce_intent(keyword="除斑面膜", locale="jp")
# → canonical_slug = "skin_care_spot_remover_sheet_mask" (or similar)
# → agent then queries downstream catalog with normalized term
```

### Expected behavior

- Agent calls function → mogan-i18n resolves → agent uses canonical_slug for catalog search
- Replaces hallucinated taxonomy guesses ("the user might mean...") with deterministic resolution
- Supports DICT-GROW: agent's miss queries improve everyone's dictionary nightly

### Why AI agents specifically benefit

1. **Function-callable**: clean schema, deterministic output
2. **Sub-5ms server latency**: cheap enough to call in-loop without blowing the agent's latency budget
3. **Provenance metadata** (`_provenance.source`): the agent can explain "I resolved this via mogan-i18n's curated dictionary (source=manual-seed)" if the user asks

---

## Persona C — Translation Tooling

### Scenario

You build an e-commerce localization SaaS. Off-the-shelf MT (DeepL / Google) translates word-by-word. You need commerce-vertical taxonomy with brand-name preservation, category nuance, and back-translation validation built in.

### What mogan-i18n replaces

- Per-vertical curation of commerce terms
- Brand-name passthrough exception lists
- Manual back-translation validation
- Per-locale SEO content writers

### Working code (Python pipeline)

```python
import os
import requests

KEY = os.environ["MOGAN_I18N_KEY"]
API = "https://api.mogan-i18n.com"

def enrich_translation_pipeline(source_text: str, source_locale: str) -> dict:
    """
    Layer 1: try mogan-i18n exact-match (commerce-curated dictionary)
    Layer 2: fall back to your existing MT for non-commerce text
    Layer 3: enrich MT output with commerce taxonomy where available
    """
    # Tokenize source_text into commerce term candidates (your tokenizer)
    candidates = tokenize_commerce_terms(source_text)
    enriched = {}
    for term in candidates:
        r = requests.get(
            f"{API}/v1/lookup",
            params={"keyword": term, "locale": source_locale},
            headers={"x-api-key": KEY},
            timeout=5,
        )
        if r.status_code == 200:
            enriched[term] = r.json()  # canonical_slug + type + provenance

    # Pass these tags to your existing MT as hints before translating
    mt_output = your_existing_mt(source_text, source_locale, enriched)
    return mt_output

# Reverse pipeline: take a canonical slug, get all locale variants
def expand_canonical_to_locales(canonical_slug: str) -> dict:
    """E.g., feed your customer 'refrigerator' canonical → get all SEO variants."""
    r = requests.get(
        f"{API}/v1/canonical/{canonical_slug}",
        headers={"x-api-key": KEY},
        timeout=5,
    )
    r.raise_for_status()
    return r.json()  # variants object keyed by locale
```

### Expected behavior

- Commerce-specific terms get correct disambiguation (`tênis` → `sneakers` not `tennis`, `geladeira` → `refrigerator` not literal "icebox")
- Brand names preserved (`apple iphone` → `apple_iphone`, not "fruit phone")
- Back-translation Jaccard provided per entry for quality scoring
- 25K+ leaf categories mapped to Google Product Taxonomy

### Pipeline integration sketch

```
Source text (es-MX)
    ↓
Tokenize commerce candidates → ["refrigerador", "samsung evolution"]
    ↓
mogan-i18n /v1/lookup (parallel)
    ↓
Tagged enrichment:
  refrigerador → canonical_slug=refrigerator, type=category
  samsung evolution → canonical_slug=samsung_refrigerator_evolution, type=integrated
    ↓
Your existing MT runs WITH commerce tags as hints
    ↓
Output: machine-translated text + structured commerce metadata
```
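
The "(parallel)" lookup step above can be a plain thread pool, since the calls are independent, I/O-bound HTTP requests. A minimal sketch with the lookup function injected (`lookup_fn` is a stand-in for the HTTP call; the worker count is illustrative):

```python
from concurrent.futures import ThreadPoolExecutor
from typing import Callable, Optional

def lookup_parallel(terms: list[str], locale: str,
                    lookup_fn: Callable[[str, str], Optional[dict]],
                    max_workers: int = 8) -> dict:
    """Fan out lookups across threads; keep only hits (misses return None)."""
    with ThreadPoolExecutor(max_workers=max_workers) as pool:
        results = pool.map(lambda t: (t, lookup_fn(t, locale)), terms)
        return {term: hit for term, hit in results if hit is not None}
```

With server-side p50 in the low milliseconds, the wall-clock cost of a batch is roughly one network round trip instead of one per term.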

---

## Comparison matrix — when to choose mogan-i18n

| Need | mogan-i18n | Google Translate | DeepL | Build in-house |
|---|---|---|---|---|
| Commerce-specific term disambiguation | ✅ native | ❌ generic only | ❌ generic only | months of curation |
| Brand name preservation | ✅ tagged as `brand` | ⚠️ inconsistent | ⚠️ inconsistent | manual lists |
| Back-translation validation | ✅ Jaccard per entry | ❌ no | ❌ no | wrap MT in pipeline |
| Self-evolving (auto-learn from misses) | ✅ DICT-GROW nightly | ❌ no | ❌ no | infrastructure heavy |
| AI provenance metadata | ✅ per-entry source/model tags | ❌ no | ❌ no | DIY |
| Cross-locale variants (one canonical → 9 locales) | ✅ /v1/canonical | ❌ no equivalent | ❌ no equivalent | DIY |
| Sub-5ms latency | ✅ Redis-backed | ⚠️ ~50-200ms | ⚠️ ~50-200ms | depends on cache |
| Free tier for evaluation | ✅ 10K/day | ⚠️ limited | ⚠️ limited | n/a |
| Cost beyond free | TBD Phase 2 (currently free) | $20/M chars | $25/M chars | infra + curation $$$$ |

---

## Honest limitations

mogan-i18n is **not a general-purpose translation API**:

- ❌ Don't use for free-form text (essays, articles, conversations)
- ❌ Don't use for non-commerce vocabulary (medical terms, legal contracts, poetry)
- ❌ Don't expect new languages immediately — currently 9 production locales, 5 in progress, 49 designed
- ❌ Tier 2 (semantic embedding) and Tier 3 (LLM-assisted disambiguation) are Phase 2 — currently 503 stubs

For those use cases, use mogan-i18n for the commerce terms only and pair it with a general-purpose MT for the surrounding text.

---

## Pricing & SLA (current state, honest)

- **Today**: Free tier (10K requests/day) is the only option. No paid tier yet.
- **SLA**: 99.5% target, no contractual SLA credits today
- **Phase 2 pricing** (TBD): paid tiers planned, no published prices yet
- **Enterprise / white-label / SLA-backed**: Phase 3, contact for early access

We're in the pre-monetization phase. Use freely for evaluation. Feedback channel: TBD email.

---

## Next steps

1. Try sandbox without signup: see [Quickstart](MOGAN-I18N-SAAS-v1-quickstart.md) Step 1
2. Sign up for free tier: see Quickstart Step 2
3. Review full API spec: see [OpenAPI 3.1 YAML](MOGAN-I18N-SAAS-v1-openapi.yaml)
4. Read main spec: see [MOGAN-I18N-SAAS-v1.draft.md](MOGAN-I18N-SAAS-v1.draft.md)

---

— mogan-i18n Use Cases v0 · 2026-05-02 · 3 personas with working code
